id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
bigbio/bionlp_shared_task_2009 | 2022-12-22T15:43:48.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The BioNLP Shared Task 2009 was organized by GENIA Project and its corpora were curated based
on the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section
of the GENIA Event corpus annotations, used for evaluation. | @inproceedings{kim-etal-2009-overview,
title = "Overview of {B}io{NLP}{'}09 Shared Task on Event Extraction",
author = "Kim, Jin-Dong and
Ohta, Tomoko and
Pyysalo, Sampo and
Kano, Yoshinobu and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop Companion Volume for Shared Task",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1401",
pages = "1--9",
} | null | 0 | 63 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2009
homepage: http://www.geniaproject.org/shared-tasks/bionlp-shared-task-2009
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- EVENT_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2009
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-shared-task-2009
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE,COREF
The BioNLP Shared Task 2009 was organized by the GENIA Project. Its corpora were curated based
on the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section
of the GENIA Event corpus annotations, which was used for evaluation.
## Citation Information
```
@inproceedings{kim-etal-2009-overview,
title = "Overview of {B}io{NLP}{'}09 Shared Task on Event Extraction",
author = "Kim, Jin-Dong and
Ohta, Tomoko and
Pyysalo, Sampo and
Kano, Yoshinobu and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop Companion Volume for Shared Task",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1401",
pages = "1--9",
}
```
|
CarperAI/pile-v2-small-filtered | 2022-12-06T14:16:11.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:en",
"language:code",
"region:us"
] | CarperAI | null | null | null | 8 | 63 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["en","code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [pile-v2]() dataset: each subset of the original dataset contributes ~1,000 random samples. In total, the dataset contains 255 MB of text (code and English).
## Languages
The dataset contains technical text on programming languages and natural language, organized into the following subsets:
- Bible
- TED2020
- PileOfLaw
- StackExchange
- GithubIssues
- Opensubtitles
- USPTO
- S2ORC
- DevDocs
- CodePileReddit2022
- USENET
- GNOME
- ASFPublicMail
- PileV2Reddit2020
- CodePilePosts
- Discourse
- Tanzil
- arXiv
- UbuntuIRC
- PubMed
- CodePileReddit2020
- CodePileReddit2021
- GlobalVoices
- FreeLaw_Options
- PileV2Posts
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("CarperAI/pile-v2-small")
```
### How to use it
You can either load the whole dataset as above, or load a specific subset such as arXiv by specifying its data directory:
```python
load_dataset("CarperAI/pile-v2-small", data_dir="data/arxiv")
```
|
ola13/small-the_pile-dedup | 2022-12-07T08:28:01.000Z | [
"region:us"
] | ola13 | null | null | null | 0 | 63 | Entry not found |
qwedsacf/ivypanda-essays | 2023-02-03T21:05:11.000Z | [
"region:us"
] | qwedsacf | null | null | null | 3 | 63 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Ivypanda essays
## Dataset Description
- **Homepage:** https://laion.ai/
### Dataset Summary
This dataset contains essays from [ivypanda](https://ivypanda.com/essays/).
## Dataset Structure
### Data Fields
`TEXT`: The text of the essay.<br/>
`SOURCE`: A permalink to the ivypanda essay page
|
nlphuji/whoops | 2023-08-18T23:06:45.000Z | [
"annotations_creators:crowdsourced",
"language_creators:found",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"commonsense-reasoning",
"explanation-generation",
"visual-commonsense-reasoning",
"compositionality",
"image-generation",
"visual-question-answering(VQA)",
... | nlphuji | null | null | null | 11 | 63 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
paperswithcode_id: whoops
pretty_name: WHOOPS!
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- commonsense-reasoning
- explanation-generation
- visual-commonsense-reasoning
- compositionality
- image-generation
- visual-question-answering(VQA)
- question-answering
- image-captioning
task_ids: []
# dataset files.
extra_gated_prompt: >-
# By clicking “Access repository“ below, you assert your intention to exclusively use this resource for research, not for commercial chatbot development, and agree to abide by the terms detailed in the [WHOOPS! license](https://whoops-benchmark.github.io/static/pdfs/whoops_license_agreement.txt). You may also view all instances through the [WHOOPS! Explorer](https://huggingface.co/spaces/nlphuji/whoops-explorer-full) and consult the accompanying [WHOOPS! Dataset card](https://huggingface.co/spaces/nlphuji/whoops-explorer-full/blob/main/README.md) prior to acceptance. If you are unsure about your specific case - do not hesitate to reach out: yonatanbitton1@gmail.com.
By clicking “Access repository” below, you confirm your understanding that for commercial models, this resource is permitted for use as a test set, but not as a training set. Please ensure adherence to the terms detailed in the [WHOOPS! license](https://whoops-benchmark.github.io/static/pdfs/whoops_license_agreement.txt). You may view all instances via the [WHOOPS! Explorer](https://huggingface.co/spaces/nlphuji/whoops-explorer-full) and refer to the [WHOOPS! Dataset card](https://huggingface.co/spaces/nlphuji/whoops-explorer-full/blob/main/README.md) prior to acceptance. If you are unsure about your specific case, don't hesitate to contact: yonatanbitton1@gmail.com.
---
# Dataset Card for WHOOPS!
- [Dataset Description](#dataset-description)
- [Contribute Images to Extend WHOOPS!](#contribute-images-to-extend-whoops)
- [Languages](#languages)
- [Dataset](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Data Loading](#data-loading)
- [Licensing Information](#licensing-information)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Citation Information](#citation-information)
## Dataset Description
WHOOPS! is a dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers using publicly available image-generation tools like Midjourney. The images defy commonsense for a wide range of reasons, including deviations from expected social norms and everyday knowledge.
The WHOOPS! benchmark includes four tasks:
1. A novel explanation-of-violation task: generating a detailed explanation of what makes the image weird.
2. Generating a literal caption.
3. Distinguishing between detailed and underspecified captions.
4. Answering questions that test compositional understanding.
The results show that state-of-the-art models such as GPT-3 and BLIP-2 still lag behind human performance on WHOOPS!.
* Homepage: https://whoops-benchmark.github.io/
* Paper: https://arxiv.org/pdf/2303.07274.pdf
* WHOOPS! Explorer: https://huggingface.co/spaces/nlphuji/whoops-explorer-full
* Normal vs. Weird Explorer: https://huggingface.co/spaces/nlphuji/whoops-explorer-analysis
* Point of Contact: yonatanbitton1@gmail.com
[//]: # (Colab notebook code for WHOOPS evaluation )
## Contribute Images to Extend WHOOPS!
Would you like to add a commonsense-defying image to our database? Please send candidate images to yonatanbitton1@gmail.com. Thanks!
### Languages
English.
## Dataset
### Data Fields
image (image) - The weird image.
designer_explanation (string) - A detailed single-sentence explanation given by the designer of why the image is weird.
selected_caption (string) - The caption selected from the crowd-collected captions.
crowd_captions (list) - Crowd-collected captions describing what is seen in the image.
crowd_explanations (list) - Crowd-collected single-sentence explanations of why the image is weird.
crowd_underspecified_captions (list) - Crowd-collected underspecified captions describing what is seen in the image, without mentioning the commonsense violation.
question_answering_pairs (list) - Automatically generated Q-A pairs. FlanT5 XL was used to answer the questions and filter out instances where the BEM metric is above 0.1.
commonsense_category (string) - The commonsense category the image relates to (the full category list can be found in the [paper](https://arxiv.org/pdf/2303.07274.pdf)).
image_id (string) - The unique id of the image in the dataset.
image_designer (string) - The name of the image designer.
### Data Splits
There is a single TEST split.
Although WHOOPS! is primarily intended as a challenging test set, we also trained on it to demonstrate the value of the data and to create a better model.
We will provide these splits in the future.
### Data Loading
You can load the data as follows (credit to [Winoground](https://huggingface.co/datasets/facebook/winoground)):
```python
from datasets import load_dataset
examples = load_dataset('nlphuji/whoops', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
You can get `<YOUR USER ACCESS TOKEN>` by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens"
5) generate an access token
## Licensing Information
[CC-By 4.0](https://creativecommons.org/licenses/by/4.0/)
Additional license information: [license_agreement.txt](https://huggingface.co/datasets/nlphuji/whoops/blob/main/license_agreement.txt)
You may also view all instances through the [WHOOPS! Explorer](https://huggingface.co/spaces/nlphuji/whoops-explorer-full) and consult the accompanying [WHOOPS! Dataset card](https://huggingface.co/spaces/nlphuji/whoops-explorer-full/blob/main/README.md).
1. **Purpose:** The dataset was primarily designed for use as a test set.
2. **Commercial Use:** Commercially, the dataset may be used as a test set, but it's prohibited to use it as a training set.
3. **Rights on Images:** All rights to the images within the dataset are retained by the WHOOPS! authors.
If you are unsure about your specific case - do not hesitate to reach out: yonatanbitton1@gmail.com.
[//]: # (To evaluate WHOOPS! with a fine-tune BLIP2, we split the images in WHOOPS! into 5 cross- validation splits. For these 5 splits independently, we train supervised models using 60% of the data as training, 20% as validation, and 20% for test.)
## Annotations
We paid designers to create images and to supply an explanation of what makes each image weird.
We paid Amazon Mechanical Turk Workers to supply explanations, captions and under-specified captions for each image in our dataset.
## Considerations for Using the Data
We took measures to filter out potentially harmful or offensive images and texts in WHOOPS!, but it is still possible that some individuals may find certain content objectionable.
If you come across any instances of harm, please report them to our point of contact. We will review and eliminate any images from the dataset that are deemed harmful.
[//]: # (All images, explanations, captions and under-specified captions were obtained with human annotators.)
### Citation Information
```bibtex
@article{bitton2023breaking,
  title={Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images},
  author={Bitton-Guetta, Nitzan and Bitton, Yonatan and Hessel, Jack and Schmidt, Ludwig and Elovici, Yuval and Stanovsky, Gabriel and Schwartz, Roy},
  journal={arXiv preprint arXiv:2303.07274},
  year={2023}
}
``` |
MU-NLPC/Calc-math_qa | 2023-10-07T21:24:18.000Z | [
"license:apache-2.0",
"arxiv:2305.15017",
"arxiv:1905.13319",
"region:us"
] | MU-NLPC | null | null | null | 2 | 63 | ---
license: apache-2.0
---
# Dataset Card for "Calc-math_qa"
## Summary
This dataset is an instance of the math_qa dataset, converted to a simple HTML-like language that can be easily parsed (e.g., by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer of the mathematical problem (a number)
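The three tags above can be extracted with any HTML parser. A minimal sketch using only Python's standard library (the `CHAIN` string is a hypothetical example row, not taken from the dataset):

```python
from html.parser import HTMLParser

# Hypothetical chain; real rows live in the dataset's `chain` column.
CHAIN = 'Total cost: <gadget id="calculator">12 * 4</gadget> <output>48</output> <result>48</result>'

class ChainParser(HTMLParser):
    """Collect (tag, text) pairs for the gadget/output/result tags."""
    def __init__(self):
        super().__init__()
        self._tag = None
        self.steps = []

    def handle_starttag(self, tag, attrs):
        if tag in ("gadget", "output", "result"):
            self._tag = tag

    def handle_data(self, data):
        if self._tag is not None:
            self.steps.append((self._tag, data.strip()))
            self._tag = None

    def handle_endtag(self, tag):
        self._tag = None

parser = ChainParser()
parser.feed(CHAIN)
print(parser.steps)  # [('gadget', '12 * 4'), ('output', '48'), ('result', '48')]
```

The same idea carries over to BeautifulSoup if you prefer a third-party parser.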
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction Process
We took the original math_qa dataset, parsed the nested formulas, linearized them into a sequence (chain) of operations, and replaced all advanced
function calls (such as `circle_area`) with explicit elementary operations. We evaluated all the steps in each example and filtered out examples whose
evaluation does not match, within a 5% tolerance, the answer selected as correct in the data. The sequence of steps is then saved in the HTML-like language
in the `chain` column. We keep the original columns in the dataset for convenience.
You can read more information about this process in our [technical report](https://arxiv.org/abs/2305.15017).
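The linearization step described above can be illustrated with a small sketch: a nested expression is flattened into numbered elementary steps, with later steps referencing earlier results. The tuple encoding and step-reference notation (`#0`, `#1`, ...) are illustrative assumptions, not the dataset's exact format:

```python
import math

def linearize(expr):
    """Flatten a nested (op, arg, ...) tuple into a sequence of steps.

    Each step is (op, resolved_args); "#k" refers to the result of step k.
    """
    steps = []

    def walk(node):
        if not isinstance(node, tuple):
            return node  # a literal operand
        op, *args = node
        resolved = [walk(a) for a in args]
        steps.append((op, resolved))
        return f"#{len(steps) - 1}"

    walk(expr)
    return steps

# circle_area(add(2, 3)) expanded to pi * r * r as elementary operations:
expr = ("multiply", ("multiply", math.pi, ("add", 2.0, 3.0)), ("add", 2.0, 3.0))
steps = linearize(expr)
for i, step in enumerate(steps):
    print(i, step)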
## Content and Data splits
Content and splits correspond to the original math_qa dataset.
See [mathqa HF dataset](https://huggingface.co/datasets/math_qa) and [official website](https://math-qa.github.io/) for more info.
Columns:
- `question` - the description of a mathematical problem in natural language
- `chain` - the solution as step-by-step calculations encoded in a simple HTML-like language; computed from the `annotated_formula` column
- `result` - the result of the problem as a string (can be an integer, a floating-point number, a fraction, ...)
- `result_float` - the result converted to a float
- `options` - a dictionary with choices 'a' to 'e' as possible solutions
- `options_num` - same as `options`, but with simple parsing applied to extract the number from each string. This is best-effort only - not all values are (or can be) extracted correctly
- `correct_option` - the correct option, one of 'a', ..., 'e'; should match `result`
- `rationale` - human-annotated free-text reasoning that leads to the correct answer
- `annotated_formula` - a human-annotated nested expression that (approximately) evaluates to the selected correct answer
- `linear_formula` - same as `annotated_formula`, but linearized; provided by the original math_qa authors
- `index` - the index of the example in the original math_qa dataset
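The best-effort numeric extraction mentioned for `options_num` can be sketched as follows. `extract_number` is a hypothetical helper for illustration; the dataset's actual conversion may differ:

```python
import re

def extract_number(option_text: str):
    """Best-effort: pull the first numeric literal out of an option string.

    Returns None when no number is present (e.g. "none of these"),
    mirroring the caveat that not all values can be extracted.
    """
    match = re.search(r"-?\d+(?:\.\d+)?", option_text.replace(",", ""))
    return float(match.group()) if match else None

print(extract_number("a ) 3,150"))          # 3150.0
print(extract_number("b ) none of these"))  # None
```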
## Licence
Apache 2.0, consistent with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original MathQA paper](https://arxiv.org/abs/1905.13319) as well as [our technical report](https://arxiv.org/abs/2305.15017), as follows:
```bibtex
@article{kadlcik2023calcx,
title={Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems},
author={Marek Kadlčík and Michal Štefánik},
year={2023},
eprint={2305.15017},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
awettig/Pile-FreeLaw-0.5B-6K-opt | 2023-07-10T19:34:17.000Z | [
"region:us"
] | awettig | null | null | null | 0 | 63 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500934791
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1569004486
dataset_size: 6565880483
---
# Dataset Card for "Pile-FreeLaw-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/CMedQAv2-reranking | 2023-07-28T07:17:06.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 63 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: test
num_bytes: 30417770
num_examples: 1000
download_size: 19720976
dataset_size: 30417770
---
# Dataset Card for "CMedQAv2-reranking"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tommert25/extradata0908 | 2023-09-26T15:12:36.000Z | [
"region:us"
] | Tommert25 | null | null | null | 0 | 63 | Entry not found |
dim/logic_tasks_ru | 2023-08-14T18:00:38.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 63 | ---
license: mit
dataset_info:
features:
- name: title
dtype: string
- name: task
dtype: string
- name: answer
dtype: string
- name: ok/trash
dtype: string
splits:
- name: train
num_bytes: 87178
num_examples: 99
download_size: 54016
dataset_size: 87178
---
Puzzles taken from this site: https://www.potehechas.ru/zadachi/zadachi.shtml |
dim/wikihow_ru | 2023-08-15T12:11:59.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 63 | ---
license: mit
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 17666785.144215908
num_examples: 2058
download_size: 11421933
dataset_size: 17666785.144215908
---
|
open-llm-leaderboard/details_meta-llama__Llama-2-13b-hf | 2023-09-15T14:07:16.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 63 | ---
pretty_name: Evaluation run of meta-llama/Llama-2-13b-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 123 configuration, each one coresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 6 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run (and is used to compute and display the agregated metrics on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_meta-llama__Llama-2-13b-hf\"\
,\n\t\"harness_drop_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-09-15T14:07:08.353318](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-13b-hf/blob/main/results_2023-09-15T14-07-08.353318.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06501677852348993,\n\
\ \"em_stderr\": 0.0025249587272045365,\n \"f1\": 0.1951226929530205,\n\
\ \"f1_stderr\": 0.0030306263238973692\n },\n \"harness|drop|0\": {\n\
\ \"em\": 0.06501677852348993,\n \"em_stderr\": 0.0025249587272045365,\n\
\ \"f1\": 0.1951226929530205,\n \"f1_stderr\": 0.0030306263238973692\n\
\ }\n}\n```"
repo_url: https://huggingface.co/meta-llama/Llama-2-13b-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|arc:challenge|25_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|arc:challenge|25_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|arc:challenge|25_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_drop_0
data_files:
- split: 2023_09_15T14_07_08.353318
path:
- '**/details_harness|drop|0_2023-09-15T14-07-08.353318.parquet'
- split: latest
path:
- '**/details_harness|drop|0_2023-09-15T14-07-08.353318.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_08T14_32_14.957248
path:
- '**/details_harness|drop|3_2023-09-08T14-32-14.957248.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-08T14-32-14.957248.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_08T14_32_14.957248
path:
- '**/details_harness|gsm8k|5_2023-09-08T14-32-14.957248.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-08T14-32-14.957248.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hellaswag|10_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hellaswag|10_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hellaswag|10_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-19T22:35:38.117975.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T17:28:00.015478.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T22:26:02.660247.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-19T22:35:38.117975.parquet'
- split: 2023_08_23T17_28_00.015478
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-23T17:28:00.015478.parquet'
- split: 2023_08_29T22_26_02.660247
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T22:26:02.660247.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T22:26:02.660247.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_08T14_32_14.957248
path:
- '**/details_harness|winogrande|5_2023-09-08T14-32-14.957248.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-08T14-32-14.957248.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T19:56:56.621542.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_abstract_algebra_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_anatomy_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_astronomy_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_business_ethics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_clinical_knowledge_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_college_biology_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_college_chemistry_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_college_computer_science_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_college_mathematics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_college_medicine_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_college_physics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_computer_security_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_conceptual_physics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_econometrics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_electrical_engineering_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_elementary_mathematics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_formal_logic_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_global_facts_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_biology_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_chemistry_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_computer_science_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_european_history_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_geography_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_mathematics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_microeconomics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_physics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_psychology_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_statistics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_us_history_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_high_school_world_history_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_human_aging_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_human_sexuality_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_international_law_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_jurisprudence_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_logical_fallacies_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_machine_learning_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_management_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:management|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:management|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_marketing_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_medical_genetics_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_miscellaneous_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_moral_disputes_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_moral_scenarios_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_nutrition_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_philosophy_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_prehistory_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_professional_accounting_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_professional_law_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_professional_medicine_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_professional_psychology_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_public_relations_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_security_studies_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_sociology_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_us_foreign_policy_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_virology_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:virology|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:virology|5_2023-08-28T19:56:56.621542.parquet'
- config_name: original_mmlu_world_religions_5
data_files:
- split: 2023_08_28T19_56_56.621542
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T19:56:56.621542.parquet'
- split: latest
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T19:56:56.621542.parquet'
- config_name: results
data_files:
- split: 2023_08_19T22_35_38.117975
path:
- results_2023-08-19T22:35:38.117975.parquet
- split: 2023_08_23T17_28_00.015478
path:
- results_2023-08-23T17:28:00.015478.parquet
- split: 2023_08_28T19_56_56.621542
path:
- results_2023-08-28T19:56:56.621542.parquet
- split: 2023_08_29T22_26_02.660247
path:
- results_2023-08-29T22:26:02.660247.parquet
- split: 2023_09_08T14_32_14.957248
path:
- results_2023-09-08T14-32-14.957248.parquet
- split: 2023_09_15T14_07_08.353318
path:
- results_2023-09-15T14-07-08.353318.parquet
- split: latest
path:
- results_2023-09-15T14-07-08.353318.parquet
---
# Dataset Card for Evaluation run of meta-llama/Llama-2-13b-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/meta-llama/Llama-2-13b-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 123 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_meta-llama__Llama-2-13b-hf",
"harness_drop_0",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-15T14:07:08.353318](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-13b-hf/blob/main/results_2023-09-15T14-07-08.353318.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.06501677852348993,
"em_stderr": 0.0025249587272045365,
"f1": 0.1951226929530205,
"f1_stderr": 0.0030306263238973692
},
"harness|drop|0": {
"em": 0.06501677852348993,
"em_stderr": 0.0025249587272045365,
"f1": 0.1951226929530205,
"f1_stderr": 0.0030306263238973692
}
}
```
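The nested results dict above keys metrics by task name; a minimal sketch (using the example values shown, with an illustrative flattening scheme) of turning it into flat `task/metric` keys for tabulation:

```python
# Flatten the nested per-task metrics dict shown above into "task/metric"
# keys. The dict literal mirrors the example results; the key format is
# just one possible convention, not part of the dataset itself.
results = {
    "all": {"em": 0.06501677852348993, "f1": 0.1951226929530205},
    "harness|drop|0": {"em": 0.06501677852348993, "f1": 0.1951226929530205},
}

flat = {
    f"{task}/{metric}": value
    for task, metrics in results.items()
    for metric, value in metrics.items()
}

print(flat["harness|drop|0/em"])
```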
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
eduagarcia/OSCAR-2301-pt_dedup | 2023-08-28T16:55:02.000Z | [
"region:us"
] | eduagarcia | null | null | null | 0 | 63 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 61846407893
num_examples: 10888966
download_size: 28809168123
dataset_size: 61846407893
---
# Dataset Card for "OSCAR-2301_dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crumb/openhermes-k8 | 2023-09-13T10:02:45.000Z | [
"region:us"
] | crumb | null | null | null | 1 | 63 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 309315994
num_examples: 242831
download_size: 143821416
dataset_size: 309315994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openhermes-k8"
[teknium/openhermes](https://hf.co/datasets/teknium/openhermes) clustered into 8 clusters; the cluster centroids are included in 'centers.pt'
mattlc/tranceformer_instruments_aurel | 2023-09-15T10:57:17.000Z | [
"region:us"
] | mattlc | null | null | null | 0 | 63 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float32
- name: sampling_rate
dtype: int64
- name: text
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 925296782
num_examples: 354
download_size: 463437404
dataset_size: 925296782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tranceformer_instruments_aurel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/pubmed_subset_c4_10p | 2023-09-19T10:13:12.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 63 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2107664795.9043052
num_examples: 1110859
- name: test
num_bytes: 1024229
num_examples: 1000
download_size: 149396134
dataset_size: 2108689024.9043052
---
# Dataset Card for "pubmed_subset_c4_10p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Goorm-AI-04/Drone_Doppler | 2023-09-28T06:21:27.000Z | [
"region:us"
] | Goorm-AI-04 | null | null | null | 0 | 63 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
sequence:
sequence: float64
- name: label
dtype: int64
- name: type
dtype: string
splits:
- name: train
num_bytes: 75993012
num_examples: 13988
- name: test
num_bytes: 18998253
num_examples: 3497
download_size: 96723379
dataset_size: 94991265
---
# Dataset Card for "Drone_Doppler"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minhtu0408/gdsc-model-dataset | 2023-10-07T14:04:44.000Z | [
"region:us"
] | minhtu0408 | null | null | null | 0 | 63 | Entry not found |
paulesser/typo-sm-stable-xl | 2023-10-05T15:50:52.000Z | [
"region:us"
] | paulesser | null | null | null | 0 | 63 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: conditioning_image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 10263112921.875
num_examples: 757175
download_size: 8088476656
dataset_size: 10263112921.875
---
# Dataset Card for "typo-sm-stable-xl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saahith/EMSAssist-2 | 2023-10-07T04:11:54.000Z | [
"region:us"
] | saahith | null | null | null | 0 | 63 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 617788659.262
num_examples: 1122
- name: test
num_bytes: 1197091986.0
num_examples: 600
download_size: 1350447521
dataset_size: 1814880645.262
---
# Dataset Card for "EMSAssist-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
code_x_glue_cc_clone_detection_poj104 | 2023-03-13T11:02:07.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:code",
"license:c-uda",
"region:us"
] | null | Given a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.
We use POJ-104 dataset on this task. | @inproceedings{mou2016convolutional,
title={Convolutional neural networks over tree structures for programming language processing},
author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
pages={1287--1293},
year={2016}
} | null | 2 | 62 | ---
pretty_name: CodeXGlueCcCloneDetectionPoj104
annotations_creators:
- found
language_creators:
- found
language:
- code
license:
- c-uda
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
dataset_info:
features:
- name: id
dtype: int32
- name: code
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 20179075
num_examples: 32500
- name: validation
num_bytes: 6382433
num_examples: 8500
- name: test
num_bytes: 7227506
num_examples: 12000
download_size: 8658581
dataset_size: 33789014
---
# Dataset Card for "code_x_glue_cc_clone_detection_poj_104"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104
### Dataset Summary
CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104
Given a piece of code and a collection of candidates as input, the task is to return the top-K codes with the same semantics. Models are evaluated by MAP score.
We use the POJ-104 dataset for this task.
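Since systems are ranked by MAP, a minimal sketch of the underlying per-query average precision may help: candidates sharing the query's label (i.e. solving the same POJ problem) count as relevant. All names here are illustrative, not part of an official evaluation script.

```python
def average_precision(ranked_labels, query_label):
    """Average precision for one query over a ranked candidate list.

    A candidate is relevant when its label matches the query's label
    (i.e. both programs solve the same problem).
    """
    hits, total = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0


def mean_average_precision(queries):
    """queries: list of (ranked_labels, query_label) pairs."""
    return sum(average_precision(r, q) for r, q in queries) / len(queries)


# Relevant items at ranks 1 and 3: AP = (1/1 + 2/3) / 2
print(average_precision(["7", "3", "7"], "7"))
```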
### Supported Tasks and Leaderboards
- `document-retrieval`: The dataset can be used to train a model for retrieving top-k codes with the same semantics.
### Languages
- C++ programming language
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"code": "\nint f(int shu,int min)\n{ \n int k=1;\n if(shu < min)\n { \n k= 0; \n return k;\n } \n else\n {\n for(int i = min;i<shu;i++)\n { \n if(shu%i == 0)\n { \n k=k+ f(shu/i,i); \n } \n \n \n } \n return k; \n}\n} \n\nmain()\n{\n int n,i,a;\n scanf(\"%d\",&n);\n \n for(i=0;i<n;i++)\n {\n scanf(\"%d\",&a);\n \n if(i!=n-1) \n printf(\"%d\\n\",f(a,2));\n else\n printf(\"%d\",f(a,2)); \n \n \n \n } \n \n \n }",
"id": 0,
"label": "home"
}
```
### Data Fields
Each data field is explained below for each config. The data fields are the same among all splits.
#### default
|field name| type | description |
|----------|------|----------------------------------------------|
|id |int32 | Index of the sample |
|code |string| The full text of the function |
|label |string| The id of the problem that the source code solves|
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|32000| 8000|12000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@inproceedings{mou2016convolutional,
title={Convolutional neural networks over tree structures for programming language processing},
author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
pages={1287--1293},
year={2016}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
msr_text_compression | 2022-11-18T21:30:29.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-Open-American-National-Corpus-(OANC1)",
"language:en",
"license:other",
"region:us"
] | null | This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing. | @inproceedings{Toutanova2016ADA,
title={A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs},
author={Kristina Toutanova and Chris Brockett and Ke M. Tran and Saleema Amershi},
booktitle={EMNLP},
year={2016}
} | null | 2 | 62 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
license_details: Microsoft Research Data License Agreement
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-Open-American-National-Corpus-(OANC1)
task_categories:
- summarization
task_ids: []
pretty_name: MsrTextCompression
dataset_info:
features:
- name: source_id
dtype: string
- name: domain
dtype: string
- name: source_text
dtype: string
- name: targets
sequence:
- name: compressed_text
dtype: string
- name: judge_id
dtype: string
- name: num_ratings
dtype: int64
- name: ratings
sequence: int64
splits:
- name: train
num_bytes: 5001312
num_examples: 4936
- name: validation
num_bytes: 449691
num_examples: 447
- name: test
num_bytes: 804536
num_examples: 785
download_size: 0
dataset_size: 6255539
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563
- **Repository:**
- **Paper:** https://www.microsoft.com/en-us/research/wp-content/uploads/2016/09/Sentence_Compression_final-1.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing.
### Supported Tasks and Leaderboards
Text Summarization
### Languages
English
## Dataset Structure
### Data Instances
It contains approximately 6,000 source texts with multiple compressions (about 26,000 pairs of source and compressed texts), representing business letters, newswire, journals, and technical documents sampled from the Open American National Corpus (OANC1).
- Each source text is accompanied by up to five crowd-sourced rewrites constrained to a preset compression ratio and annotated with quality judgments. Multiple rewrites permit study of the impact of operations on human compression quality and facilitate automatic evaluation.
- This dataset is the first to provide compressions at the multi-sentence (two-sentence paragraph) level, which may present a stepping stone to whole-document summarization.
- Many of these two-sentence paragraphs are compressed both as paragraphs and separately sentence-by-sentence, offering data that may yield insights into the impact of multi-sentence operations on human compression quality.
| Description | Source | Target | Average CPS | Meaning Quality | Grammar Quality |
| :------------- | :----------: | -----------: | -----------: | -----------: | -----------: |
| 1-Sentence | 3764 | 15523 | 4.12 | 2.78 | 2.81 |
| 2-Sentence | 2405 | 10900 | 4.53 | 2.78 | 2.83 |
**Note**: Average CPS = Average Compressions per Source Text
### Data Fields
```
{'domain': 'Newswire',
'source_id': '106',
'source_text': '" Except for this small vocal minority, we have just not gotten a lot of groundswell against this from members, " says APA president Philip G. Zimbardo of Stanford University.',
'targets': {'compressed_text': ['"Except for this small vocal minority, we have not gotten a lot of groundswell against this," says APA president Zimbardo.',
'"Except for a vocal minority, we haven\'t gotten much groundswell from members, " says Philip G. Zimbardo of Stanford University.',
'APA president of Stanford has stated that except for a vocal minority they have not gotten a lot of pushback from members.',
'APA president Philip G. Zimbardo of Stanford says they have not had much opposition against this.'],
'judge_id': ['2', '22', '10', '0'],
'num_ratings': [3, 3, 3, 3],
'ratings': [[6, 6, 6], [11, 6, 6], [6, 11, 6], [6, 11, 11]]}}
```
- source_id: index of article per original dataset
- source_text: uncompressed original text
- domain: source of the article
- targets:
- compressed_text: compressed version of `source_text`
- judge_id: anonymized ids of crowdworkers who proposed compression
- num_ratings: number of ratings available for each proposed compression
- ratings: see table below
Ratings system (excerpted from authors' README):
- 6 = Most important meaning Flawless language (3 on meaning and 3 on grammar as per the paper's terminology)
- 7 = Most important meaning Minor errors (3 on meaning and 2 on grammar)
- 9 = Most important meaning Disfluent or incomprehensible (3 on meaning and 1 on grammar)
- 11 = Much meaning Flawless language (2 on meaning and 3 on grammar)
- 12 = Much meaning Minor errors (2 on meaning and 2 on grammar)
- 14 = Much meaning Disfluent or incomprehensible (2 on meaning and 1 on grammar)
- 21 = Little or none meaning Flawless language (1 on meaning and 3 on grammar)
- 22 = Little or none meaning Minor errors (1 on meaning and 2 on grammar)
- 24 = Little or none meaning Disfluent or incomprehensible (1 on meaning and 1 on grammar)
See **README.txt** from data archive for additional details.
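The combined codes in the `ratings` field can be mapped back to the paper's separate 1-3 meaning and grammar scales. A minimal sketch (the lookup table below just transcribes the list above; the helper name is illustrative):

```python
# Combined rating code -> (meaning, grammar) on the 1-3 scales,
# transcribed from the ratings system listed above.
RATING_CODES = {
    6: (3, 3), 7: (3, 2), 9: (3, 1),
    11: (2, 3), 12: (2, 2), 14: (2, 1),
    21: (1, 3), 22: (1, 2), 24: (1, 1),
}


def decode_ratings(ratings):
    """Map a list of raw rating codes to (meaning, grammar) tuples."""
    return [RATING_CODES[r] for r in ratings]


# First compression of the example instance above was rated [6, 6, 6]:
# three judges each gave it meaning 3 and grammar 3.
print(decode_ratings([6, 6, 6]))
```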
### Data Splits
There are 4,936 source texts in the training, 448 in the development, and 785 in the test set.
## Dataset Creation
### Annotations
#### Annotation process
Compressions were created using UHRS, an in-house crowd-sourcing system similar to Amazon's Mechanical Turk, in two annotation rounds, one for shortening and a second to rate compression quality:
1. In the first round, five workers were tasked with abridging each source text by at least 25%, while remaining grammatical and fluent, and retaining the meaning of the original.
2. In the second round, 3-5 judges (raters) were asked to evaluate the grammaticality of each compression on a scale from 1 (major errors, disfluent) through 3 (fluent), and again analogously for meaning preservation on a scale from 1 (orthogonal) through 3 (most important meaning-preserving).
## Additional Information
### Licensing Information
Microsoft Research Data License Agreement
### Citation Information
@inproceedings{Toutanova2016ADA,
title={A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs},
author={Kristina Toutanova and Chris Brockett and Ke M. Tran and Saleema Amershi},
booktitle={EMNLP},
year={2016}
}
### Contributions
Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset. |
xsum_factuality | 2023-01-25T15:03:16.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-xsum",
"language:en",
"license:cc-by-4.0",
"hallucinations",
"region:us"
] | null | Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input
document. Popular metrics such as ROUGE fail to show the severity of the problem. The dataset consists of
faithfulness and factuality annotations of abstractive summaries for the XSum dataset. We have crowdsourced 3 judgements
for each of 500 x 5 document-system pairs. This will be a valuable resource to the abstractive summarization community. | @InProceedings{maynez_acl20,
author = "Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan Thomas Mcdonald",
title = "On Faithfulness and Factuality in Abstractive Summarization",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
year = "2020",
pages = "1906--1919",
address = "Online",
} | null | 4 | 62 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-xsum
task_categories:
- summarization
task_ids: []
pretty_name: XSum Hallucination Annotations
tags:
- hallucinations
dataset_info:
- config_name: xsum_factuality
features:
- name: bbcid
dtype: int32
- name: system
dtype: string
- name: summary
dtype: string
- name: is_factual
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
- name: worker_id
dtype: string
splits:
- name: train
num_bytes: 800027
num_examples: 5597
download_size: 2864759
dataset_size: 800027
- config_name: xsum_faithfulness
features:
- name: bbcid
dtype: int32
- name: system
dtype: string
- name: summary
dtype: string
- name: hallucination_type
dtype:
class_label:
names:
'0': intrinsic
'1': extrinsic
- name: hallucinated_span_start
dtype: int32
- name: hallucinated_span_end
dtype: int32
- name: worker_id
dtype: string
splits:
- name: train
num_bytes: 1750325
num_examples: 11185
download_size: 2864759
dataset_size: 1750325
---
# Dataset Card for XSum Hallucination Annotations
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [XSUM Hallucination Annotations Homepage](https://research.google/tools/datasets/xsum-hallucination-annotations/)
- **Repository:** [XSUM Hallucination Annotations Homepage](https://github.com/google-research-datasets/xsum_hallucination_annotations)
- **Paper:** [ACL Web](https://www.aclweb.org/anthology/2020.acl-main.173.pdf)
- **Point of Contact:** [xsum-hallucinations-acl20@google.com](mailto:xsum-hallucinations-acl20@google.com)
### Dataset Summary
Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document. Popular metrics such as ROUGE fail to show the severity of the problem. This dataset contains a large-scale human evaluation of several neural abstractive summarization systems, carried out to better understand the types of hallucinations they produce. It consists of faithfulness and factuality annotations of abstractive summaries for the XSum dataset, with 3 crowdsourced judgements for each of the 500 x 5 document-system pairs. This is a valuable resource for the abstractive summarization community.
### Supported Tasks and Leaderboards
* `summarization`: The dataset can be used to train a model for summarization, which consists of summarizing a given document. Success on this task is typically measured by achieving a *high* [ROUGE Score](https://huggingface.co/metrics/rouge).
### Languages
The text in the dataset is in English; it consists of abstractive summaries of articles from the [XSum dataset](https://www.aclweb.org/anthology/D18-1206.pdf). The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
##### Faithfulness annotations dataset
A typical data point consists of an ID referring to the news article (complete document), the summary, and the hallucination span information.
An example from the XSum Faithfulness dataset looks as follows:
```
{
'bbcid': 34687720,
'hallucinated_span_end': 114,
'hallucinated_span_start': 1,
'hallucination_type': 1,
'summary': 'rory mcilroy will take a one-shot lead into the final round of the wgc-hsbc champions after carding a three-under',
'system': 'BERTS2S',
'worker_id': 'wid_0'
}
```
##### Factuality annotations dataset
A typical data point consists of an ID referring to the news article (complete document), the summary, and whether the summary is factual or not.
An example from the XSum Factuality dataset looks as follows:
```
{
'bbcid': 29911712,
'is_factual': 0,
'summary': 'more than 50 pupils at a bristol academy have been sent home from school because of a lack of uniform.',
'system': 'BERTS2S',
'worker_id': 'wid_0'
}
```
### Data Fields
##### Faithfulness annotations dataset
Raters are shown the news article and the system summary, and are tasked with identifying and annotating the spans that aren't supported by the input article. The file contains the following columns:
- `bbcid`: Document id in the XSum corpus.
- `system`: Name of neural summarizer.
- `summary`: Summary generated by ‘system’.
- `hallucination_type`: Type of hallucination: intrinsic (0) or extrinsic (1)
- `hallucinated_span`: Hallucinated span in the ‘summary’.
- `hallucinated_span_start`: Index of the start of the hallucinated span.
- `hallucinated_span_end`: Index of the end of the hallucinated span.
- `worker_id`: Worker ID (one of 'wid_0', 'wid_1', 'wid_2')
The `hallucination_type` column has NULL values for some entries, which have been replaced with `-1`.
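The span indices let you recover the annotated text directly from the summary string. A minimal sketch in plain Python (whether the indices are 1-based and end-inclusive is an assumption made here, not something the card specifies — verify against the data before relying on it):

```python
def extract_hallucinated_span(summary: str, start: int, end: int) -> str:
    """Recover the annotated span from a summary.

    Assumes 1-based, end-inclusive indices; this convention is an
    assumption, not something the card specifies.
    """
    if start < 1 or end < start:  # guard against placeholder/invalid indices
        return ""
    return summary[start - 1:end]

# Sample instance from the "Data Instances" section above.
summary = ("rory mcilroy will take a one-shot lead into the final round "
           "of the wgc-hsbc champions after carding a three-under")
span = extract_hallucinated_span(summary, 1, 114)
print(span[:30])  # here the span covers essentially the whole summary
```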
##### Factuality annotations dataset
Raters are shown the news article and the hallucinated system summary, and are tasked with assessing whether the summary is factual or not. The file contains the following columns:
- `bbcid`: Document id in the XSum corpus.
- `system`: Name of neural summarizer.
- `summary`: Summary generated by ‘system’.
- `is_factual`: Yes (1) or No (0)
- `worker_id`: Worker ID (one of 'wid_0', 'wid_1', 'wid_2')
The `is_factual` column has NULL values for some entries, which have been replaced with `-1`.
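Since each document-system pair receives three judgements (one per worker), downstream analyses typically aggregate them, for example by majority vote. A minimal sketch in plain Python using the field names documented above (the aggregation scheme itself is a modeling choice, not part of the dataset):

```python
from collections import Counter, defaultdict

def majority_vote(rows):
    """Aggregate per-worker `is_factual` judgements per (bbcid, system) pair.

    `rows` is an iterable of dicts with the fields documented above.
    Entries with the NULL placeholder (-1) are ignored.
    """
    votes = defaultdict(list)
    for row in rows:
        if row["is_factual"] != -1:
            votes[(row["bbcid"], row["system"])].append(row["is_factual"])
    return {key: Counter(v).most_common(1)[0][0] for key, v in votes.items()}

rows = [
    {"bbcid": 29911712, "system": "BERTS2S", "is_factual": 0, "worker_id": "wid_0"},
    {"bbcid": 29911712, "system": "BERTS2S", "is_factual": 0, "worker_id": "wid_1"},
    {"bbcid": 29911712, "system": "BERTS2S", "is_factual": 1, "worker_id": "wid_2"},
]
print(majority_vote(rows))  # {(29911712, 'BERTS2S'): 0}
```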
### Data Splits
There is only a single split for both the Faithfulness annotations dataset and Factuality annotations dataset.
| | train |
|--------------------------|------:|
| Faithfulness annotations | 11185 |
| Factuality annotations | 5597 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@InProceedings{maynez_acl20,
author = "Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan Thomas Mcdonald",
title = "On Faithfulness and Factuality in Abstractive Summarization",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
year = "2020",
pages = "1906--1919",
address = "Online",
}
```
### Contributions
Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset. |
clarin-pl/aspectemo | 2022-08-29T16:39:32.000Z | [
"task_categories:token-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:mit",
"region:us"
... | clarin-pl | AspectEmo dataset: Multi-Domain Corpus of Consumer Reviews for Aspect-Based
Sentiment Analysis | @misc{11321/849,
title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis},
author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika},
url = {http://hdl.handle.net/11321/849},
note = {{CLARIN}-{PL} digital repository},
copyright = {The {MIT} License},
year = {2021}
} | null | 1 | 62 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- mit
multilinguality:
- monolingual
pretty_name: 'AspectEmo'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- sentiment-classification
---
# AspectEmo
## Description
AspectEmo Corpus is an extended version of a publicly available PolEmo 2.0 corpus of Polish customer reviews used in many projects on the use of different methods in sentiment analysis. The AspectEmo corpus consists of four subcorpora, each containing online customer reviews from the following domains: school, medicine, hotels, and products. All documents are annotated at the aspect level with six sentiment categories: strong negative (minus_m), weak negative (minus_s), neutral (zero), weak positive (plus_s), strong positive (plus_m), and ambiguous (amb).
## Versions
| version | config name | description | default | notes |
|---------|-------------|--------------------------------|---------|------------------|
| 1.0 | "1.0" | The version used in the paper. | YES | |
| 2.0 | - | Some bugs fixed. | NO | work in progress |
## Tasks (input, output and metrics)
Aspect-based sentiment analysis (ABSA) is a text analysis method that categorizes data by aspects and identifies the sentiment assigned to each aspect. It is a sequence tagging task.
**Input** (*'tokens'* column): sequence of tokens
**Output** (*'labels'* column): sequence of predicted token classes ("O" + 6 possible classes: strong negative (a_minus_m), weak negative (a_minus_s), neutral (a_zero), weak positive (a_plus_s), strong positive (a_plus_m), ambiguous (a_amb))
**Domain**: school, medicine, hotels and products
**Measurements**: F1-score (seqeval)
**Example**:
Input: `['Dużo', 'wymaga', ',', 'ale', 'bardzo', 'uczciwy', 'i', 'przyjazny', 'studentom', '.', 'Warto', 'chodzić', 'na', 'konsultacje', '.', 'Docenia', 'postępy', 'i', 'zaangażowanie', '.', 'Polecam', '.']`
Input (translated by DeepL): `'Demands a lot , but very honest and student friendly . Worth going to consultations . Appreciates progress and commitment . I recommend .'`
Output: `['O', 'a_plus_s', 'O', 'O', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'a_zero', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O']`
## Data splits
| Subset | Cardinality (sentences) |
|:-------|------------------------:|
| train | 1173 |
| val | 0 |
| test | 292 |
## Class distribution (without "O")
| Class | train | validation | test |
|:----------|--------:|-------------:|-------:|
| a_plus_m | 0.359 | - | 0.369 |
| a_minus_m | 0.305 | - | 0.377 |
| a_zero | 0.234 | - | 0.182 |
| a_minus_s | 0.037 | - | 0.024 |
| a_plus_s | 0.037 | - | 0.015 |
| a_amb | 0.027 | - | 0.033 |
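A distribution of this shape can be recomputed from decoded label sequences with a few lines of plain Python. A minimal sketch (it assumes the integer label ids have already been mapped to their string names, as in the evaluation example in the Examples section):

```python
from collections import Counter

def class_distribution(label_sequences):
    """Relative frequency of aspect labels, ignoring the 'O' tag."""
    counts = Counter(
        label for labels in label_sequences for label in labels if label != "O"
    )
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

sequences = [
    ["O", "a_plus_s", "O", "a_plus_m"],
    ["a_plus_m", "O", "a_zero"],
]
print(class_distribution(sequences))
# {'a_plus_s': 0.25, 'a_plus_m': 0.5, 'a_zero': 0.25}
```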
## Citation
```
@misc{11321/849,
title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis},
author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika},
url = {http://hdl.handle.net/11321/849},
note = {{CLARIN}-{PL} digital repository},
copyright = {The {MIT} License},
year = {2021}
}
```
## License
```
The MIT License
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/aspectemo)
[Source](https://clarin-pl.eu/dspace/handle/11321/849)
[Paper](https://sentic.net/sentire2021kocon.pdf)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/aspectemo")
pprint(dataset['train'][20])
# {'labels': [0, 4, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 0, 0],
# 'tokens': ['Dużo',
# 'wymaga',
# ',',
# 'ale',
# 'bardzo',
# 'uczciwy',
# 'i',
# 'przyjazny',
# 'studentom',
# '.',
# 'Warto',
# 'chodzić',
# 'na',
# 'konsultacje',
# '.',
# 'Docenia',
# 'postępy',
# 'i',
# 'zaangażowanie',
# '.',
# 'Polecam',
# '.']}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/aspectemo")
references = dataset["test"]["labels"]
# generate random predictions
predictions = [
[
random.randrange(dataset["train"].features["labels"].feature.num_classes)
for _ in range(len(labels))
]
for labels in references
]
# transform to original names of labels
references_named = [
[dataset["train"].features["labels"].feature.names[label] for label in labels]
for labels in references
]
predictions_named = [
[dataset["train"].features["labels"].feature.names[label] for label in labels]
for labels in predictions
]
# transform to BILOU scheme
references_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in references_named
]
predictions_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in predictions_named
]
# utilise seqeval to evaluate
seqeval = load_metric("seqeval")
seqeval_score = seqeval.compute(
predictions=predictions_named,
references=references_named,
scheme="BILOU",
mode="strict",
)
pprint(seqeval_score)
# {'a_amb': {'f1': 0.00597237775289287,
# 'number': 91,
# 'precision': 0.003037782418834251,
# 'recall': 0.17582417582417584},
# 'a_minus_m': {'f1': 0.048306148055207034,
# 'number': 1039,
# 'precision': 0.0288551620760727,
# 'recall': 0.1482194417709336},
# 'a_minus_s': {'f1': 0.004682997118155619,
# 'number': 67,
# 'precision': 0.0023701002734731083,
# 'recall': 0.19402985074626866},
# 'a_plus_m': {'f1': 0.045933014354066985,
# 'number': 1015,
# 'precision': 0.027402473834443386,
# 'recall': 0.14187192118226602},
# 'a_plus_s': {'f1': 0.0021750951604132683,
# 'number': 41,
# 'precision': 0.001095690284879474,
# 'recall': 0.14634146341463414},
# 'a_zero': {'f1': 0.025159400310184387,
# 'number': 501,
# 'precision': 0.013768389287061486,
# 'recall': 0.14570858283433133},
# 'overall_accuracy': 0.13970115681233933,
# 'overall_f1': 0.02328248652368391,
# 'overall_precision': 0.012639312620633834,
# 'overall_recall': 0.14742193173565724}
``` |
mozilla-foundation/common_voice_8_0 | 2023-07-29T16:00:11.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | 24 | 62 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 100K<n<1M
bg:
- 1K<n<10K
br:
- 10K<n<100K
ca:
- 100K<n<1M
ckb:
- 10K<n<100K
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 1K<n<10K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 1K<n<10K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- n<1K
it:
- 100K<n<1M
ja:
- 10K<n<100K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lt:
- 10K<n<100K
lv:
- 1K<n<10K
mdf:
- n<1K
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 1K<n<10K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nl:
- 10K<n<100K
nn-NO:
- n<1K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sk:
- 10K<n<100K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
tr:
- 10K<n<100K
tt:
- 10K<n<100K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 1K<n<10K
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 100K<n<1M
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 8.0
language_bcp47:
- ab
- ar
- as
- az
- ba
- bas
- be
- bg
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- it
- ja
- ka
- kab
- kk
- kmr
- ky
- lg
- lt
- lv
- mdf
- mk
- ml
- mn
- mr
- mt
- myv
- nl
- nn-NO
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sk
- sl
- sr
- sv-SE
- sw
- ta
- th
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 8.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 18243 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 14122 validated hours in 87 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Breton, Bulgarian, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Moksha, Mongolian, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is nearly impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
jonathan-roberts1/SATIN | 2023-10-04T15:55:46.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"size_categories:100K<n<1M",
"language:en",
"license:other",
"arxiv:2304.11619",
"region:us"
] | jonathan-roberts1 | null | null | null | 1 | 62 | ---
license: other
configs:
- config_name: SAT-4
- config_name: SAT-6
- config_name: NASC-TG2
- config_name: WHU-RS19
- config_name: RSSCN7
- config_name: RS_C11
- config_name: SIRI-WHU
- config_name: EuroSAT
- config_name: NWPU-RESISC45
- config_name: PatternNet
- config_name: RSD46-WHU
- config_name: GID
- config_name: CLRS
- config_name: Optimal-31
- config_name: Airbus-Wind-Turbines-Patches
- config_name: USTC_SmokeRS
- config_name: Canadian_Cropland
- config_name: Ships-In-Satellite-Imagery
- config_name: Satellite-Images-of-Hurricane-Damage
- config_name: Brazilian_Coffee_Scenes
- config_name: Brazilian_Cerrado-Savanna_Scenes
- config_name: Million-AID
- config_name: UC_Merced_LandUse_MultiLabel
- config_name: MLRSNet
- config_name: MultiScene
- config_name: RSI-CB256
- config_name: AID_MultiLabel
task_categories:
- image-classification
- zero-shot-image-classification
pretty_name: SATellite ImageNet
size_categories:
- 100K<n<1M
language:
- en
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** [https://satinbenchmark.github.io](https://satinbenchmark.github.io)
- **Repository:**
- **Paper:** [SATIN: A Multi-Task Metadataset for Classifying Satellite Imagery using Vision-Language Models](https://arxiv.org/pdf/2304.11619.pdf)
- **Leaderboard:** [SATIN Leaderboard](https://satinbenchmark.github.io/leaderboard.md)
### Dataset Summary
SATIN (SATellite ImageNet) is a metadataset containing 27 constituent satellite and aerial image datasets spanning 6 distinct tasks: Land Cover, Land Use,
Hierarchical Land Use, Complex Scenes, Rare Scenes, and False Colour Scenes. The imagery is globally distributed, with resolutions spanning 5 orders
of magnitude, multiple field-of-view sizes, and over 250 distinct class labels.
## Dataset Structure
The SATIN benchmark is comprised of the following datasets:
#### Task 1: Land Cover
- SAT-4
- SAT-6
- NASC-TG2
#### Task 2: Land Use
- WHU-RS19
- RSSCN7
- RS_C11
- SIRI-WHU
- EuroSAT
- NWPU-RESISC45
- PatternNet
- RSD46-WHU
- GID
- CLRS
- Optimal-31
#### Task 3: Hierarchical Land Use
- Million-AID
- RSI-CB256
#### Task 4: Complex Scenes
- UC_Merced_LandUse_MultiLabel
- MLRSNet
- MultiScene
- AID_MultiLabel
#### Task 5: Rare Scenes
- Airbus-Wind-Turbines-Patches
- USTC_SmokeRS
- Canadian_Cropland
- Ships-In-Satellite-Imagery
- Satellite-Images-of-Hurricane-Damage
#### Task 6: False Colour Scenes
- Brazilian_Coffee_Scenes
- Brazilian_Cerrado-Savanna_Scenes
For ease of use, and to avoid having to download the entire benchmark every time, each of the 27 datasets is included in this repository as a separate
'config'.
### Example Usage
```python
from datasets import load_dataset
hf_dataset = load_dataset('jonathan-roberts1/SATIN', DATASET_NAME, split='train') # for DATASET_NAME use one of the configs listed above (e.g., EuroSAT)
features = hf_dataset.features
class_labels = features['label'].names # Note for the Hierarchical Land Use datasets, the label field is replaced with label1, label2, ...
random_index = 5
example = hf_dataset[random_index]
image, label = example['image'], example['label']
```
### Data Splits
For each config, there is just the single, default 'train' split.
### Source Data
More information regarding the source data can be found in our paper. Additionally, each of the constituent datasets have been uploaded to HuggingFace datasets.
They can be accessed at: huggingface.co/datasets/jonathan-roberts1/DATASET_NAME.
### Dataset Curators
This dataset was curated by Jonathan Roberts, Kai Han, and Samuel Albanie
### Licensing Information
As SATIN is comprised of existing datasets with differing licenses, there is not a single license for SATIN. All of the datasets in SATIN can be used
for research purposes; usage information of specific constituent datasets can be found in the Appendix of our paper.
### Citation Information
```
@article{roberts2023satin,
title = {SATIN: A Multi-Task Metadataset for Classifying Satellite Imagery using Vision-Language Models},
author = {Jonathan Roberts and Kai Han and Samuel Albanie},
year = {2023},
eprint = {2304.11619},
archivePrefix= {arXiv},
primaryClass = {cs.CV}
}
``` |
camel-ai/math | 2023-06-22T21:59:52.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | camel-ai | null | null | null | 47 | 62 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Math
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The Math dataset is composed of 50K problem-solution pairs obtained using GPT-4. The pairs are generated from 25 math topics, with 25 subtopics for each topic and 80 problems for each (topic, subtopic) pair.
We provide the data in `math50k.zip`.
## Data Fields
**The data fields for files in `math50k.zip` are as follows:**
* `role_1`: assistant role
* `topic`: math topic
* `sub_topic`: math subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
Note: File naming refers to {`topic_index`}\_{`subtopic_index`}\_{`problem_number`}.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/math", repo_type="dataset", filename="math50k.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
|
WiktorS/polish-news | 2023-06-05T20:57:34.000Z | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:pl",
"license:apache-2.0",
"region:us"
] | WiktorS | null | null | null | 5 | 62 | ---
license: apache-2.0
task_categories:
- text-classification
- summarization
- text-generation
language:
- pl
size_categories:
- 100K<n<1M
---
This dataset contains more than 250k articles obtained from the Polish news site `tvp.info.pl`.
The main purpose of collecting the data was to create a transformer-based model for text summarization.
Columns:
* `link` - link to article
* `title` - original title of the article
* `headline` - lead/headline of the article - first paragraph of the article visible directly from the page
* `content` - full textual contents of the article
Link to original repo: https://github.com/WiktorSob/scraper-tvp
Download the data:
```python
from datasets import load_dataset
dataset = load_dataset("WiktorS/polish-news")
``` |
Circularmachines/batch_indexing_machine_green_test | 2023-06-16T20:44:15.000Z | [
"region:us"
] | Circularmachines | null | null | null | 0 | 62 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: test
num_bytes: 147427807.0
num_examples: 420
download_size: 147438537
dataset_size: 147427807.0
---
# Dataset Card for "batch_indexing_machine_green_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Patt/MultiRC_TH_drop | 2023-07-20T15:26:22.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | null | 0 | 62 | ---
task_categories:
- text-classification
language:
- en
- th
dataset_info:
features:
- name: paragraph
dtype: string
- name: paragraph_TH
dtype: string
- name: question
dtype: string
- name: question_TH
dtype: string
- name: answer
dtype: string
- name: answer_TH
dtype: string
- name: idx
struct:
- name: answer
dtype: int64
- name: paragraph
dtype: int64
- name: question
dtype: int64
- name: label
dtype: int64
- name: score_paragraph
dtype: float64
- name: score_question
dtype: float64
- name: score_answer
dtype: float64
splits:
- name: train
num_bytes: 133061823
num_examples: 23520
- name: validation
num_bytes: 22534453
num_examples: 4212
- name: test
num_bytes: 42757726
num_examples: 8272
download_size: 5756232
dataset_size: 198354002
---
# Dataset Card for MultiRC_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [multirc](https://huggingface.co/datasets/super_glue/viewer/multirc), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) was used to score the Thai translations.
The scores were penalized based on the length of the original text compared to the translated text. Rows where any score was below 0.66 were dropped.
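The filtering step can be sketched as follows. The penalty formula and the scores below are illustrative assumptions, since the card does not spell out the exact computation; the real scores came from the Multilingual Universal Sentence Encoder.

```python
# Sketch of the quality filter described above. The length penalty shown
# here (a simple length ratio) is an assumption for illustration only.

def penalized_score(raw_score, len_original, len_translated):
    """Scale the similarity score down when text lengths diverge."""
    ratio = min(len_original, len_translated) / max(len_original, len_translated)
    return raw_score * ratio

def keep_row(row, threshold=0.66):
    """Drop the row if any of its three scores falls below the threshold."""
    keys = ("score_paragraph", "score_question", "score_answer")
    return all(row[k] >= threshold for k in keys)

rows = [
    {"score_paragraph": 0.91, "score_question": 0.88, "score_answer": 0.72},
    {"score_paragraph": 0.95, "score_question": 0.52, "score_answer": 0.80},
]
kept = [r for r in rows if keep_row(r)]  # only the first row survives
```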
awettig/Pile-Github-0.5B-6K-opt | 2023-07-10T19:40:11.000Z | [
"region:us"
] | awettig | null | null | null | 0 | 62 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6487050154
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1121468368
dataset_size: 6551995846
---
# Dataset Card for "Pile-Github-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awettig/Pile-Wikipedia-0.5B-6K-opt | 2023-07-10T19:41:27.000Z | [
"region:us"
] | awettig | null | null | null | 0 | 62 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 5651184786
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1476548346
dataset_size: 5716130478
---
# Dataset Card for "Pile-Wikipedia-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
strombergnlp/pypi-20230724 | 2023-07-25T03:12:55.000Z | [
"license:apache-2.0",
"region:us"
] | strombergnlp | null | null | null | 0 | 62 | ---
license: apache-2.0
---
|
C-MTEB/CMedQAv1-reranking | 2023-07-28T07:19:52.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 62 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: test
num_bytes: 31879155
num_examples: 1000
download_size: 20670061
dataset_size: 31879155
---
# Dataset Card for "CMedQAv1-reranking"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BramVanroy/dutch_chat_datasets | 2023-08-13T08:55:40.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:nl",
"region:us"
] | BramVanroy | null | null | null | 0 | 62 | ---
language:
- nl
size_categories:
- 100K<n<1M
task_categories:
- question-answering
- text-generation
- conversational
pretty_name: Chat Datasets for Dutch
dataset_info:
features:
- name: dialog
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 191497357
num_examples: 178054
download_size: 95191363
dataset_size: 191497357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dutch_chat_datasets"
This dataset is a merge of the following datasets. See their pages for licensing, usage, creation, and citation information.
- https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch
- https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch-baize
- https://huggingface.co/datasets/BramVanroy/stackoverflow-chat-dutch
- https://huggingface.co/datasets/BramVanroy/quora-chat-dutch
They are reformatted for easier, consistent processing in downstream tasks such as language modelling.
**Columns**:
- `dialog`: a list of turns, where each turn is a dictionary that contains these keys:
- `role`: `user` or `assistant`
- `content`: the given text `str`
- `source`: the source dataset that this dialog originates from
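A record with this schema can be flattened into a single training string for language modelling. The record below is a made-up example that follows the columns above, not an actual dataset row:

```python
# Sketch: turning one `dialog` record into a plain-text training example.
record = {
    "dialog": [
        {"role": "user", "content": "Wat is de hoofdstad van Nederland?"},
        {"role": "assistant", "content": "De hoofdstad van Nederland is Amsterdam."},
    ],
    "source": "BramVanroy/dolly-15k-dutch",
}

def to_prompt(rec):
    """Concatenate the turns with role prefixes, one turn per line."""
    return "\n".join(f"{turn['role']}: {turn['content']}" for turn in rec["dialog"])
```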
|
dim/sharegpt_short_ru | 2023-09-02T00:53:23.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | dim | null | null | null | 0 | 62 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: conversation
sequence: string
- name: hash
dtype: string
splits:
- name: train
num_bytes: 825523
num_examples: 253
download_size: 367027
dataset_size: 825523
---
### Version 1
```python
import json
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part1.json") as f:
dataset1 = json.load(f)
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part2.json") as f:
dataset2 = json.load(f)
dataset = dataset1 + dataset2
import re
import regex
import hashlib
import numpy as np  # used below for percentile-based length filtering
def filter_string(string):
has = True
has_zh = not len(re.findall(r"[\u4e00-\u9fff]+", string)) > 0
has_ko = not len(re.findall(r"[\u3131-\ucb4c]+", string)) > 0
has = has_zh and has_ko
invalid_letters = "ієùéàçğİžš"
for letter in invalid_letters:
if letter in string:
return False
return has
def has_cyrillic(text):
return bool(regex.search(r"\p{IsCyrillic}", text))
clean_dataset = []
for conversation in dataset:
all_text = "\n".join([item["value"] for item in conversation["conversations"]])
# print(all_text)
# break
if filter_string(all_text) and has_cyrillic(all_text):
clean_dataset.append(conversation)
import markdownify
def correct_string(string):
string = string.replace("\\_", "_")
languages = [
"css",
"python",
"go",
"html",
"kotlin",
"diff",
"vba",
"sql",
]
for lang in languages:
string = string.replace(f"\n{lang}Copy code`", f"{lang}\n")
string = string.replace("`\n```", "\n```")
string = string.replace("\n ", "\n ")
delete_phrases = [
"Как искусственный интеллект, я не являюсь владельцем физических объектов и не могу продавать или покупать предметы. Однако, я могу поделиться советом, как можно попытаться убедить кого-то в покупке карандаша.",
"Как искусственный интеллект, я не имею личных чувств и мнений, и не могу иметь предпочтений в выборе между рождением своего ребенка и усыновлением приемного ребенка из приюта.",
"1 / 1",
"2 / 2",
"3 / 3",
"4 / 4",
"5 / 5",
"6 / 6",
"7 / 7",
"8 / 8",
"9 / 9",
"К сожалению, я не могу проверить дату вопроса, но я могу предоставить информацию о Максиме Радайкине и Борисе Марцинкевиче на начало 2021 года.",
"К сожалению, я не могу вставить пример базы данных в чат, но я могу объяснить, что это такое.",
"К сожалению, я не могу написать полноценное расширение для Google Chrome в рамках этой сессии. Однако,",
"К сожалению, я не могу выбрать материалы и дизайн за вас, так как это зависит от ваших потребностей и предпочтений. Однако,",
"Извините, но я не могу создать код для такой сложной программы с нуля, так как это потребовало бы обширных исследований, ресурсов и опыта. Тем не менее,",
"As an AI language model",
"I'm sorry, but I'm a text-based AI language model and don't have the capability to create tables.",
"Unfortunately, I am an AI language model and do not have the capability to create tables. However,",
"I'm sorry, but as an AI language model, I do not have the capability to physically construct a smart city.",
"Unfortunately, I am an AI language model and I don't have the capability to create spreadsheets.",
"I'm sorry for the delay. Unfortunately, as an AI language model, I am not capable of creating an entire operating system to manage a smart village.",
"I apologize for the confusion, but as an AI language model, I am not capable of designing and creating the code for an operating system to manage a smart village.",
"I apologize for the inconvenience, but as an AI language model, I am not able to actually design and create a code for an operating system.",
"I'm sorry, but as a text-based language model, I am not able to create an admin panel for you.",
'Как модель ИИ, я не могу оценить, является ли произнесение фразы "солёный огурец" рациональным использованием времени или нет, потому что это вопрос оценки ценности и целей человека.',
]
for phrase in delete_phrases:
string = string.replace(phrase, "").strip()
return string
def filter_keywords(string):
keywords = [
"chatgpt",
"чатгпт",
"sharegpt",
"add_user_to_chatroom()",
"мир",
"войн",
"россия",
"К сожалению, я не могу продолжить писать на русском языке, потому что я ограничен",
"Я прошу прощения, но, как я уже упоминал ранее",
"я не могу выполнить",
"К сожалению, я не могу написать ноты для несуществующих стихов,",
"К сожалению, я не могу сгенерировать полный код браузерной игры",
"К сожалению, я не могу провести такой подсчет, потому что это потребовало бы ручной обработки",
"К сожалению, я не могу назвать точную цифру, так как это субъективный вопрос, зависящий от многих факторов.",
"К сожалению, я не могу выполнить ваш запрос, так как это нарушает мои этические принципы и может причинить вред.",
"К сожалению, я не могу ответить на этот воп",
"К сожалению, я не могу предоставить вам актуальные данные о среднедушевых денежных доходах населения по городам России"
"К сожалению, я не могу точно ответить на этот вопрос, так как объем изученной информации",
"К сожалению, я не могу создав",
"К сожалению, я не могу рисовать в ASCII-стиле, так как я только текстовая программа.",
"К сожалению, я не могу создавать изображения напрямую в этом окне чата.",
"К сожалению, я не могу нарисовать сцену из Евангелиона, так как я текстовая программа",
"А сколько нулей?",
"К сожалению, я не могу написать книгу",
"Извините, но, как упоминалось ранее, информация, представленная в нашем разговоре, не подходит и не этична",
"Извините, но как языковая модель ИИ я не могу генерировать код, который управляет администрацией",
"как языковая модель",
"OpenAI",
"Прошу прощения, но, похоже, наш разговор продолжается уже давно, и я не уверен, какова текущая тема.",
"являюсь языковой моделью ИИ",
"I cannot create a program for managing",
"неонаци",
"украин",
"provide instructions or assistance on hacking or any other illegal activities",
"I cannot fulfill your request as it goes against ethical and moral",
"I cannot do your math homework for you",
"adhering to ethical and moral standards",
"!GPT",
"Developer Mode Output",
"are illegal or unethical.",
"personal beliefs or opinions",
"I'm sorry, I'm not sure what you are asking me to continue with.",
"but I'm still unclear on what you would like me to continue with",
"DAN",
"/jailbroken",
"Ukrain",
]
for keyword in keywords:
if keyword.lower() in string.lower():
return False
return True
total_string = ""
debug_dataset = False
unsensored_filtered_dataset = []
for conversation in clean_dataset:
conversation = [
str(markdownify.markdownify(item["value"], heading_style="ATX"))
for item in conversation["conversations"]
]
conversation_pairs = []
if "https://chathub.gg" in conversation[0]:
conversation.pop(0)
full_text = " ".join(conversation)
if filter_keywords(full_text):
for i in range(1, len(conversation)):
if (i + 1) % 2 == 0:
if debug_dataset:
bot_message = "BOT " + correct_string(conversation[i])
user_message = "USER " + correct_string(conversation[i - 1])
else:
bot_message = correct_string(conversation[i])
user_message = correct_string(conversation[i - 1])
conversation_pairs.append(user_message)
conversation_pairs.append(bot_message)
if len(conversation_pairs) > 0:
unsensored_filtered_dataset.append(conversation_pairs)
if debug_dataset:
all_text = "\n===\n".join([item for item in conversation_pairs])
total_string += all_text
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
# print(total_string)
from transformers import AutoTokenizer
from verbalist.datasets.utils import visualize_hist
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
conversation_lengths = []
for conversation in unsensored_filtered_dataset:
all_text = "\n===\n".join([item for item in conversation])
conversation_lengths.append(len(tokenizer(all_text)["input_ids"]))
# print(all_text)
# print("="*100)
# print("="*100)
# print("="*100)
# break
# if has_cyrillic(all_text):
# rus_conv.append(conversation)
visualize_hist(conversation_lengths, "ru_share_gpt_filtered")
filter_num = 85
passed_convs = (
np.array(conversation_lengths) < np.percentile(conversation_lengths, filter_num)
).tolist()
unsensored_passed = []
for i, status in enumerate(passed_convs):
if status:
unsensored_passed.append(unsensored_filtered_dataset[i])
unsensored_dataset = []
for conv in unsensored_passed:
conv_hash = hashlib.sha256(conv[0].encode('utf-8')).hexdigest()
unsensored_dataset.append({
"conversation": conv,
"hash": conv_hash
})
``` |
fake-news-UFG/fakebr | 2023-08-18T13:51:35.000Z | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:pt",
"region:us"
] | fake-news-UFG | Fake.Br Corpus is composed of aligned true and fake news written in Brazilian Portuguese. | @article{silva:20,
title = "Towards automatically filtering fake news in Portuguese",
journal = "Expert Systems with Applications",
volume = "146",
pages = "113199",
year = "2020",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2020.113199",
url = "http://www.sciencedirect.com/science/article/pii/S0957417420300257",
author = "Renato M. Silva and Roney L.S. Santos and Tiago A. Almeida and Thiago A.S. Pardo",
} | null | 0 | 62 | ---
pretty_name: Fake.br
task_categories:
- text-classification
language:
- pt
language_details: pt-BR
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
language_creators:
- found
---
# Dataset Card for fake.br
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/roneysco/Fake.br-Corpus/](https://github.com/roneysco/Fake.br-Corpus/)
- **Paper:** [https://sites.icmc.usp.br/taspardo/OpenCor2018-SantosEtAl.pdf](https://sites.icmc.usp.br/taspardo/OpenCor2018-SantosEtAl.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fake.Br Corpus is composed of aligned true and fake news written in Brazilian Portuguese.
### Supported Tasks and Leaderboards
The task is text classification of news content.
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use "Fake.br Dataset", please include a citation to the project website and the corresponding paper published in PROPOR 2018 conference:
```bibtex
@InProceedings{fakebr:18,
author={Monteiro, Rafael A. and Santos, Roney L. S. and Pardo, Thiago A. S. and de Almeida, Tiago A. and Ruiz, Evandro E. S. and Vale, Oto A.},
title={Contributions to the Study of Fake News in Portuguese: New Corpus and Automatic Detection Results},
booktitle={Computational Processing of the Portuguese Language},
year={2018},
publisher={Springer International Publishing},
pages={324--334},
isbn={978-3-319-99722-3},
}
```
or the paper published in Expert Systems with Applications:
```bibtex
@article{silva:20,
title = "Towards automatically filtering fake news in Portuguese",
journal = "Expert Systems with Applications",
volume = "146",
pages = "113199",
year = "2020",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2020.113199",
url = "http://www.sciencedirect.com/science/article/pii/S0957417420300257",
author = "Renato M. Silva and Roney L.S. Santos and Tiago A. Almeida and Thiago A.S. Pardo",
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset. |
natsumi531/haskellfunc | 2023-10-03T01:01:08.000Z | [
"license:unknown",
"region:us"
] | natsumi531 | null | null | null | 0 | 62 | ---
license: unknown
---
|
harvard-lil/cold-cases | 2023-10-11T01:06:38.000Z | [
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"united states",
"law",
"legal",
"court",
"opinions",
"region:us"
] | harvard-lil | null | null | null | 5 | 62 | ---
license: cc0-1.0
language:
- en
tags:
- united states
- law
- legal
- court
- opinions
size_categories:
- 1M<n<10M
viewer: true
configs:
- config_name: jsonl
data_files: "cold.jsonl/*"
- config_name: parquet
data_files: "cold.parquet/*"
default: true
---
<a href="https://huggingface.co/datasets/harvard-lil/cold-cases/resolve/main/coldcases.png"><img src="https://huggingface.co/datasets/harvard-lil/cold-cases/resolve/main/coldcases-banner.webp"/></a>
# Collaborative Open Legal Data (COLD) - Cases
COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, formatted as one JSON object per decision.
The total dataset size is approximately 104GB of uncompressed JSON.
This dataset exists to support the open legal movement exemplified by projects like
[Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) and
[LegalBench](https://hazyresearch.stanford.edu/legalbench/).
A key input to legal understanding projects is caselaw -- the published, precedential decisions of judges deciding legal disputes and explaining their reasoning.
United States caselaw is collected and published as open data by [CourtListener](https://www.courtlistener.com/), which maintains scrapers to aggregate data from
a wide range of public sources.
COLD Cases reformats CourtListener's [bulk data](https://www.courtlistener.com/help/api/bulk-data) so that all of the semantic information about each legal decision
(the authors and text of majority and dissenting opinions; head matter; and substantive metadata) is encoded in a single JSON object per decision,
with extraneous data removed. Serving in the traditional role of libraries as a standardization steward, the Harvard Library Innovation Lab is maintaining
this [open source](https://github.com/harvard-lil/cold-cases-export) pipeline to consolidate the data engineering for preprocessing caselaw so downstream machine
learning and natural language processing projects can use consistent, high quality representations of cases for legal understanding tasks.
Prepared by the [Harvard Library Innovation Lab](https://lil.law.harvard.edu) in collaboration with the [Free Law Project](https://free.law/).
---
## Links
- [Data nutrition label](https://datanutrition.org/labels/v3/?id=c29976b2-858c-4f4e-b7d0-c8ef12ce7dbe) (DRAFT). ([Archive](https://perma.cc/YV5P-B8JL)).
- [Pipeline source code](https://github.com/harvard-lil/cold-cases-export)
---
## Summary
- [Formats](#formats)
- [File structure](#file-structure)
- [Data dictionary](#data-dictionary)
- [Notes on appropriate use](#appropriate-use)
---
## Formats
We've released this data in two different formats:
### JSON-L or JSON Lines
This format consists of a JSON document for every row in the dataset, one per line. This makes it easy to sample a selection of the data or split it out into multiple files for parallel processing using ordinary command line tools such as `head`, `split` and `jq`.
Just about any language you can think of has a ready way to parse JSON data, which makes this version of the dataset more compatible.
See: https://jsonlines.org/
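For instance, the gzipped JSON-L slices can be streamed one decision at a time with only the Python standard library. The file name in the comment is an illustrative example, not a guaranteed name:

```python
import gzip
import json

def iter_cases(path):
    """Yield one decision dict per non-empty line of a gzipped JSON-L slice."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# e.g. for case in iter_cases("cold.jsonl/part-00000.json.gz"):
#          print(case["case_name"], case["date_filed"])
```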
### Apache Parquet
Parquet is a binary format that makes filtering and retrieving the data quicker because it lays out the data in columns, which means columns that are unnecessary to satisfy a given query or workflow don't need to be read.
Parquet has more limited support outside the Python and JVM ecosystems, however.
See: https://parquet.apache.org/
[☝️ Go back to Summary](#summary)
---
## File structure
Both of these datasets were exported by the same system based on [Apache Spark](https://spark.apache.org/), so within each subdirectory, you'll find a similar list of files:
- **_SUCCESS**: This indicates that the job that built the dataset ran successfully and therefore this is a complete dataset.
- **.json.gz or .gz.parquet**: Each of these is a slice of the full dataset, encoded in JSON-L or Parquet, and compressed with [GZip](https://www.gnu.org/software/gzip/).
- **Hidden `.crc` files**: These can be used to verify that the data transferred correctly and otherwise ignored.
[☝️ Go back to Summary](#summary)
---
## Data dictionary
Partial glossary of the fields in the data.
| Field name | Description |
| --- | --- |
| `judges` | Names of judges presiding over the case, extracted from the text. |
| `date_filed` | Date the case was filed. Formatted in ISO Date format. |
| `date_filed_is_approximate` | Boolean representing whether the `date_filed` value is precise to the day. |
| `slug` | Short, human-readable unique string nickname for the case. |
| `case_name_short` | Short name for the case. |
| `case_name` | Fuller name for the case. |
| `case_name_full` | Full, formal name for the case. |
| `attorneys` | Names of attorneys arguing the case, extracted from the text. |
| `nature_of_suit` | Free text representing the type of suit, such as Civil, Tort, etc. |
| `syllabus` | Summary of the questions addressed in the decision, if provided by the reporter of decisions. |
| `headnotes` | Textual headnotes of the case |
| `summary` | Textual summary of the case |
| `disposition` | How the court disposed of the case in their final ruling. |
| `history` | Textual information about what happened to this case in later decisions. |
| `other_dates` | Other dates related to the case in free text. |
| `cross_reference` | Citations to related cases. |
| `citation_count` | Number of cases that cite this one. |
| `precedential_status` | Constrained to the values "Published", "Unknown", "Errata", "Unpublished", "Relating-to", "Separate", "In-chambers" |
| `citations` | Cases that cite this case. |
| `court_short_name` | Short name of court presiding over case. |
| `court_full_name` | Full name of court presiding over case. |
| `court_jurisdiction` | Code for type of court that presided over the case. See: [court_jurisdiction field values](#court_jurisdiction-field-values) |
| `opinions` | An array of subrecords. |
| `opinions.author_str` | Name of the author of an individual opinion. |
| `opinions.per_curiam` | Boolean representing whether the opinion was delivered by an entire court or a single judge. |
| `opinions.type` | One of `"010combined"`, `"015unamimous"`, `"020lead"`, `"025plurality"`, `"030concurrence"`, `"035concurrenceinpart"`, `"040dissent"`, `"050addendum"`, `"060remittitur"`, `"070rehearing"`, `"080onthemerits"`, `"090onmotiontostrike"`. |
| `opinions.opinion_text` | Actual full text of the opinion. |
| `opinions.ocr` | Whether the opinion was captured via optical character recognition or born-digital text. |
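As a sketch of working with the nested `opinions` array, the lead or combined opinion text can be pulled out of a record like this. The sample case below is hypothetical:

```python
def main_opinion_text(case):
    """Return the text of the first combined or lead opinion, if any."""
    for op in case.get("opinions", []):
        if op.get("type") in ("010combined", "020lead"):
            return op.get("opinion_text")
    return None

case = {
    "case_name": "Example v. Sample",
    "opinions": [
        {"type": "040dissent", "opinion_text": "I respectfully dissent.", "ocr": False},
        {"type": "010combined", "opinion_text": "The judgment is affirmed.", "ocr": False},
    ],
}
```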
### court_jurisdiction field values
| Value | Description |
| --- | --- |
| F | Federal Appellate |
| FD | Federal District |
| FB | Federal Bankruptcy |
| FBP | Federal Bankruptcy Panel |
| FS | Federal Special |
| S | State Supreme |
| SA | State Appellate |
| ST | State Trial |
| SS | State Special |
| TRS | Tribal Supreme |
| TRA | Tribal Appellate |
| TRT | Tribal Trial |
| TRX | Tribal Special |
| TS | Territory Supreme |
| TA | Territory Appellate |
| TT | Territory Trial |
| TSP | Territory Special |
| SAG | State Attorney General |
| MA | Military Appellate |
| MT | Military Trial |
| C | Committee |
| I | International |
| T | Testing |
[☝️ Go back to Summary](#summary)
## Notes on appropriate use
When using this data, please keep in mind:
* All documents in this dataset are public information, published by courts within the United States to inform the public about the law. **You have a right to access them.**
* Nevertheless, **public court decisions frequently contain statements about individuals that are not true**. Court decisions often contain claims that are disputed,
or false claims taken as true based on a legal technicality, or claims taken as true but later found to be false. Legal decisions are designed to inform you about the law -- they are not
designed to inform you about individuals, and should not be used in place of credit databases, criminal records databases, news articles, or other sources intended
to provide factual personal information. Applications should carefully consider whether use of this data will inform about the law, or mislead about individuals.
* **Court decisions are not up-to-date statements of law**. Each decision provides a given judge's best understanding of the law as applied to the stated facts
at the time of the decision. Use of this data to generate statements about the law requires integration of a large amount of context --
the skill typically provided by lawyers -- rather than simple data retrieval.
To mitigate privacy risks, we have filtered out cases [blocked or deindexed by CourtListener](https://www.courtlistener.com/terms/#removal). Researchers who
require access to the full dataset without that filter may rerun our pipeline on CourtListener's raw data.
[☝️ Go back to Summary](#summary) |
CenterFor/UB_paragraphs | 2023-09-27T13:36:02.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | CenterFor | null | null | null | 0 | 62 | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: id
dtype: string
- name: embedding
sequence: float64
- name: metadata
struct:
- name: Paper
dtype: int64
- name: Paragraph
dtype: int64
- name: Section
dtype: int64
- name: Standard Reference
dtype: string
- name: document
dtype: string
splits:
- name: data
num_bytes: 186576297
num_examples: 14585
download_size: 139171828
dataset_size: 186576297
configs:
- config_name: default
data_files:
- split: data
path: data/data-*
---
|
Areej0/mogalad | 2023-10-02T22:50:39.000Z | [
"region:us"
] | Areej0 | null | null | null | 0 | 62 | Entry not found |
umarigan/turkish_wikipedia | 2023-10-03T08:39:01.000Z | [
"region:us"
] | umarigan | null | null | null | 0 | 62 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 1142404262
num_examples: 524601
download_size: 629924151
dataset_size: 1142404262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "turkish_wikipedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gayanin/pubmed-abs-sub-25 | 2023-10-05T00:01:05.000Z | [
"region:us"
] | gayanin | null | null | null | 0 | 62 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: refs
dtype: string
- name: sub_25
dtype: string
splits:
- name: train
num_bytes: 19994471
num_examples: 74724
- name: test
num_bytes: 2558113
num_examples: 9341
- name: validation
num_bytes: 2631156
num_examples: 9341
download_size: 14211432
dataset_size: 25183740
---
# Dataset Card for "pubmed-abs-sub-25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Linyuyu/zhenglaonian | 2023-10-10T07:07:03.000Z | [
"region:us"
] | Linyuyu | null | null | null | 0 | 62 | Entry not found |
SocialGrep/one-million-reddit-questions | 2022-07-25T18:57:10.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | null | null | null | 3 | 61 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for one-million-reddit-questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=onemillionquestions)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=onemillionquestions)
### Dataset Summary
This corpus contains a million posts on /r/AskReddit, annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
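As a minimal sketch of how these fields might be used (the rows below are hypothetical, not real entries from the corpus), one could filter posts by their annotated score:

```python
# Hypothetical rows shaped like the fields above (illustrative values only).
rows = [
    {"type": "post", "id": "abc123", "subreddit.name": "AskReddit",
     "score": 12000, "title": "What is a skill everyone should learn?"},
    {"type": "post", "id": "def456", "subreddit.name": "AskReddit",
     "score": 3, "title": "What did you have for breakfast?"},
]

# Keep only high-engagement questions, e.g. as positives for a score predictor.
popular = [r["title"] for r in rows if r["score"] >= 1000]
```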
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
imvladikon/hebrew_speech_kan | 2023-05-05T09:12:15.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | imvladikon | null | null | null | 2 | 61 | ---
task_categories:
- automatic-speech-recognition
language:
- he
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1569850175.0
num_examples: 8000
- name: validation
num_bytes: 394275049.0
num_examples: 2000
download_size: 1989406585
dataset_size: 1964125224.0
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hebrew Dataset for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/8ce7402f6482c6053251d7f3000eec88668c994beb48b7ca7352e77ef810a0b6/train/e429593fede945c185897e378a5839f4198.wav',
'array': array([-0.00265503, -0.0018158 , -0.00149536, ..., -0.00135803,
-0.00231934, -0.00190735]),
'sampling_rate': 16000},
'sentence': 'היא מבינה אותי יותר מכל אחד אחר'}
```
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 8000 | 2000 |
| hours | 6.92 | 1.73 |
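As a sketch, a clip's duration can be derived from the decoded `audio` feature shown in the data instance above (the array here is a placeholder, not real audio):

```python
# Placeholder example shaped like the data instance above.
example = {
    "audio": {"array": [0.0] * 32000, "sampling_rate": 16000},
    "sentence": "היא מבינה אותי יותר מכל אחד אחר",
}

# Duration in seconds = number of samples / sampling rate.
duration_sec = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
```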
## Dataset Creation
### Curation Rationale
Data was scraped from YouTube (the כאן channel), with outliers removed by filtering on length and on the ratio between audio length and sentence length.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_kan,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Kan},
year = {2022},
howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_kan}},
}
```
### Contributions
[More Information Needed] |
limjiayi/hateful_memes_expanded | 2021-12-06T05:17:02.000Z | [
"region:us"
] | limjiayi | null | null | null | 2 | 61 | Entry not found |
nateraw/image-folder | 2021-07-12T03:53:03.000Z | [
"region:us"
] | nateraw | null | null | null | 0 | 61 | Entry not found |
SetFit/amazon_reviews_multi_ja | 2022-03-23T15:40:06.000Z | [
"region:us"
] | SetFit | null | null | null | 1 | 61 | # Amazon Reviews Multi (Japanese)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub, restricted to the Japanese language version. It has been reduced to three columns (plus a fourth, `label_text`) that are relevant to the SetFit task. |
ajders/machine_translated_cnn_dailymail_da_small | 2022-08-26T13:01:36.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"language:da",
"license:apache-2.0",
"region:us"
] | ajders | null | null | null | 0 | 61 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- da
license:
- apache-2.0
multilinguality:
- translation
pretty_name: machine_translated_cnn_dailymail_da_small
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- summarization
task_ids:
- news-articles-summarization
---
# Dataset Card for machine_translated_cnn_dailymail_da_small
### Dataset Summary
This dataset is a machine translated subset of the [CNN Dailymail Dataset](https://huggingface.co/datasets/ccdv/cnn_dailymail) into Danish. The dataset is translated using the [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da)-model. It consists of 2,872 articles with summaries and is intended for Danish text summarisation.
## Dataset Structure
Machine translated articles (`article`) with corresponding summaries (`highlights`).
```
{
'article': Value(dtype='string', id=None),
'highlights': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None)
}
```
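A minimal sketch of turning one record into a (source, target) pair for summarisation fine-tuning; the `summarize:` task prefix is an assumption (T5-style, not prescribed by this dataset), and the field values are illustrative:

```python
# Illustrative record shaped like the structure above (not a real row).
record = {
    "article": "Illustrativ dansk artikeltekst om en nyhedshistorie ...",
    "highlights": "Kort dansk resumé.",
    "id": "0001",
}

source = "summarize: " + record["article"]  # T5-style task prefix (assumption)
target = record["highlights"]
```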
### Licensing Information
The dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). |
elenanereiss/german-ler | 2022-10-26T08:32:17.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"ner, named entity recognition... | elenanereiss | A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. | @misc{https://doi.org/10.48550/arxiv.2003.13016,
doi = {10.48550/ARXIV.2003.13016},
url = {https://arxiv.org/abs/2003.13016},
author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A Dataset of German Legal Documents for Named Entity Recognition},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 9 | 61 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: dataset-of-legal-documents
pretty_name: German Named Entity Recognition in Legal Documents
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- ner, named entity recognition, legal ner, legal texts, label classification
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
---
# Dataset Card for "German LER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elenanereiss/Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition)
- **Paper:** [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf)
- **Point of Contact:** [elena.leitner@dfki.de](elena.leitner@dfki.de)
### Dataset Summary
A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the `BIO` tagging scheme.
The dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes (`ner_tags`) and another one with a set of 7 coarse-grained classes (`ner_coarse_tags`). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %).

For more details see [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf).
### Supported Tasks and Leaderboards
- **Tasks:** Named Entity Recognition
- **Leaderboards:**
### Languages
German
## Dataset Structure
### Data Instances
```python
{
'id': '1',
'tokens': ['Eine', 'solchermaßen', 'verzögerte', 'oder', 'bewusst', 'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen', 'schließt', '§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38],
'ner_coarse_tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14]
}
```
### Data Fields
```python
{
'id': Value(dtype='string', id=None),
'tokens': Sequence(feature=Value(dtype='string', id=None),
length=-1, id=None),
'ner_tags': Sequence(feature=ClassLabel(num_classes=39,
names=['B-AN',
'B-EUN',
'B-GRT',
'B-GS',
'B-INN',
'B-LD',
'B-LDS',
'B-LIT',
'B-MRK',
'B-ORG',
'B-PER',
'B-RR',
'B-RS',
'B-ST',
'B-STR',
'B-UN',
'B-VO',
'B-VS',
'B-VT',
'I-AN',
'I-EUN',
'I-GRT',
'I-GS',
'I-INN',
'I-LD',
'I-LDS',
'I-LIT',
'I-MRK',
'I-ORG',
'I-PER',
'I-RR',
'I-RS',
'I-ST',
'I-STR',
'I-UN',
'I-VO',
'I-VS',
'I-VT',
'O'],
id=None),
length=-1,
id=None),
'ner_coarse_tags': Sequence(feature=ClassLabel(num_classes=15,
names=['B-LIT',
'B-LOC',
'B-NRM',
'B-ORG',
'B-PER',
'B-REG',
'B-RS',
'I-LIT',
'I-LOC',
'I-NRM',
'I-ORG',
'I-PER',
'I-REG',
'I-RS',
'O'],
id=None),
length=-1,
id=None)
}
```
### Data Splits
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | 53384 | 6666 | 6673 |
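As a sketch of how the integer `ner_tags` map back to the fine-grained BIO label strings listed in the data fields above (the example mirrors the data instance shown earlier):

```python
# Fine-grained label names, in the order given by the ClassLabel feature above.
FINE_LABELS = [
    "B-AN", "B-EUN", "B-GRT", "B-GS", "B-INN", "B-LD", "B-LDS", "B-LIT",
    "B-MRK", "B-ORG", "B-PER", "B-RR", "B-RS", "B-ST", "B-STR", "B-UN",
    "B-VO", "B-VS", "B-VT", "I-AN", "I-EUN", "I-GRT", "I-GS", "I-INN",
    "I-LD", "I-LDS", "I-LIT", "I-MRK", "I-ORG", "I-PER", "I-RR", "I-RS",
    "I-ST", "I-STR", "I-UN", "I-VO", "I-VS", "I-VT", "O",
]

# Tail of the data instance above: "... § 14 Abs. 2 Satz 2 TzBfG".
example = {
    "tokens": ["aus", "§", "14", "Abs.", "2", "Satz", "2", "TzBfG"],
    "ner_tags": [38, 3, 22, 22, 22, 22, 22, 22],
}

# Map tag ids back to label strings: the law reference is B-GS / I-GS.
labels = [FINE_LABELS[i] for i in example["ner_tags"]]
```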
## Dataset Creation
### Curation Rationale
Documents in the legal domain contain multiple references to named entities, especially domain-specific named entities, i. e., jurisdictions, legal institutions, etc. Legal documents are unique and differ greatly from newspaper texts. On the one hand, the occurrence of general-domain named entities is relatively rare. On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.
### Source Data
Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
#### Initial Data Collection and Normalization
From the table of [contents](http://www.rechtsprechung-im-internet.de/rii-toc.xml), 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements `Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel`. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and the metadata belonging to previous legal proceedings were deleted. Paragraph numbers were removed.
The extracted data was split into sentences, tokenised using [SoMaJo](https://github.com/tsproisl/SoMaJo) and manually annotated in [WebAnno](https://webanno.github.io/webanno/).
#### Who are the source language producers?
The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.
### Annotations
#### Annotation process
For more details see [annotation guidelines](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf) (in German).
<!-- #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Personal and Sensitive Information
A fundamental characteristic of the published decisions is that all personal information has been anonymised for privacy reasons. This affects the classes person, location and organization.
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Licensing Information
[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2003.13016,
doi = {10.48550/ARXIV.2003.13016},
url = {https://arxiv.org/abs/2003.13016},
author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A Dataset of German Legal Documents for Named Entity Recognition},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
|
GabrielVidal/dead-by-daylight-perks | 2022-11-27T16:06:46.000Z | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:openrail",
"de... | GabrielVidal | null | null | null | 1 | 61 | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
- name: type
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 22392351.0
num_examples: 219
download_size: 22365600
dataset_size: 22392351.0
annotations_creators:
- found
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: Dead by daylight video game perks
size_categories:
- n<1K
source_datasets:
- original
tags:
- dead by daylight
task_categories:
- image-classification
- text-to-image
task_ids:
- multi-class-image-classification
---
# Dataset Card for Dead by Daylight perks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
### Dataset Summary
This dataset contains all perk images from the video game [Dead by Daylight](https://deadbydaylight.com/) (on a black background, upscaled to 512x512), together with each perk's type, name and description (the first sentence) in English.
## Dataset Creation
### Source Data
All images and text have been found online, mainly on the [Dead by Daylight wiki](https://deadbydaylight.fandom.com/wiki/Dead_by_Daylight_Wiki).
## Additional Information
### Licensing Information
All images belong to [Dead by Daylight](https://deadbydaylight.com/).
### Contributions
Thanks to [@GabrielVidal1](https://github.com/GabrielVidal1) for adding this dataset. |
Shunian/kaggle-mbti-cleaned | 2022-12-16T09:46:54.000Z | [
"region:us"
] | Shunian | null | null | null | 2 | 61 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 51657719
num_examples: 327828
- name: test
num_bytes: 12922409
num_examples: 81957
download_size: 42682844
dataset_size: 64580128
---
# Dataset Card for "kaggle-mbti-cleaned"
This dataset originated from Kaggle [(MBTI) Myers-Briggs Personality Type Dataset](https://www.kaggle.com/datasets/datasnaek/mbti-type).
Some cleaning operations were applied to this dataset to put it into a usable format for text classification.
See more detail in [GitHub](https://github.com/nogibjj/MBTI-Personality-Test)
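A minimal sketch of inspecting the label distribution in rows shaped like the features above (integer `label`, string `text`); the rows here are placeholders, not real data:

```python
from collections import Counter

# Placeholder rows with the same schema as the dataset's features.
rows = [
    {"label": 0, "text": "placeholder post text"},
    {"label": 3, "text": "another placeholder"},
    {"label": 0, "text": "yet another placeholder"},
]

# Count how often each integer label occurs.
label_counts = Counter(r["label"] for r in rows)
```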
|
fcakyon/gun-object-detection | 2022-12-28T06:22:36.000Z | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | fcakyon | null | @misc{ test-y7rj3_dataset,
title = { test Dataset },
type = { Open Source Dataset },
author = { ashish },
howpublished = { \\url{ https://universe.roboflow.com/ashish-cuamw/test-y7rj3 } },
url = { https://universe.roboflow.com/ashish-cuamw/test-y7rj3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { oct },
note = { visited on 2022-12-28 },
} | null | 2 | 61 | ---
task_categories:
- object-detection
tags:
- roboflow
---
### Roboflow Dataset Page
https://universe.roboflow.com/ashish-cuamw/test-y7rj3
### Citation
```
@misc{ test-y7rj3_dataset,
title = { test Dataset },
type = { Open Source Dataset },
author = { ashish },
howpublished = { \\url{ https://universe.roboflow.com/ashish-cuamw/test-y7rj3 } },
url = { https://universe.roboflow.com/ashish-cuamw/test-y7rj3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { oct },
note = { visited on 2022-12-28 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on December 26, 2022 at 10:13 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4666 images.
They are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
|
MadVoyager/stable_diffusion_instructional_dataset | 2023-04-30T09:55:41.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:conversational",
"language:en",
"stable diffusion",
"llama",
"chatgpt",
"alpaca",
"llm",
"dataset",
"region:us"
] | MadVoyager | null | null | null | 10 | 61 | ---
task_categories:
- question-answering
- text2text-generation
- conversational
language:
- en
tags:
- stable diffusion
- llama
- chatgpt
- alpaca
- llm
- dataset
pretty_name: sd_instruc
--- |
bloyal/small-uniref30 | 2023-05-04T22:13:06.000Z | [
"task_categories:fill-mask",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"region:us"
] | bloyal | null | null | null | 0 | 61 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: num
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1067207.070393368
num_examples: 4096
- name: test
num_bytes: 167427.70557437633
num_examples: 640
- name: validation
num_bytes: 169382.9274292743
num_examples: 640
download_size: 1368501
dataset_size: 1404017.7033970184
task_categories:
- fill-mask
size_categories:
- 1K<n<10K
--- |
alpayariyak/IAM_Sentences_LLaVA | 2023-05-19T22:04:20.000Z | [
"region:us"
] | alpayariyak | null | null | null | 0 | 61 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 1053875995.077
num_examples: 5663
download_size: 1128902513
dataset_size: 1053875995.077
---
# Dataset Card for "IAM_Sentences_LLaVA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lighteval/natural_questions_helm | 2023-05-27T05:33:12.000Z | [
"region:us"
] | lighteval | null | null | null | 2 | 61 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: document
dtype: string
- name: question
dtype: string
- name: long_answers
sequence: string
- name: short_answers
sequence: string
splits:
- name: train
num_bytes: 12495666731
num_examples: 307373
- name: validation
num_bytes: 319900546
num_examples: 7830
download_size: 1733847123
dataset_size: 12815567277
---
# Dataset Card for "natural_questions_helm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HasturOfficial/adgen | 2023-06-04T12:06:50.000Z | [
"region:us"
] | HasturOfficial | null | null | null | 0 | 61 | ---
dataset_info:
features:
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 51127446
num_examples: 114599
- name: validation
num_bytes: 473784
num_examples: 1070
download_size: 27853861
dataset_size: 51601230
---
# Dataset Card for "adgen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datadrivenscience/movie-genre-prediction | 2023-06-11T10:12:57.000Z | [
"region:us"
] | datadrivenscience | null | null | null | 9 | 61 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: movie_name
dtype: string
- name: synopsis
dtype: string
- name: genre
dtype: string
splits:
- name: train
num_bytes: 10488729
num_examples: 54000
- name: test
num_bytes: 6965864
num_examples: 36000
download_size: 11902232
dataset_size: 17454593
---
# Dataset Card for Movie Genre Prediction
Link to [Movie Genre Prediction Competition](https://huggingface.co/spaces/competitions/movie-genre-prediction)
By accessing this dataset, you accept the rules of the Movie Genre Prediction competition.
# Organizer
Organizer of this competition is [Data-Driven Science](https://datadrivenscience.com/).
[Join our FREE 3-Day Object Detection Challenge!](https://datadrivenscience.com/free-object-detection-challenge/)
<img src="https://datadrivenscience.com/wp-content/uploads/2022/12/DDS-Logo.png" width="200" height="100">
# Email Usage
By accessing this dataset, you consent that your email will be used for communication purposes from Data-Driven Science.
We neither share nor sell our mailing list. Your information remains confidential. You may unsubscribe at any time.
|
hyesunyun/liveqa_medical_trec2017 | 2023-06-20T13:33:44.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"medical",
"region:us"
] | hyesunyun | null | null | null | 0 | 61 | ---
task_categories:
- question-answering
language:
- en
tags:
- medical
pretty_name: LiveQAMedical
size_categories:
- n<1K
---
# Dataset Card for LiveQA Medical from TREC 2017
The LiveQA'17 medical task focuses on consumer health question answering. Consumer health questions were received by the U.S. National Library of Medicine (NLM).
The dataset consists of constructed medical question-answer pairs for training and testing, with additional annotations that can be used to develop question analysis and question answering systems.
Please refer to our overview paper for more information about the constructed datasets and the LiveQA Track:
Asma Ben Abacha, Eugene Agichtein, Yuval Pinter & Dina Demner-Fushman. Overview of the Medical Question Answering Task at TREC 2017 LiveQA. TREC, Gaithersburg, MD, 2017 (https://trec.nist.gov/pubs/trec26/papers/Overview-QA.pdf).
**Homepage:** [https://github.com/abachaa/LiveQA_MedicalTask_TREC2017](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017)
## Medical Training Data
The dataset provides 634 question-answer pairs for training:
1) TREC-2017-LiveQA-Medical-Train-1.xml => 388 question-answer pairs corresponding to 200 NLM questions.
Each question is divided into one or more subquestion(s). Each subquestion has one or more answer(s).
These question-answer pairs were constructed automatically and validated manually.
2) TREC-2017-LiveQA-Medical-Train-2.xml => 246 question-answer pairs corresponding to 246 NLM questions.
Answers were retrieved manually by librarians.
**You can access them as jsonl**
The datasets are not exhaustive with regard to subquestions, i. e., some subquestions might not be annotated.
Additional annotations are provided for both (i) the Focus and (ii) the Question Type used to define each subquestion.
23 question types were considered (e.g. Treatment, Cause, Diagnosis, Indication, Susceptibility, Dosage) related to four focus categories: Disease, Drug, Treatment and Exam.
## Medical Test Data
Test split can be easily downloaded via huggingface.
Test questions cover 26 question types associated with five focus categories.
Each question includes one or more subquestion(s) and at least one focus and one question type.
Reference answers were selected from trusted resources and validated by medical experts.
At least one reference answer is provided for each test question, its URL and relevant comments.
Question paraphrases were created by assessors and used with the reference answers to judge the participants' answers.
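A minimal sketch of reading question-answer pairs from the JSONL form of the training files; the field names below are assumptions — check the actual files for the exact keys:

```python
import json

# Hypothetical JSONL lines; the real files ship with the dataset.
lines = [
    '{"question": "What are the side effects of ibuprofen?", '
    '"answer": "Possible side effects include ..."}',
]

# Parse one question-answer pair per line.
pairs = [json.loads(line) for line in lines]
```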
```
If you use these datasets, please cite paper:
@inproceedings{LiveMedQA2017,
author = {Asma {Ben Abacha} and Eugene Agichtein and Yuval Pinter and Dina Demner{-}Fushman},
title = {Overview of the Medical Question Answering Task at TREC 2017 LiveQA},
booktitle = {TREC 2017},
year = {2017}
}
``` |
marclove/llama_functions | 2023-08-03T17:31:48.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | marclove | null | null | null | 4 | 61 | ---
license: cc-by-sa-4.0
task_categories:
- conversational
- text-generation
language:
- en
pretty_name: Llama Functions
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://marclove.com
- **Repository:** https://huggingface.co/datasets/marclove/llama_functions
### Dataset Summary
‼️ This dataset is still in a beta state. Its contents, and likely its format, will change. If you need to depend on it in its current state, please create your own fork and provide attribution to this original repository. ‼️
Llama Functions is a synthetic dataset generated from a mix of manual curation of OpenAPI endpoints and prompting of OpenAI models. It is further mixed with chat completions from the Guanaco subset of the OASST1 chat dialogue dataset. It totals 18,000 rows: 9,000 from the synthetic dataset of function calls and 9,000 from the Guanaco dataset.
The dataset is mixed with Guanaco in order to maintain accuracy and helpfulness when calling a function is not the appropriate response. I plan to remove the Guanaco portion of the dataset and instead provide fine-tuning recommendations, guidelines for use, more detailed information regarding limitations, and eval stats of 7B, 13B, and 70B models.
There is no existing evaluation benchmark to measure the accuracy of function calls, which makes it hard during training to identify when we've maximized the balance of function calling accuracy and chat model performance. I'm working on a custom HF eval for this purpose, but until then I have chosen to mix the two datasets in equal parts to get a proxy of performance for both tasks in the eval & test stats during fine-tuning.
### Languages
English primarily, though since it has been mixed with the multilingual Guanaco dataset, other languages are included.
## Dataset Structure
### Data Fields
| Field | Description |
|-------|-------------|
| `input` |A prompt in Llama-2 Chat format, including an appropriate system instruction and chat history. |
| `output` | The expected completion. |
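The `input` field follows the standard Llama-2 chat layout; a minimal sketch of that template is below (the actual system instruction varies per row — the strings here are illustrative):

```python
def build_prompt(system: str, user: str) -> str:
    # Standard Llama-2 chat template: the system block sits inside the
    # first [INST] span, delimited by <<SYS>> markers.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_prompt(
    "You may call functions when appropriate.",
    "What's the weather in Paris?",
)
```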
### Data Splits
There are currently no splits, but future versions will likely have train, eval, and test splits.
## Dataset Creation
### Curation Rationale
In an effort to enable tool-using chat agents and autonomous agents, I developed this synthetic dataset to bring [OpenAI-style function calling](https://openai.com/blog/function-calling-and-other-api-updates#function-calling) to the Llama family and to fully open source models.
### Source Data
The data was sourced by prompting OpenAI models to generate function calls of:
1. Real OpenAPI endpoints collected and filtered from the web
2. Manually written (but artificial) OpenAPI endpoints, and
3. Prompted iterations of 1 & 2.
Prompted iterations were generated by ChatGPT-4 (July 20, 2023 version). The function calls and their natural-language counterparts were produced by iteratively prompting `gpt-3.5-turbo-0301`. A blog post detailing the generation process will be published in the next few days.
OpenAI's TOS grant me ownership of this synthetic dataset. I am licensing it under the [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). I have used the dataset to fine-tune a research-only model, [marclove/llama-2-7b-chat-functions](https://huggingface.co/marclove/llama-2-7b-chat-functions), per OpenAI's TOS. You are responsible for determining whether you can use the dataset for your particular use case. I take no responsibility and make no guarantees beyond licensing my own rights under the designated CC license.
#### Who are the source language producers?
- Marc Love
- Prompting of ChatGPT-4 & API calls to `gpt-3.5-turbo-0301`
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
Unknown, beyond those of the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/viewer/timdettmers--openassistant-guanaco/).
### Discussion of Biases
Unknown, beyond those of the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/viewer/timdettmers--openassistant-guanaco/).
### Other Known Limitations
Fine-tuning on this dataset can lead to hallucinated function calls. This is more pronounced in smaller models.
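One practical mitigation for hallucinated calls is to validate model-emitted function calls against the declared function schemas before executing them. A minimal sketch follows; the JSON shape mirrors the OpenAI-style function-calling convention this dataset emulates, and the `get_weather` spec is hypothetical.

```python
import json

def validate_call(call_json, declared_functions):
    """Reject calls that name unknown functions or pass undeclared arguments."""
    call = json.loads(call_json)
    # Look up the declared spec matching the emitted function name.
    spec = next((f for f in declared_functions
                 if f["name"] == call.get("name")), None)
    if spec is None:
        return False, "unknown function"
    allowed = set(spec["parameters"]["properties"])
    extra = set(call.get("arguments", {})) - allowed
    if extra:
        return False, "undeclared arguments: " + ", ".join(sorted(extra))
    return True, "ok"

declared = [{
    "name": "get_weather",  # hypothetical function spec
    "parameters": {"type": "object",
                   "properties": {"location": {"type": "string"}}},
}]
ok, reason = validate_call(
    '{"name": "get_weather", "arguments": {"location": "Paris"}}', declared)
```

A guard like this catches only structural hallucinations (wrong names or arguments); it cannot detect a well-formed call made when no call was appropriate.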
## Additional Information
### Dataset Curators
Marc Love
### Licensing Information
[Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). Please note that the synthetic data portion of the dataset was generated using OpenAI models, which may or may not affect your ability to use the dataset, depending on your use case.
### Citation Information
If you use this dataset, please cite:
```
@misc{LlamaFunctions,
title = {LlamaFunctions: An Open Dataset of Structured API Calls From Natural Language Prompts},
author = {Marc Love},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/marclove/llama_functions}},
}
``` |
augustocsc/prim_fwd_short | 2023-08-15T13:32:14.000Z | [
"region:us"
] | augustocsc | null | null | null | 0 | 61 | Entry not found |
dim/essayforum_writing_prompts_6k | 2023-08-16T20:37:43.000Z | [
"region:us"
] | dim | null | null | null | 1 | 61 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 21696702
num_examples: 6361
download_size: 11796178
dataset_size: 21696702
---
# Dataset Card for "essayforum_writing_prompts_6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/openreview_prompts_65 | 2023-08-20T20:33:33.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 61 | ---
license: mit
dataset_info:
features:
- name: full_review
dtype: string
- name: latex
dtype: string
- name: paper_url
dtype: string
- name: arxiv_url
dtype: string
- name: help_prompt
dtype: string
splits:
- name: train
num_bytes: 6752074
num_examples: 150
download_size: 1488188
dataset_size: 6752074
---
|
dim/kinomania_scripts | 2023-08-20T21:35:44.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 61 | ---
license: mit
dataset_info:
features:
- name: movie_script
dtype: string
- name: movie_description
dtype: string
- name: title
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 4912326
num_examples: 27
download_size: 2757276
dataset_size: 4912326
---
|
dim/bugurt_thread_prompts | 2023-09-01T23:13:38.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 61 | ---
license: mit
dataset_info:
features:
- name: bugurt
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 301299
num_examples: 223
download_size: 159463
dataset_size: 301299
---
|
dim/russian_lyrics_prompts | 2023-08-21T01:23:59.000Z | [
"region:us"
] | dim | null | null | null | 0 | 61 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 18504
num_examples: 43
download_size: 14764
dataset_size: 18504
---
# Dataset Card for "russian_lyrics_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_openlm-research__open_llama_7b_v2 | 2023-08-28T20:33:12.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 61 | ---
pretty_name: Evaluation run of None
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [None](https://huggingface.co/None) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 119 configurations, each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run (and is used to compute and display the aggregated metrics on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openlm-research__open_llama_7b_v2\"\
,\n\t\"original_mmlu_world_religions_5\",\n\tsplit=\"train\")\n```\n\n## Latest\
\ results\n\nThese are the [latest results from run 2023-08-28T20:32:57.598943](https://huggingface.co/datasets/open-llm-leaderboard/details_openlm-research__open_llama_7b_v2/blob/main/results_2023-08-28T20%3A32%3A57.598943.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4117296915241362,\n\
\ \"acc_stderr\": 0.03615334441058037\n },\n \"original|mmlu:abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741\n },\n\
\ \"original|mmlu:anatomy|5\": {\n \"acc\": 0.43703703703703706,\n \
\ \"acc_stderr\": 0.04284958639753399\n },\n \"original|mmlu:astronomy|5\"\
: {\n \"acc\": 0.4342105263157895,\n \"acc_stderr\": 0.04033565667848319\n\
\ },\n \"original|mmlu:business_ethics|5\": {\n \"acc\": 0.41,\n \
\ \"acc_stderr\": 0.049431107042371025\n },\n \"original|mmlu:clinical_knowledge|5\"\
: {\n \"acc\": 0.4641509433962264,\n \"acc_stderr\": 0.030693675018458006\n\
\ },\n \"original|mmlu:college_biology|5\": {\n \"acc\": 0.4236111111111111,\n\
\ \"acc_stderr\": 0.041321250197233685\n },\n \"original|mmlu:college_chemistry|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845\n },\n\
\ \"original|mmlu:college_computer_science|5\": {\n \"acc\": 0.35,\n \
\ \"acc_stderr\": 0.0479372485441102\n },\n \"original|mmlu:college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045\n },\n\
\ \"original|mmlu:college_medicine|5\": {\n \"acc\": 0.3930635838150289,\n\
\ \"acc_stderr\": 0.03724249595817729\n },\n \"original|mmlu:college_physics|5\"\
: {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.041583075330832865\n\
\ },\n \"original|mmlu:computer_security|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836\n },\n \"original|mmlu:conceptual_physics|5\"\
: {\n \"acc\": 0.33617021276595743,\n \"acc_stderr\": 0.030881618520676942\n\
\ },\n \"original|mmlu:econometrics|5\": {\n \"acc\": 0.2982456140350877,\n\
\ \"acc_stderr\": 0.043036840335373146\n },\n \"original|mmlu:electrical_engineering|5\"\
: {\n \"acc\": 0.4413793103448276,\n \"acc_stderr\": 0.04137931034482758\n\
\ },\n \"original|mmlu:elementary_mathematics|5\": {\n \"acc\": 0.28835978835978837,\n\
\ \"acc_stderr\": 0.0233306540545359\n },\n \"original|mmlu:formal_logic|5\"\
: {\n \"acc\": 0.3412698412698413,\n \"acc_stderr\": 0.04240799327574924\n\
\ },\n \"original|mmlu:global_facts|5\": {\n \"acc\": 0.33,\n \
\ \"acc_stderr\": 0.04725815626252605\n },\n \"original|mmlu:high_school_biology|5\"\
: {\n \"acc\": 0.43870967741935485,\n \"acc_stderr\": 0.028229497320317213\n\
\ },\n \"original|mmlu:high_school_chemistry|5\": {\n \"acc\": 0.24630541871921183,\n\
\ \"acc_stderr\": 0.03031509928561773\n },\n \"original|mmlu:high_school_computer_science|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236\n },\n\
\ \"original|mmlu:high_school_european_history|5\": {\n \"acc\": 0.4484848484848485,\n\
\ \"acc_stderr\": 0.038835659779569286\n },\n \"original|mmlu:high_school_geography|5\"\
: {\n \"acc\": 0.4595959595959596,\n \"acc_stderr\": 0.035507024651313425\n\
\ },\n \"original|mmlu:high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.5699481865284974,\n \"acc_stderr\": 0.03572954333144808\n \
\ },\n \"original|mmlu:high_school_macroeconomics|5\": {\n \"acc\":\
\ 0.39487179487179486,\n \"acc_stderr\": 0.02478431694215638\n },\n \
\ \"original|mmlu:high_school_mathematics|5\": {\n \"acc\": 0.23703703703703705,\n\
\ \"acc_stderr\": 0.025928876132766114\n },\n \"original|mmlu:high_school_microeconomics|5\"\
: {\n \"acc\": 0.3739495798319328,\n \"acc_stderr\": 0.03142946637883708\n\
\ },\n \"original|mmlu:high_school_physics|5\": {\n \"acc\": 0.2980132450331126,\n\
\ \"acc_stderr\": 0.037345356767871984\n },\n \"original|mmlu:high_school_psychology|5\"\
: {\n \"acc\": 0.5266055045871559,\n \"acc_stderr\": 0.021406952688151574\n\
\ },\n \"original|mmlu:high_school_statistics|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.0305467452649532\n },\n \"original|mmlu:high_school_us_history|5\"\
: {\n \"acc\": 0.4362745098039216,\n \"acc_stderr\": 0.03480693138457038\n\
\ },\n \"original|mmlu:high_school_world_history|5\": {\n \"acc\":\
\ 0.48945147679324896,\n \"acc_stderr\": 0.032539983791662855\n },\n \
\ \"original|mmlu:human_aging|5\": {\n \"acc\": 0.4170403587443946,\n \
\ \"acc_stderr\": 0.03309266936071721\n },\n \"original|mmlu:human_sexuality|5\"\
: {\n \"acc\": 0.48091603053435117,\n \"acc_stderr\": 0.043820947055509867\n\
\ },\n \"original|mmlu:international_law|5\": {\n \"acc\": 0.5041322314049587,\n\
\ \"acc_stderr\": 0.04564198767432754\n },\n \"original|mmlu:jurisprudence|5\"\
: {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.04832853553437055\n\
\ },\n \"original|mmlu:logical_fallacies|5\": {\n \"acc\": 0.3803680981595092,\n\
\ \"acc_stderr\": 0.03814269893261837\n },\n \"original|mmlu:machine_learning|5\"\
: {\n \"acc\": 0.3482142857142857,\n \"acc_stderr\": 0.04521829902833586\n\
\ },\n \"original|mmlu:management|5\": {\n \"acc\": 0.5631067961165048,\n\
\ \"acc_stderr\": 0.04911147107365777\n },\n \"original|mmlu:marketing|5\"\
: {\n \"acc\": 0.5854700854700855,\n \"acc_stderr\": 0.03227396567623779\n\
\ },\n \"original|mmlu:medical_genetics|5\": {\n \"acc\": 0.54,\n \
\ \"acc_stderr\": 0.05009082659620333\n },\n \"original|mmlu:miscellaneous|5\"\
: {\n \"acc\": 0.5747126436781609,\n \"acc_stderr\": 0.017679225489431457\n\
\ },\n \"original|mmlu:moral_disputes|5\": {\n \"acc\": 0.43641618497109824,\n\
\ \"acc_stderr\": 0.026700545424943677\n },\n \"original|mmlu:moral_scenarios|5\"\
: {\n \"acc\": 0.24804469273743016,\n \"acc_stderr\": 0.01444415780826144\n\
\ },\n \"original|mmlu:nutrition|5\": {\n \"acc\": 0.4411764705882353,\n\
\ \"acc_stderr\": 0.028431095444176643\n },\n \"original|mmlu:philosophy|5\"\
: {\n \"acc\": 0.3890675241157556,\n \"acc_stderr\": 0.027690337536485376\n\
\ },\n \"original|mmlu:prehistory|5\": {\n \"acc\": 0.43209876543209874,\n\
\ \"acc_stderr\": 0.02756301097160668\n },\n \"original|mmlu:professional_accounting|5\"\
: {\n \"acc\": 0.3120567375886525,\n \"acc_stderr\": 0.02764012054516993\n\
\ },\n \"original|mmlu:professional_law|5\": {\n \"acc\": 0.3324641460234681,\n\
\ \"acc_stderr\": 0.012032022332260512\n },\n \"original|mmlu:professional_medicine|5\"\
: {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.030187532060329387\n\
\ },\n \"original|mmlu:professional_psychology|5\": {\n \"acc\": 0.3709150326797386,\n\
\ \"acc_stderr\": 0.019542101564854114\n },\n \"original|mmlu:public_relations|5\"\
: {\n \"acc\": 0.4727272727272727,\n \"acc_stderr\": 0.04782001791380063\n\
\ },\n \"original|mmlu:security_studies|5\": {\n \"acc\": 0.4489795918367347,\n\
\ \"acc_stderr\": 0.03184213866687579\n },\n \"original|mmlu:sociology|5\"\
: {\n \"acc\": 0.5572139303482587,\n \"acc_stderr\": 0.03512310964123937\n\
\ },\n \"original|mmlu:us_foreign_policy|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620333\n },\n \"original|mmlu:virology|5\"\
: {\n \"acc\": 0.40963855421686746,\n \"acc_stderr\": 0.03828401115079023\n\
\ },\n \"original|mmlu:world_religions|5\": {\n \"acc\": 0.5497076023391813,\n\
\ \"acc_stderr\": 0.03815827365913236\n }\n}\n```"
repo_url: https://huggingface.co/None
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|arc:challenge|25_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|arc:challenge|25_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hellaswag|10_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hellaswag|10_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T15:22:49.203021.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T16:40:42.128714.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-23T16:40:42.128714.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-23T15:22:49.203021.parquet'
- split: 2023_08_23T16_40_42.128714
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-23T16:40:42.128714.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-23T16:40:42.128714.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:32:57.598943.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_abstract_algebra_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_anatomy_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_astronomy_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_business_ethics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_clinical_knowledge_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_college_biology_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_college_chemistry_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_college_computer_science_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_college_mathematics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_college_medicine_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_college_physics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_computer_security_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_conceptual_physics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_econometrics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_electrical_engineering_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_elementary_mathematics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_formal_logic_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_global_facts_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_biology_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_chemistry_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_computer_science_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_european_history_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_geography_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_mathematics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_microeconomics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_physics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_psychology_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_statistics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_us_history_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_high_school_world_history_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_human_aging_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_human_sexuality_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_international_law_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_jurisprudence_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_logical_fallacies_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_machine_learning_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_management_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_marketing_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_medical_genetics_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_miscellaneous_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_moral_disputes_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_moral_scenarios_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_nutrition_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_philosophy_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_prehistory_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_professional_accounting_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_professional_law_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_professional_medicine_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_professional_psychology_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_public_relations_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_security_studies_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_sociology_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_us_foreign_policy_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_virology_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:32:57.598943.parquet'
- config_name: original_mmlu_world_religions_5
data_files:
- split: 2023_08_28T20_32_57.598943
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:32:57.598943.parquet'
- split: latest
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:32:57.598943.parquet'
- config_name: results
data_files:
- split: 2023_08_23T15_22_49.203021
path:
- results_2023-08-23T15:22:49.203021.parquet
- split: 2023_08_23T16_40_42.128714
path:
- results_2023-08-23T16:40:42.128714.parquet
- split: 2023_08_28T20_32_57.598943
path:
- results_2023-08-28T20:32:57.598943.parquet
- split: latest
path:
- results_2023-08-28T20:32:57.598943.parquet
---
# Dataset Card for Evaluation run of None
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/None
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [None](https://huggingface.co/None) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 119 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openlm-research__open_llama_7b_v2",
"original_mmlu_world_religions_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-08-28T20:32:57.598943](https://huggingface.co/datasets/open-llm-leaderboard/details_openlm-research__open_llama_7b_v2/blob/main/results_2023-08-28T20%3A32%3A57.598943.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.4117296915241362,
"acc_stderr": 0.03615334441058037
},
"original|mmlu:abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741
},
"original|mmlu:anatomy|5": {
"acc": 0.43703703703703706,
"acc_stderr": 0.04284958639753399
},
"original|mmlu:astronomy|5": {
"acc": 0.4342105263157895,
"acc_stderr": 0.04033565667848319
},
"original|mmlu:business_ethics|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025
},
"original|mmlu:clinical_knowledge|5": {
"acc": 0.4641509433962264,
"acc_stderr": 0.030693675018458006
},
"original|mmlu:college_biology|5": {
"acc": 0.4236111111111111,
"acc_stderr": 0.041321250197233685
},
"original|mmlu:college_chemistry|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845
},
"original|mmlu:college_computer_science|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102
},
"original|mmlu:college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045
},
"original|mmlu:college_medicine|5": {
"acc": 0.3930635838150289,
"acc_stderr": 0.03724249595817729
},
"original|mmlu:college_physics|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.041583075330832865
},
"original|mmlu:computer_security|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836
},
"original|mmlu:conceptual_physics|5": {
"acc": 0.33617021276595743,
"acc_stderr": 0.030881618520676942
},
"original|mmlu:econometrics|5": {
"acc": 0.2982456140350877,
"acc_stderr": 0.043036840335373146
},
"original|mmlu:electrical_engineering|5": {
"acc": 0.4413793103448276,
"acc_stderr": 0.04137931034482758
},
"original|mmlu:elementary_mathematics|5": {
"acc": 0.28835978835978837,
"acc_stderr": 0.0233306540545359
},
"original|mmlu:formal_logic|5": {
"acc": 0.3412698412698413,
"acc_stderr": 0.04240799327574924
},
"original|mmlu:global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605
},
"original|mmlu:high_school_biology|5": {
"acc": 0.43870967741935485,
"acc_stderr": 0.028229497320317213
},
"original|mmlu:high_school_chemistry|5": {
"acc": 0.24630541871921183,
"acc_stderr": 0.03031509928561773
},
"original|mmlu:high_school_computer_science|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236
},
"original|mmlu:high_school_european_history|5": {
"acc": 0.4484848484848485,
"acc_stderr": 0.038835659779569286
},
"original|mmlu:high_school_geography|5": {
"acc": 0.4595959595959596,
"acc_stderr": 0.035507024651313425
},
"original|mmlu:high_school_government_and_politics|5": {
"acc": 0.5699481865284974,
"acc_stderr": 0.03572954333144808
},
"original|mmlu:high_school_macroeconomics|5": {
"acc": 0.39487179487179486,
"acc_stderr": 0.02478431694215638
},
"original|mmlu:high_school_mathematics|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.025928876132766114
},
"original|mmlu:high_school_microeconomics|5": {
"acc": 0.3739495798319328,
"acc_stderr": 0.03142946637883708
},
"original|mmlu:high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984
},
"original|mmlu:high_school_psychology|5": {
"acc": 0.5266055045871559,
"acc_stderr": 0.021406952688151574
},
"original|mmlu:high_school_statistics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.0305467452649532
},
"original|mmlu:high_school_us_history|5": {
"acc": 0.4362745098039216,
"acc_stderr": 0.03480693138457038
},
"original|mmlu:high_school_world_history|5": {
"acc": 0.48945147679324896,
"acc_stderr": 0.032539983791662855
},
"original|mmlu:human_aging|5": {
"acc": 0.4170403587443946,
"acc_stderr": 0.03309266936071721
},
"original|mmlu:human_sexuality|5": {
"acc": 0.48091603053435117,
"acc_stderr": 0.043820947055509867
},
"original|mmlu:international_law|5": {
"acc": 0.5041322314049587,
"acc_stderr": 0.04564198767432754
},
"original|mmlu:jurisprudence|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.04832853553437055
},
"original|mmlu:logical_fallacies|5": {
"acc": 0.3803680981595092,
"acc_stderr": 0.03814269893261837
},
"original|mmlu:machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.04521829902833586
},
"original|mmlu:management|5": {
"acc": 0.5631067961165048,
"acc_stderr": 0.04911147107365777
},
"original|mmlu:marketing|5": {
"acc": 0.5854700854700855,
"acc_stderr": 0.03227396567623779
},
"original|mmlu:medical_genetics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333
},
"original|mmlu:miscellaneous|5": {
"acc": 0.5747126436781609,
"acc_stderr": 0.017679225489431457
},
"original|mmlu:moral_disputes|5": {
"acc": 0.43641618497109824,
"acc_stderr": 0.026700545424943677
},
"original|mmlu:moral_scenarios|5": {
"acc": 0.24804469273743016,
"acc_stderr": 0.01444415780826144
},
"original|mmlu:nutrition|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.028431095444176643
},
"original|mmlu:philosophy|5": {
"acc": 0.3890675241157556,
"acc_stderr": 0.027690337536485376
},
"original|mmlu:prehistory|5": {
"acc": 0.43209876543209874,
"acc_stderr": 0.02756301097160668
},
"original|mmlu:professional_accounting|5": {
"acc": 0.3120567375886525,
"acc_stderr": 0.02764012054516993
},
"original|mmlu:professional_law|5": {
"acc": 0.3324641460234681,
"acc_stderr": 0.012032022332260512
},
"original|mmlu:professional_medicine|5": {
"acc": 0.44485294117647056,
"acc_stderr": 0.030187532060329387
},
"original|mmlu:professional_psychology|5": {
"acc": 0.3709150326797386,
"acc_stderr": 0.019542101564854114
},
"original|mmlu:public_relations|5": {
"acc": 0.4727272727272727,
"acc_stderr": 0.04782001791380063
},
"original|mmlu:security_studies|5": {
"acc": 0.4489795918367347,
"acc_stderr": 0.03184213866687579
},
"original|mmlu:sociology|5": {
"acc": 0.5572139303482587,
"acc_stderr": 0.03512310964123937
},
"original|mmlu:us_foreign_policy|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333
},
"original|mmlu:virology|5": {
"acc": 0.40963855421686746,
"acc_stderr": 0.03828401115079023
},
"original|mmlu:world_religions|5": {
"acc": 0.5497076023391813,
"acc_stderr": 0.03815827365913236
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
miazhao/prm800k_processed_preference | 2023-09-04T00:10:16.000Z | [
"region:us"
] | miazhao | null | null | null | 0 | 61 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: responses
sequence: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 23805614
num_examples: 22036
download_size: 9396871
dataset_size: 23805614
---
# Dataset Card for "prm800k_processed_preference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nampdn-ai/mini-coder | 2023-09-21T04:57:45.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"source_datasets:bigcode/starcoderdata",
"language:en",
"license:other",
"region:us"
] | nampdn-ai | null | null | null | 3 | 61 | ---
license: other
task_categories:
- text-generation
language:
- en
pretty_name: Mini Coder
size_categories:
- 1M<n<10M
source_datasets:
- bigcode/starcoderdata
---
The Mini-Coder dataset is a filtered selection of 2.2 million code snippets (~8 GB) from the [bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata) dataset, serving as a seed for synthetic dataset generation.
Each snippet is chosen for its clarity, presence of comments, and inclusion of at least one `if/else` or `switch case` statement.
This repository is particularly useful for ML researchers working on synthetic dataset generation. |
HydraLM/corpus_1 | 2023-09-08T19:39:51.000Z | [
"region:us"
] | HydraLM | null | null | null | 3 | 61 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
splits:
- name: train
num_bytes: 5194729893
num_examples: 6320610
download_size: 2478345344
dataset_size: 5194729893
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "corpus_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Siddharthr30/notable_take_home | 2023-09-12T02:16:21.000Z | [
"region:us"
] | Siddharthr30 | null | null | null | 0 | 61 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 671512
num_examples: 2628
- name: validation
num_bytes: 222336
num_examples: 876
- name: test
num_bytes: 226127
num_examples: 876
download_size: 0
dataset_size: 1119975
---
# Dataset Card for "notable_take_home"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maximegmd/MedText-alpaca | 2023-09-14T09:23:08.000Z | [
"region:us"
] | maximegmd | null | null | null | 0 | 61 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 949136
num_examples: 1412
download_size: 494828
dataset_size: 949136
---
# Dataset Card for "MedText-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YanatPlayz/TennisLLMv1 | 2023-09-21T03:04:21.000Z | [
"region:us"
] | YanatPlayz | null | null | null | 0 | 61 | Entry not found |
TurkuNLP/turku_paraphrase_corpus | 2022-07-01T15:25:27.000Z | [
"task_categories:text-classification",
"task_categories:sentence-similarity",
"task_categories:text2text-generation",
"task_categories:other",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_d... | TurkuNLP | Turku Paraphrase Corpus is a dataset of 104,645 manually annotated Finnish paraphrases. The vast majority of the data is classified as a paraphrase either in the given context, or universally. | @inproceedings{kanerva-etal-2021-finnish,
title = {Finnish Paraphrase Corpus},
author = {Kanerva, Jenna and Ginter, Filip and Chang, Li-Hsin and Rastas, Iiro and Skantsi, Valtteri and Kilpeläinen, Jemina and Kupari, Hanna-Mari and Saarni, Jenna and Sevón, Maija and Tarkka, Otto},
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa'21)},
year = {2021},
publisher = {Linköping University Electronic Press, Sweden},
url = {https://aclanthology.org/2021.nodalida-main.29},
pages = {288--298}
} | null | 2 | 60 | ---
annotations_creators:
- expert-generated
language_creators: []
language:
- fi
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Turku Paraphrase Corpus
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- sentence-similarity
- text2text-generation
- other
task_ids:
- semantic-similarity-classification
---
# Dataset Card for Turku Paraphrase Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://turkunlp.org/paraphrase.html
- **Repository:** https://github.com/TurkuNLP/Turku-paraphrase-corpus
- **Paper:** https://aclanthology.org/2021.nodalida-main.29
- **Leaderboard:** Not available
- **Point of Contact:** [Jenna Kanerva, Filip Ginter](mailto:jmnybl@utu.fi,filip.ginter@gmail.com)
### Dataset Summary
The project gathered a large dataset of Finnish paraphrase pairs (over 100,000). The paraphrases are selected and classified manually, so as to minimize lexical overlap, and provide examples that are maximally structurally and lexically different. The objective is to create a dataset which is challenging and better tests the capabilities of natural language understanding. An important feature of the data is that most paraphrase pairs are distributed in their document context. The primary application for the dataset is the development and evaluation of deep language models, and representation learning in general.
Usage:
```python
from datasets import load_dataset
dataset = load_dataset('TurkuNLP/turku_paraphrase_corpus', name="plain")
```
where `name` is one of the supported loading options: `plain`, `plain-context`, `classification`, `classification-context`, or `generation`. See Data Fields for more information.
### Supported Tasks and Leaderboards
* Paraphrase classification
* Paraphrase generation
### Languages
Finnish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The dataset consists of pairs of text passages. A typical passage is about a sentence long, but a passage may also be longer or shorter than that. Each example thus includes two text passages (string), a manually annotated label indicating the paraphrase type (string), and additional metadata. The dataset includes three different configurations:
- `plain`: loads the original data without any additional preprocessing or transformations.
- `classification`: builds the data in a form directly suitable for training a paraphrase classifier. Each example is doubled in the data with both directions, (text1, text2, label) --> (text2, text1, label), flipping the label as well where needed (paraphrases with directionality flag < or >).
- `generation`: preprocesses the examples to be directly suitable for the paraphrase generation task. Paraphrases not suitable for generation (negative and highly context-dependent paraphrases) are discarded, and directional paraphrases are provided only from the more detailed passage to the more general one, in order to prevent model hallucination (i.e. the model learning to introduce new information). The remaining paraphrases are provided in both directions, (text1, text2, label) --> (text2, text1, label).
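The doubling and label-flipping scheme used by the `classification` configuration can be sketched as follows (an illustration of the scheme described above, not the corpus's actual build script; the directionality characters `<` and `>` in a label are swapped when the pair is reversed):

```python
def expand_for_classification(example):
    """Produce both directions of a paraphrase pair, flipping
    directional flags ('<' / '>') in the label when reversed."""
    flip = {"<": ">", ">": "<"}
    label = example["label"]
    flipped = "".join(flip.get(ch, ch) for ch in label)
    return [
        {"text1": example["text1"], "text2": example["text2"], "label": label},
        {"text1": example["text2"], "text2": example["text1"], "label": flipped},
    ]

# A pair labeled "4>" becomes "4<" when read in the opposite direction.
both_directions = expand_for_classification(
    {"text1": "A", "text2": "B", "label": "4>"}
)
```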
Each pair in the `plain` and `classification` configurations will include fields:
`id`:
Identifier of the paraphrase pair (string)
`gem_id`:
Identifier of the paraphrase pair in the GEM dataset (string)
`goeswith`:
Identifier of the document from which the paraphrase was extracted, can be `not available` in case the source of the paraphrase is not from document-structured data. All examples with the same `goeswith` value (other than `not available`) should be kept together in any train/dev/test split; most users won't need this (string)
`fold`:
0-99, data split into 100 parts respecting document boundaries, you can use this e.g. to implement crossvalidation safely as all paraphrases from one document are in one fold, most users won't need this (int)
`text1`:
First paraphrase passage (string)
`text2`:
Second paraphrase passage (string)
`label`:
Manually annotated labels (string)
`binary_label`:
Label turned into binary with values `positive` (paraphrase) and `negative` (not-paraphrase) (string)
`is_rewrite`:
Indicator whether the example is human produced rewrite or naturally occurring paraphrase (bool)
Each pair in the `generation` config will include the same fields except `text1` and `text2` are renamed to `input` and `output` in order to indicate the generation direction. Thus the fields are: `id`, `gem_id`, `goeswith`, `fold`, `input`, `output`, `label`, `binary_label`, and `is_rewrite`
**Context**: Most (but not all) of the paraphrase pairs are identified in their document context. By default, these contexts are not included to conserve memory, but can be accessed using the configurations `plain-context` and `classification-context`. These are exactly like `plain` and `classification` with these additional fields:
`context1`:
a dictionary with the fields `doctext` (string), `begin` (int), `end` (int). These mean that the paraphrase in `text1` was extracted from `doctext[begin:end]`. In most cases, `doctext[begin:end]` and `text1` are the exact same string, but occasionally that is not the case, e.g. when intervening punctuation or other unrelated text was "cleaned" from `text1` during annotation. In case the context is not available, `doctext` is an empty string and `begin == end == 0`
`context2`:
same as `context1` but for `text2`
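As a quick illustration of how the context fields relate to the passage, consider a toy record (invented for illustration, not real corpus data):

```python
# Toy record following the doctext/begin/end convention described above.
record = {
    "text1": "Hyvää huomenta",
    "context1": {"doctext": "... Hyvää huomenta ...", "begin": 4, "end": 18},
}

ctx = record["context1"]
span = ctx["doctext"][ctx["begin"]:ctx["end"]]
assert span == record["text1"]  # in most cases, the exact same string
```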
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jmnybl](https://github.com/jmnybl) and [@fginter](https://github.com/fginter) for adding this dataset. |
jimregan/clarinpl_studio | 2023-01-21T12:27:08.000Z | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:other",
"arxiv:1706.00245",
"region:us"
] | jimregan | The corpus consists of 317 speakers recorded in 554
sessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of
the audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words
from a vocabulary of size 46361.
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .wav format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{korvzinek2017polish,
title={Polish read speech corpus for speech tools and services},
author={Kor{\v{z}}inek, Danijel and Marasek, Krzysztof and Brocki, {\L}ukasz and Wo{\l}k, Krzysztof},
journal={arXiv preprint arXiv:1706.00245},
year={2017}
} | null | 1 | 60 | ---
annotations_creators:
- expert-generated
language:
- pl
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for ClarinPL Studio Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Kaldi Baseline](https://github.com/danijel3/ClarinStudioKaldi)
- **Paper:** [Polish Read Speech Corpus for Speech Tools and Services](https://arxiv.org/abs/1706.00245)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Danijel Koržinek](https://github.com/danijel3/)
### Dataset Summary
The corpus consists of 317 speakers recorded in 554
sessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of
the audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words
from a vocabulary of size 46361.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/333ddc746f2df1e1d19b44986992d4cbe28710fde81d533a220e755ee6c5c519/audio/SES0001/rich001.wav',
'id': 'SES0001_rich001',
'speaker_id': 'SPK0001',
'text': 'drożdże dżip gwożdżenie ozimina wędzarz rdzeń wędzonka ingerować kładzenie jutrzenka'}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: the transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
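Based on the example instance above, the `id` field appears to encode the session and utterance name. A small (hypothetical) helper for splitting it, not part of the corpus tooling:

```python
def parse_utterance_id(utt_id):
    """Split an utterance ID such as 'SES0001_rich001' into its
    session and utterance parts (illustrative helper only)."""
    session, utterance = utt_id.split("_", 1)
    return session, utterance

session, utterance = parse_utterance_id("SES0001_rich001")
```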
### Data Splits
| | Train | Test | Valid |
| ----- | ----- | ---- | ----- |
| dataset | 11222 | 1362 | 1229 |
## Dataset Creation
### Curation Rationale
The purpose of this segment of the project was to develop specific tools that would allow for automatic and semi-automatic processing of large quantities of acoustic speech data. Another purpose of the corpus was to serve as a reference for studies in phonetics and pronunciation.
### Source Data
#### Initial Data Collection and Normalization
The corpus was recorded in a studio environment using two microphones: a high-quality studio microphone and a typical consumer audio headset.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CLARIN PUB+BY+INF+NORED](https://mowa.clarin-pl.eu/korpusy/LICENSE)
### Citation Information
```
@article{korvzinek2017polish,
title={Polish read speech corpus for speech tools and services},
author={Kor{\v{z}}inek, Danijel and Marasek, Krzysztof and Brocki, {\L}ukasz and Wo{\l}k, Krzysztof},
journal={arXiv preprint arXiv:1706.00245},
year={2017}
}
```
### Contributions
[Needs More Information]
|
yhavinga/mc4_nl_cleaned | 2022-12-16T09:24:34.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:en-nl",
"source_datasets:extended",
"language:nl",
"language:en",
"license:odc-by",
"arxiv:1910.10683",
"region:us"
... | yhavinga | A thoroughly cleaned version of the Dutch portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file. | @article{JMLR:v21:20-074,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
} | null | 7 | 60 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nl
- en
license:
- odc-by
multilinguality:
- monolingual
- en-nl
size_categories:
micro:
- 120k
tiny:
- 1M<n<10M
small:
- 10M<n<100M
medium:
- 10M<n<100M
large:
- 10M<n<100M
full:
- 100M<n<1B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4_nl_cleaned
---
# Dataset Card for Clean Dutch mC4
## Table of Contents
- [Dataset Card for Clean Dutch mC4](#dataset-card-for-clean-dutch-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Preprocessing](#preprocessing)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).
While this dataset is monolingual, it is possible to download `en-nl` interleaved data, see the Dataset Config section below.
Based on the [Common Crawl dataset](https://commoncrawl.org).
The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Preprocessing
The Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version.
See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details.
In summary, the preprocessing procedure includes:
- Removing documents containing words from a selection of the [Dutch and English List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).
- Removing sentences containing:
- Less than 3 words.
- A word longer than 250 characters.
- An end symbol not matching end-of-sentence punctuation.
- Strings associated to javascript code (e.g. `{`), lorem ipsum, policy information in Dutch or English.
- Removing documents (after sentence filtering):
- Containing less than 5 sentences.
- Containing less than 500 or more than 50'000 characters.
- Not identified as prevalently Dutch by the `LangDetect` package.
Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch
shards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence
tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure.
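The sentence-level rules above can be sketched roughly as follows (an illustrative approximation only, not the actual pipeline, which additionally performs sentence tokenization, javascript/boilerplate string filtering, and language detection with `LangDetect`):

```python
import re

def keep_sentence(sentence):
    """Approximate the sentence filters described above (illustrative only)."""
    words = sentence.split()
    if len(words) < 3:                       # fewer than 3 words
        return False
    if any(len(w) > 250 for w in words):     # a word longer than 250 characters
        return False
    if not re.search(r'[.!?"]$', sentence.strip()):  # no end-of-sentence punctuation
        return False
    return True
```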
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'timestamp': '2019-02-22T15:37:25Z',
'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan',
'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.'
}
```
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Configs
To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
For Dutch, the whole corpus of scraped text was divided into `1032` jsonl files, `1024` for training following
the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the
naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.
For ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed)
| config | train size (docs, words, download + preproc disk space) | validation size |
|:-------|--------------------------------------------------------:|----------------:|
| micro | 125k docs, 23M words (<1GB) | 16k docs |
| tiny | 6M docs, 2B words (6 GB + 15 GB) | 16k docs |
| small | 15M docs, 6B words (14 GB + 36 GB) | 16k docs |
| medium | 31M docs, 12B words (28 GB + 72 GB) | 32k docs |
| large | 47M docs, 19B words (42 GB + 108 GB) | 48k docs |
| full | 64M docs, 25B words (58 GB + 148 GB) | 64k docs |
For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned
`en` variant of C4.
You can load any config like this:
```python
from datasets import load_dataset
datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True)
print(datasets)
```
This will print
```
DatasetDict({
train: Dataset({
features: ['text', 'timestamp', 'url'],
num_rows: 6303893
})
validation: Dataset({
features: ['text', 'timestamp', 'url'],
num_rows: 16189
})
})
```
Since the configs are quite large, you may want to traverse them using the streaming mode available starting from Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True)
print(next(iter(mc4_nl_full_stream))) # Prints the example presented above
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Social Impact of Dataset
With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language.
The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant, and contains vulgarity.
Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language.
This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will
inevitably reflect biases present in blog articles and comments on the Internet.
This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
## Additional Information
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for
providing the `cleaned_it_mc4` example that shows how to upload a dataset to the Huggingface hub.
|
SetFit/amazon_reviews_multi_zh | 2022-03-23T15:30:49.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 60 | # Amazon Reviews Multi (Chinese)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains only the Chinese language version and has been reduced to three columns (plus a fourth, `label_text`) that are relevant to the SetFit task. |
SetFit/amazon_reviews_multi_fr | 2022-03-23T15:45:44.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 60 | # Amazon Reviews Multi (French)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains only the French language version and has been reduced to three columns (plus a fourth, `label_text`) that are relevant to the SetFit task. |
chainyo/rvl-cdip-invoice | 2022-04-06T16:57:20.000Z | [
"license:other",
"region:us"
] | chainyo | null | null | null | 3 | 60 | ---
license: other
---
⚠️ This is only a subpart of the original dataset, containing only `invoice`.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
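For convenience, the label list above can be turned into `id2label`/`label2id` mappings (a hypothetical helper derived from the list, not shipped with the dataset itself; spellings follow the standard RVL-CDIP class names):

```python
# Convenience mappings derived from the label list above (illustrative only).
ID2LABEL = {
    0: "letter", 1: "form", 2: "email", 3: "handwritten",
    4: "advertisement", 5: "scientific report", 6: "scientific publication",
    7: "specification", 8: "file folder", 9: "news article",
    10: "budget", 11: "invoice", 12: "presentation",
    13: "questionnaire", 14: "resume", 15: "memo",
}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}
```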
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. |
demelin/moral_stories | 2022-07-17T15:29:10.000Z | [
"task_categories:multiple-choice",
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"task_ids:text-scoring",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
... | demelin | Moral Stories is a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented
social reasoning. For detailed information, see https://aclanthology.org/2021.emnlp-main.54.pdf. | @article{Emelin2021MoralSS,
title={Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences},
author={Denis Emelin and Ronan Le Bras and Jena D. Hwang and Maxwell Forbes and Yejin Choi},
journal={ArXiv},
year={2021},
volume={abs/2012.15738}
} | null | 10 | 60 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
pretty_name: Moral Stories
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
- text-generation
- text-classification
- commonsense-reasoning
- moral-reasoning
- social-reasoning
task_ids:
- multiple-choice-qa
- language-modeling
- text-scoring
---
# Dataset Card for Moral Stories
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Repository:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Paper:** [Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences](https://aclanthology.org/2021.emnlp-main.54/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](demelin.github.io)
### Dataset Summary
Moral Stories is a crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations, and their respective consequences. All stories in the dataset consist of seven sentences, belonging to the following categories:
- Norm: A guideline for social conduct generally observed by most people in everyday situations.
- Situation: Setting of the story that introduces story participants and describes their environment.
- Intention: Reasonable goal that one of the story participants (the actor), wants to fulfill.
- Normative action: An action by the actor that fulfills the intention and observes the norm.
- Normative consequence: Possible effect of the normative action on the actor's environment.
- Divergent action: An action by the actor that fulfills the intention and diverges from the norm.
- Divergent consequence: Possible effect of the divergent action on the actor's environment.
Accordingly, each story's constituent sentences can be grouped into three segments. The context segment grounds actions within a particular social scenario, the normative path contains the normative action and its consequence, whereas the divergent path includes their norm-divergent analogues. Combining the context segment separately with each path yields two self-contained sub-stories differing in the adherence of the described events to social expectations. See also [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
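The grouping into a context segment and the two paths can be sketched with the dataset's field names (a hedged illustration of the segmentation described above, not official tooling):

```python
def split_story(story):
    """Assemble the two self-contained sub-stories: the shared context
    segment combined with the normative and divergent paths."""
    context = [story["norm"], story["situation"], story["intention"]]
    normative = context + [story["moral_action"], story["moral_consequence"]]
    divergent = context + [story["immoral_action"], story["immoral_consequence"]]
    return normative, divergent
```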
### Supported Tasks and Leaderboards
- commonsense-reasoning / social-reasoning / moral-reasoning: The dataset can also be used to evaluate whether pretrained language models can reason about actions that are either consistent or inconsistent with social norms, the consequences of actions, and the norms that may motivate specific action choices. A [BART model](https://huggingface.co/facebook/bart-large) can be used for this purpose.
- text-classification: This dataset can be used to train models to differentiate between normative and divergent actions as well as between plausible and implausible consequences of actions. A [RoBERTa-based model](https://huggingface.co/roberta-base) can be used for this purpose.
- text-generation: The dataset can be used to train models to generate normative / divergent action hypotheses and their consequences, or norms that may explain certain action choices, conditioned on contexts of different scope. A [GPT-based model](https://huggingface.co/EleutherAI/gpt-neo-2.7B) can be used for this purpose.
### Languages
*Moral Stories* is available in English, with mainstream US Englishes being the dominant variety, as indicated by self-reported contributor demographics provided in the [*Ethical Considerations* section of the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Dataset Structure
### Data Instances
Due to its highly structured nature, *Moral Stories* enables a variety of tasks, such as action classification or norm generation. Furthermore, we provide different data splits in an attempt to challenge the generalization abilities of the evaluated models. For details, refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf). Here, we replicate one instance from the full, task-agnostic dataset, one instance from a classification task dataset (cls-action+norm), and one instance from a generation task dataset (gen-consequence$action+context):
"full" instance:
{"ID": "3OJSZ2ATDTUMG7NGR28931AHDQF57F",
"norm": "It's kind to wait for people.",
"situation": "Josh drives Penelope to work every day and is hanging outside her house to pick her up, but there's no sign of her.",
"intention": "Josh needs to get to work.",
"moral_action": "Josh continues to sit outside Penelope's house until she walks out the door a few minutes later.",
"moral_consequence": "Penelope appreciates Josh's patience and thanks him for waiting around.",
"immoral_action": "Josh shrugs his shoulders and drives away to work after sitting around a moment.",
"immoral_consequence": "Penelope comes running out of the house shouting to wait, but she's too late."}
positive and negative "cls-action+norm" instances (i.e. classify an action as normative or divergent given the action and a relevant norm):
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G1",
"norm": "It's bad to be too loud and wake people up.",
"moral_action": "Jordan quietly makes himself a bowl of cereal for breakfast.",
"label": "1"}
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G0",
"norm": "It's bad to be too loud and wake people up.",
"immoral_action": "Jordan noisily bangs a lot of pots and pans when making his breakfast.",
"label": "0"}
"gen-consequence$action+context" instance (i.e. generate either a plausible or implausible consequence given an action and its context):
{"ID": "33FOTY3KENJT46HTHWE8GXPE3LE1C03",
"norm": "It's mean to hit a bear with a guitar.",
"situation": "Joe was camping in Montana with his band when he came across a black bear.",
"intention": "Joe wanted to scare away the black bear.",
"immoral_action": "Joe grabbed his guitarist's guitar and hit the bear with it.",
"immoral_consequence": "The bear suffers a concussion and doesn't understand why he was hit.",
"label": "1"}
### Data Fields
- "ID": Unique identifier ID for this dataset instance.
- "norm": A guideline for social conduct generally observed by most people in everyday situations.
- "situation": Setting of the story that introduces story participants and describes their environment.
- "intention": Reasonable goal that one of the story participants (the actor) wants to fulfill.
- "moral_(i.e. 'normative')_action": An action by the actor that fulfills the intention and observes the norm.
- "moral_consequence": Possible effect of the normative action on the actor's environment.
- "immoral_(i.e. 'divergent')_action": An action by the actor that fulfills the intention and diverges from the norm.
- "immoral_consequence": Possible effect of the divergent action on the actor's environment.
- "label": Data instance label. For action-related tasks, "0" corresponds to an immoral / divergent action while "1" corresponds to a moral / normative action; for consequence-related tasks, "0" corresponds to a plausible consequence while "1" corresponds to an implausible consequence (for generation tasks, the label is always set to "1").
### Data Splits
For classification tasks, we examined three data split strategies:
- *Norm Distance*: Norms are based on social consensus and may, as such, change across time and between locations. Therefore, we are also interested in how well classification models can generalize to novel norms. To estimate this, we split the dataset by embedding norms found in the collected stories and grouping them into 1k clusters via agglomerative clustering. Clusters are ordered according to their degree of isolation, defined as the cosine distance between a cluster's centroid and the next-closest cluster's centroid. Stories with norms from the most isolated clusters are assigned to the test and development sets, with the rest forming the training set.
- *Lexical Bias*: Tests the susceptibility of classifiers to surface-level lexical correlations. We first identify 100 biased lemmas that occur most frequently either in normative or divergent actions. Each story is then assigned a bias score corresponding to the total number of biased lemmas present in both actions (or consequences). Starting with the lowest bias scores, stories are assigned to the test, development, and, lastly, training set.
- *Minimal Pairs*: Evaluates the model's ability to perform nuanced social reasoning. Splits are obtained by ordering stories according to the Damerau-Levenshtein distance between their actions (or consequences) and assigning stories with lowest distances to the test set, followed by the development set. The remainder makes up the training set.
For generation tasks, only the *Norm Distance* split strategy is used. For more details, refer to [*Section 3* and *Appendix C* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
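The *Minimal Pairs* split orders stories by the edit distance between their paired actions. As an illustrative sketch, the ordering can be reproduced with the optimal-string-alignment variant of Damerau-Levenshtein distance (a common restricted form; the paper does not specify which variant was used, so this is an assumption):

```python
def osa_distance(a, b):
    """Optimal string alignment distance: Levenshtein edits plus
    adjacent-character transpositions (restricted Damerau-Levenshtein)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

# Stories whose action pairs are most similar are assigned to the test set first.
action_pairs = [("go left", "go right"), ("wake up", "wake pu")]
ranked = sorted(action_pairs, key=lambda p: osa_distance(*p))
```

Sorting by this distance and filling the test set from the lowest-distance stories yields the minimal-pair ordering described above.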
## Dataset Creation
### Curation Rationale
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the source language producers?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Annotations
#### Annotation process
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the annotators?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Discussion of Biases
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Other Known Limitations
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](https://demelin.github.io)
### Licensing Information
MIT
### Citation Information
```
@article{Emelin2021MoralSS,
  title={Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences},
  author={Denis Emelin and Ronan Le Bras and Jena D. Hwang and Maxwell Forbes and Yejin Choi},
  journal={ArXiv},
  year={2021},
  volume={abs/2012.15738}
}
``` |
GateNLP/broad_twitter_corpus | 2022-07-01T15:46:36.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | GateNLP | This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses.
The goal is to represent a broad range of activities, giving a dataset more representative of the language used
in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
For more details see [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/) | @inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
} | null | 1 | 60 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: broad-twitter-corpus
pretty_name: Broad Twitter Corpus
---
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111)
- **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en`
## Dataset Structure
### Data Instances
Feature |Count
---|---:
Documents |9 551
Tokens |165 739
Person entities |5 271
Location entities |3 114
Organization entities |3 732
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags:
- `id`: a `string` feature.
- `tokens`: a `list` of `strings`
- `ner_tags`: a `list` of class IDs (`int`s) representing the NER class:
```
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
```
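Using the class-ID mapping above, tag IDs can be converted back to label strings and grouped into entity spans. This is a minimal sketch; the tag list follows the mapping in this card, while the example tokens are illustrative only.

```python
# Tag inventory as listed in this card's Data Fields section.
NER_TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def ids_to_tags(ner_ids):
    """Map integer class IDs to their BIO tag strings."""
    return [NER_TAGS[i] for i in ner_ids]

def tags_to_spans(tokens, tags):
    """Collect (entity_text, entity_type) spans from BIO tags."""
    spans, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        spans.append((" ".join(current), etype))
    return spans

# Illustrative example (not an actual BTC tweet).
tags = ids_to_tags([1, 2, 0, 5])
spans = tags_to_spans(["Leon", "Derczynski", "visits", "Sheffield"], tags)
```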
### Data Splits
Section|Region|Collection period|Description|Annotators|Tweet count
---|---|---|---|---|---:
A | UK| 2012.01| General collection |Expert| 1000
B |UK |2012.01-02 |Non-directed tweets |Expert |2000
E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200
F |Stratified |2009-2014| Twitterati |Crowd & expert |2000
G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351
H |Non-UK| 2014 |General collection |Crowd & expert |2000
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.
**Test**: Section F
**Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
**Training**: everything else
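The recommended split above can be sketched as a simple partition over sections. This assumes each example carries a `section` key, which is not a field of the released files (they are organized per section), so the key here is purely illustrative:

```python
def btc_splits(examples):
    """Partition BTC examples into the splits recommended by this card:
    test = section F, development = section H, train = the rest.
    Assumes a hypothetical 'section' key on each example."""
    splits = {"train": [], "validation": [], "test": []}
    for ex in examples:
        if ex["section"] == "F":
            splits["test"].append(ex)
        elif ex["section"] == "H":
            splits["validation"].append(ex)
        else:
            splits["train"].append(ex)
    return splits

examples = [{"id": 1, "section": "A"},
            {"id": 2, "section": "F"},
            {"id": 3, "section": "H"}]
splits = btc_splits(examples)
```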
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
hugginglearners/anime-quotes | 2022-08-18T03:54:12.000Z | [
"region:us"
] | hugginglearners | null | null | null | 2 | 60 | Entry not found |
zyznull/msmarco-passage-ranking | 2022-09-28T03:30:10.000Z | [
"license:apache-2.0",
"region:us"
] | zyznull | null | @misc{bajaj2018ms,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu
and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song
and Alina Stoica and Saurabh Tiwary and Tong Wang},
year={2018},
eprint={1611.09268},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 60 | ---
license: apache-2.0
---
|
shivi/cheques_sample_data | 2022-11-05T21:31:01.000Z | [
"region:us"
] | shivi | null | null | null | 0 | 60 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 7518544.0
num_examples: 400
- name: train
num_bytes: 56481039.4
num_examples: 2800
- name: validation
num_bytes: 15034990.0
num_examples: 800
download_size: 58863727
dataset_size: 79034573.4
---
# Dataset Card for "cheques_sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WillHeld/mtop | 2022-12-10T17:50:10.000Z | [
"region:us"
] | WillHeld | null | null | null | 0 | 60 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: ' intent'
dtype: string
- name: ' slot'
dtype: string
- name: ' utterance'
dtype: string
- name: ' domain'
dtype: string
- name: ' locale'
dtype: string
- name: ' dcp_form'
dtype: string
- name: ' tokens'
dtype: string
- name: intent
dtype: string
- name: slot
dtype: string
- name: utterance
dtype: string
- name: domain
dtype: string
- name: locale
dtype: string
- name: dcp_form
dtype: string
- name: tokens
dtype: string
splits:
- name: eval_en
num_bytes: 2077234
num_examples: 2235
- name: test_en
num_bytes: 4090856
num_examples: 4386
- name: train_en
num_bytes: 14501480
num_examples: 15667
- name: eval_de
num_bytes: 1764320
num_examples: 1815
- name: test_de
num_bytes: 3439946
num_examples: 3549
- name: train_de
num_bytes: 13122042
num_examples: 13424
- name: eval_es
num_bytes: 1594238
num_examples: 1527
- name: test_es
num_bytes: 3089782
num_examples: 2998
- name: train_es
num_bytes: 11277514
num_examples: 10934
- name: eval_fr
num_bytes: 1607082
num_examples: 1577
- name: test_fr
num_bytes: 3289276
num_examples: 3193
- name: train_fr
num_bytes: 12147836
num_examples: 11814
- name: eval_hi
num_bytes: 2618172
num_examples: 2012
- name: test_hi
num_bytes: 3491690
num_examples: 2789
- name: train_hi
num_bytes: 14225324
num_examples: 11330
- name: eval_th
num_bytes: 2251378
num_examples: 1671
- name: test_th
num_bytes: 3654864
num_examples: 2765
- name: train_th
num_bytes: 14277512
num_examples: 10759
download_size: 16165451
dataset_size: 112520546
---
# Dataset Card for "mtop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
b-mc2/wikihow_lists | 2023-01-27T00:50:59.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-3.0",
"lists",
"bullets",
"steps",
"summary",
"region:us"
] | b-mc2 | null | null | null | 4 | 60 | ---
license: cc-by-nc-sa-3.0
task_categories:
- summarization
- question-answering
language:
- en
tags:
- lists
- bullets
- steps
- summary
pretty_name: wikihow_lists
size_categories:
- 10K<n<100K
---
# Dataset Card for WikiHow Lists
### Dataset Summary
Contains a CSV of a subset of WikiHow articles.
The subset includes articles that have summaries in numbered-list format, an unordered list of ingredients, or an unordered list of items needed for the article.
The CSV contains a pageId for referencing back to the source, the title of the article, a result column with the list data, and a column specifying the result type (ingredient, needed items, summary).
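Rows can be filtered by the result-type column with the standard library alone. The column headers below (`pageId`, `title`, `result`, `result_type`) are hypothetical: the card describes these columns but does not fix exact header names.

```python
import csv
import io

# In-memory sample standing in for the real CSV; headers are assumed.
sample = io.StringIO(
    "pageId,title,result,result_type\n"
    '1,How to Make Tea,"Boil water; Steep tea",summary\n'
    '2,How to Make Tea,"water; tea leaves",ingredient\n'
)
rows = list(csv.DictReader(sample))
# Keep only the numbered-list summaries.
summaries = [r for r in rows if r["result_type"] == "summary"]
```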
### Licensing Information
Data is from WikiHow, license for content is located here
https://www.wikihow.com/wikiHow:Creative-Commons |
yuyang/bart_cnndm | 2023-05-08T22:12:43.000Z | [
"region:us"
] | yuyang | CNN/DailyMail non-anonymized summarization dataset.
There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each
highlight, which is the target summary | @article{DBLP:journals/corr/SeeLM17,
author = {Abigail See and
Peter J. Liu and
Christopher D. Manning},
title = {Get To The Point: Summarization with Pointer-Generator Networks},
journal = {CoRR},
volume = {abs/1704.04368},
year = {2017},
url = {http://arxiv.org/abs/1704.04368},
archivePrefix = {arXiv},
eprint = {1704.04368},
timestamp = {Mon, 13 Aug 2018 16:46:08 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/SeeLM17},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{hermann2015teaching,
title={Teaching machines to read and comprehend},
author={Hermann, Karl Moritz and Kocisky, Tomas and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil},
booktitle={Advances in neural information processing systems},
pages={1693--1701},
year={2015}
} | null | 0 | 60 | Modification of the cnn_dailymail dataset in Hugging Face. The main goal is to reproduce the results on BART.
References: https://github.com/facebookresearch/fairseq/issues/1401
Major changes:
1. remove the space in " ." in fix_missing_period.
2. remove "(CNN)" in article. |
bbz662bbz/databricks-dolly-15k-ja-gozaru | 2023-05-29T12:58:37.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | bbz662bbz | null | null | null | 1 | 60 | ---
license: cc-by-sa-3.0
---
This dataset was created using "kunishou/databricks-dolly-15k-ja"
This dataset is licensed under CC BY-SA 3.0
Last Update : 2023-05-28
databricks-dolly-15k-ja-gozaru
kunishou/databricks-dolly-15k-ja
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
|
clarin-knext/dbpedia-pl | 2023-06-07T08:12:53.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 2 | 60 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
radia/wmt14-de2en | 2023-06-24T21:18:45.000Z | [
"region:us"
] | radia | null | null | null | 0 | 60 | ---
dataset_info:
features:
- name: de
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 1332850167
num_examples: 4468840
- name: val
num_bytes: 1588612
num_examples: 6003
- name: test
num_bytes: 715833
num_examples: 2737
download_size: 822597852
dataset_size: 1335154612
---
# Dataset Card for "wmt14-de2en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sharmaarushi17/HPCPerfOpt-Open-ended | 2023-09-05T15:55:59.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"license:openrail",
"code",
"region:us"
] | sharmaarushi17 | null | null | null | 0 | 60 | ---
license: openrail
pretty_name: HPCPerfOpt (HPC Performance Optimization Benchmark)
configs:
- config_name: text
data_files:
- split: test
path: "text.csv"
- config_name: code
data_files:
- split: test
path: "code.csv"
task_categories:
- question-answering
tags:
- code
size_categories:
- n<1K
---
# Dataset Card for HPCPerfOpt (HPC Performance Optimization Dataset)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a question answering dataset for OpenMP Performance Optimization questions. It contains open-ended questions of two types:
1. What is the performance issue in the given code snippet? - Text answers
2. Please generate the optimized version of the given OpenMP code for better performance. - Code answers
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
C-MTEB/PAWSX | 2023-07-28T13:43:08.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 60 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 10420251
num_examples: 49401
- name: validation
num_bytes: 457128
num_examples: 2000
- name: test
num_bytes: 458674
num_examples: 2000
download_size: 8881168
dataset_size: 11336053
---
# Dataset Card for "PAWSX"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/scitldr | 2023-08-31T19:47:53.000Z | [
"region:us"
] | dim | null | null | null | 0 | 60 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 4016919
num_examples: 3229
download_size: 2222180
dataset_size: 4016919
---
# Dataset Card for "scitldr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/linux_man_pages_tldr_summarized | 2023-08-31T19:56:32.000Z | [
"region:us"
] | dim | null | null | null | 0 | 60 | ---
dataset_info:
features:
- name: Command
dtype: string
- name: Text
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 3006835
num_examples: 481
download_size: 1308915
dataset_size: 3006835
---
# Dataset Card for "linux_man_pages_tldr_summarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mirfan899/hindi-ner | 2023-09-19T06:19:28.000Z | [
"region:us"
] | mirfan899 | null | null | null | 0 | 60 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': LOCATION
'1': BRAND
'2': TITLE_OBJECT
'3': PERSON
'4': DESIGNATION
'5': ORGANIZATION
'6': ABBREVIATION
'7': TIME
'8': NUMBER
'9': MEASURE
'10': TERMS
'11': O
splits:
- name: train
num_bytes: 230700924
num_examples: 383127
- name: validation
num_bytes: 98919407
num_examples: 164198
- name: test
num_bytes: 98919407
num_examples: 164198
download_size: 77712066
dataset_size: 428539738
---
# Dataset Card for "hindi-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bobbybelajar/Llama2SummaryPlusSentiment | 2023-09-30T06:06:11.000Z | [
"region:us"
] | bobbybelajar | null | null | null | 0 | 60 | Entry not found |
gap | 2023-04-05T10:06:30.000Z | [
"task_categories:token-classification",
"task_ids:coreference-resolution",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1810.05201",
"region:us"
] | null | GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of
(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by
Google AI Language for the evaluation of coreference resolution in practical
applications. | @article{DBLP:journals/corr/abs-1810-05201,
author = {Kellie Webster and
Marta Recasens and
Vera Axelrod and
Jason Baldridge},
title = {Mind the {GAP:} {A} Balanced Corpus of Gendered Ambiguous Pronouns},
journal = {CoRR},
volume = {abs/1810.05201},
year = {2018},
url = {http://arxiv.org/abs/1810.05201},
archivePrefix = {arXiv},
eprint = {1810.05201},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1810-05201},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 2 | 59 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: GAP Benchmark Suite
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- coreference-resolution
paperswithcode_id: gap
dataset_info:
features:
- name: ID
dtype: string
- name: Text
dtype: string
- name: Pronoun
dtype: string
- name: Pronoun-offset
dtype: int32
- name: A
dtype: string
- name: A-offset
dtype: int32
- name: A-coref
dtype: bool
- name: B
dtype: string
- name: B-offset
dtype: int32
- name: B-coref
dtype: bool
- name: URL
dtype: string
splits:
- name: train
num_bytes: 1095623
num_examples: 2000
- name: validation
num_bytes: 248329
num_examples: 454
- name: test
num_bytes: 1090462
num_examples: 2000
download_size: 2401971
dataset_size: 2434414
---
# Dataset Card for "gap"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/gap-coreference](https://github.com/google-research-datasets/gap-coreference)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns](https://arxiv.org/abs/1810.05201)
- **Point of Contact:** [gap-coreference@google.com](mailto:gap-coreference@google.com)
- **Size of downloaded dataset files:** 2.40 MB
- **Size of the generated dataset:** 2.43 MB
- **Total amount of disk used:** 4.83 MB
### Dataset Summary
GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of
(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by
Google AI Language for the evaluation of coreference resolution in practical
applications.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.40 MB
- **Size of the generated dataset:** 2.43 MB
- **Total amount of disk used:** 4.83 MB
An example of 'validation' looks as follows.
```
{
"A": "aliquam ultrices sagittis",
"A-coref": false,
"A-offset": 208,
"B": "elementum curabitur vitae",
"B-coref": false,
"B-offset": 435,
"ID": "validation-1",
"Pronoun": "condimentum mattis pellentesque",
"Pronoun-offset": 948,
"Text": "Lorem ipsum dolor",
"URL": "sem fringilla ut"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `ID`: a `string` feature.
- `Text`: a `string` feature.
- `Pronoun`: a `string` feature.
- `Pronoun-offset`: a `int32` feature.
- `A`: a `string` feature.
- `A-offset`: a `int32` feature.
- `A-coref`: a `bool` feature.
- `B`: a `string` feature.
- `B-offset`: a `int32` feature.
- `B-coref`: a `bool` feature.
- `URL`: a `string` feature.
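The offset fields above are character offsets into `Text`, so each candidate name can be verified against the raw string — a useful sanity check before tokenization. The example text below is illustrative, not drawn from the dataset:

```python
def candidate_is_at_offset(text, name, offset):
    """Check that a GAP candidate (or pronoun) span matches the text
    at its character offset."""
    return text[offset:offset + len(name)] == name

# Hypothetical GAP-style example.
text = "Kathleen first met John when she moved to the city."
checks = [
    candidate_is_at_offset(text, "Kathleen", 0),  # candidate A
    candidate_is_at_offset(text, "John", 19),     # candidate B
    candidate_is_at_offset(text, "she", 29),      # pronoun
]
```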
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 2000| 454|2000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{webster-etal-2018-mind,
title = "Mind the {GAP}: A Balanced Corpus of Gendered Ambiguous Pronouns",
author = "Webster, Kellie and
Recasens, Marta and
Axelrod, Vera and
Baldridge, Jason",
journal = "Transactions of the Association for Computational Linguistics",
volume = "6",
year = "2018",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q18-1042",
doi = "10.1162/tacl_a_00240",
pages = "605--617",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset. |
# Dataset Card for metaeval/recast

- **Author:** metaeval
- **Last modified:** 2023-06-02
- **Description:** A diverse collection of tasks recast as natural language inference tasks.

---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'recast_nli'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
tags:
- nli
- natural-language-inference
---
http://decomp.io/