id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
syaoran312/VHAC_QA | syaoran312 | 2023-10-13T17:56:39Z | 79 | 0 | null | [
"region:us"
] | 2023-10-13T17:56:39Z | 2023-10-13T17:56:06.000Z | 2023-10-13T17:56:06 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aman279/Multiple_Session_Chat_test | Aman279 | 2023-10-30T21:30:29Z | 79 | 0 | null | [
"region:us"
] | 2023-10-30T21:30:29Z | 2023-10-30T16:07:43.000Z | 2023-10-30T16:07:43 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/ccnews_title_text_1107 | multi-train | 2023-11-10T18:37:51Z | 79 | 0 | null | [
"region:us"
] | 2023-11-10T18:37:51Z | 2023-11-10T18:36:58.000Z | 2023-11-10T18:36:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 853741640
num_examples: 200000
download_size: 482727447
dataset_size: 853741640
---
# Dataset Card for "ccnews_title_text_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2940572500228882,
-0.19020456075668335,
0.38844168186187744,
0.4324030578136444,
-0.4065224230289459,
0.046435222029685974,
-0.008903280831873417,
-0.025403017178177834,
0.7872346043586731,
0.6238093376159668,
-0.7060080170631409,
-0.8464412689208984,
-0.6754844784736633,
-0.09093102067... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AudioDecBenchmark/librispeech_asr_test | AudioDecBenchmark | 2023-11-18T10:51:16Z | 79 | 0 | null | [
"region:us"
] | 2023-11-18T10:51:16Z | 2023-11-15T12:47:30.000Z | 2023-11-15T12:47:30 | ---
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k
path: data/encodec_24k-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 31486926
num_examples: 2620
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 31486926
num_examples: 2620
- name: academicodec_hifi_24k_320d
num_bytes: 47043726
num_examples: 2620
- name: audiodec_24k_320d
num_bytes: 100100814
num_examples: 2620
- name: dac_16k
num_bytes: 194813390
num_examples: 2620
- name: dac_24k
num_bytes: 548721230
num_examples: 2620
- name: dac_44k
num_bytes: 163255358
num_examples: 2620
- name: encodec_24k
num_bytes: 23734398
num_examples: 2620
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 250099790
num_examples: 2620
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 250099790
num_examples: 2620
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 249923918
num_examples: 2620
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 125470798
num_examples: 2620
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 249923918
num_examples: 2620
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 249923918
num_examples: 2620
- name: speech_tokenizer_16k
num_bytes: 62745230
num_examples: 2620
download_size: 406443893
dataset_size: 2578830130
---
# Dataset Card for "librispeech_asr_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6662995219230652,
-0.2509519159793854,
-0.07689271122217178,
0.20222070813179016,
-0.11444766819477081,
-0.08131115138530731,
0.17899370193481445,
-0.18029196560382843,
0.7526699900627136,
0.336893230676651,
-0.7354253530502319,
-0.5083706974983215,
-0.4981008470058441,
-0.3402175605297... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
persiannlp/parsinlu_translation_en_fa | persiannlp | 2022-10-24T16:50:37Z | 78 | 1 | null | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:fa",
"multilinguality:en",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | 2022-10-24T16:50:37Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- fa
- en
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# Dataset Card for ParsiNLU (Machine Translation)
## Table of Contents
- [Dataset Card for ParsiNLU (Machine Translation)](#dataset-card-for-parsinlu-machine-translation)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian translation dataset (English -> Persian).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`) and English (`en`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"source": "how toil to raise funds, propagate reforms, initiate institutions!",
"targets": ["چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد."],
"category": "mizan_dev_en_fa"
}
```
### Data Fields
- `source`: the input sentences, in English.
- `targets`: the list of gold target translations in Persian.
- `category`: the source from which the dataset is mined.
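Since `targets` is a list, one record can carry several reference translations. A minimal sketch of flattening such records into (source, target) pairs (the `example` dict mirrors the instance above; the flattening loop itself is illustrative, not part of the dataset loader):

```python
# One record, mirroring the instance shown above.
example = {
    "source": "how toil to raise funds, propagate reforms, initiate institutions!",
    "targets": ["چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد."],
    "category": "mizan_dev_en_fa",
}

# Flatten the one-to-many references into (source, target) training pairs.
pairs = [(example["source"], target) for target in example["targets"]]
print(len(pairs))  # → 1
```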
### Data Splits
The train/dev/test splits contain 1,621,666/2,138/48,360 samples, respectively.
## Dataset Creation
### Curation Rationale
For details, see [the ParsiNLU paper](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{khashabi2020parsinlu,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
| [
-0.3947049081325531,
-0.6276271343231201,
0.12968163192272186,
0.27682816982269287,
-0.3021584749221802,
0.038637254387140274,
-0.6303620338439941,
-0.08842294663190842,
0.3883608281612396,
0.4571748673915863,
-0.6689337491989136,
-0.992798924446106,
-0.7459494471549988,
0.4243751168251037... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sia-precision-education/pile_python | sia-precision-education | 2022-01-25T01:24:47Z | 78 | 3 | null | [
"region:us"
] | 2022-01-25T01:24:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Team-PIXEL/rendered-wikipedia-english | Team-PIXEL | 2022-08-02T14:01:21Z | 78 | 2 | null | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"license:gfdl",
"arxiv:2207.06991",
"region:us"
] | 2022-08-02T14:01:21Z | 2022-05-11T14:52:06.000Z | 2022-05-11T14:52:06 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
pretty_name: Team-PIXEL/rendered-wikipedia-english
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- masked-auto-encoding
- rendered-language-modelling
task_ids:
- masked-auto-encoding
- rendered-language-modelling
paperswithcode_id: null
---
# Dataset Card for Team-PIXEL/rendered-wikipedia-english
## Dataset Description
- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Paper:** [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk)
- **Size of downloaded dataset files:** 125.66 GB
- **Size of the generated dataset:** 125.56 GB
- **Total amount of disk used:** 251.22 GB
### Dataset Summary
This dataset contains the full English Wikipedia from February 1, 2018, rendered into images of 16x8464 resolution.
The original text dataset was built from a [Wikipedia dump](https://dumps.wikimedia.org/). Each example in the original *text* dataset contained the content of one full Wikipedia article with cleaning to strip markdown and unwanted sections (references, etc.). Each *rendered* example contains a subset of one full article. This rendered English Wikipedia was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.
The original Wikipedia text dataset was rendered article-by-article into 11.4M examples containing approximately 2B words in total. The dataset is stored as a collection of 338 parquet files.
It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_wikipedia.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_wikipedia.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the Wikipedia data have not been rendered accurately.
Each example consists of a "pixel_values" field storing a 16x8464 (height, width) grayscale image containing the rendered text, and an integer field "num_patches" recording how many of the image's patches (when the image is split into 529 non-overlapping patches of resolution 16x16 pixels) contain actual text, i.e. are neither blank (fully white) nor the fully black end-of-sequence patch.
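The patch arithmetic above (8464 / 16 = 529 patches) can be sketched with NumPy; the image dimensions and the blank/end-of-sequence convention come from the description, while the synthetic image and the `count_text_patches` helper are illustrative, not the repository's actual code:

```python
import numpy as np

def count_text_patches(pixel_values: np.ndarray) -> int:
    """Count 16x16 patches that are neither fully white (blank)
    nor fully black (the end-of-sequence patch)."""
    height, width = pixel_values.shape  # expected (16, 8464)
    patches = pixel_values.reshape(height // 16, 16, width // 16, 16)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, 16, 16)  # 529 patches
    blank = np.all(patches == 255, axis=(1, 2))
    eos = np.all(patches == 0, axis=(1, 2))
    return int(np.sum(~blank & ~eos))

# Synthetic example: 100 gray "text" patches, one black EOS patch, rest white.
img = np.full((16, 8464), 255, dtype=np.uint8)
img[:, :100 * 16] = 128          # 100 patches containing "text"
img[:, 100 * 16:101 * 16] = 0    # end-of-sequence patch
print(count_text_patches(img))   # → 100
```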
You can load the dataset as follows:
```python
from datasets import load_dataset
# Download the full dataset to disk
load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train")
# Stream the dataset directly from the hub
load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train", streaming=True)
```
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 125.66 GB
- **Size of the generated dataset:** 125.56 GB
- **Total amount of disk used:** 251.22 GB
An example of 'train' looks as follows.
```
{
"pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16
"num_patches": "469"
}
```
### Data Fields
The data fields are the same among all splits.
- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.
### Data Splits
|train|
|:----|
|11446535|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA) and the GNU Free Documentation License (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes the text.
### Citation Information
```bibtex
@article{rust-etal-2022-pixel,
title={Language Modelling with Pixels},
author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
journal={arXiv preprint},
year={2022},
url={https://arxiv.org/abs/2207.06991}
}
```
### Contact Person
This dataset was added by Phillip Rust.
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip) | [
-0.7004536390304565,
-0.6671557426452637,
0.11848518997430801,
0.03563408553600311,
-0.2573820650577545,
-0.09599564224481583,
-0.45914509892463684,
-0.517002284526825,
0.48640337586402893,
0.2311282455921173,
-0.631515622138977,
-0.7116259932518005,
-0.4766847789287567,
0.0870963111519813... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlp-guild/intent-recognition-biomedical | nlp-guild | 2022-09-22T16:13:44Z | 78 | 1 | null | [
"license:mit",
"region:us"
] | 2022-09-22T16:13:44Z | 2022-09-22T16:10:30.000Z | 2022-09-22T16:10:30 | ---
license: mit
---
[source](https://github.com/wangle1218/KBQA-for-Diagnosis/tree/main/nlu/bert_intent_recognition/data) | [
0.026299631223082542,
-0.6413291692733765,
0.7601169347763062,
0.24570493400096893,
-0.1357315480709076,
-0.4658263325691223,
0.0208679661154747,
-0.5366176962852478,
0.3380131721496582,
0.22647476196289062,
-0.8024703860282898,
-0.5860472917556763,
-0.1502838283777237,
-0.2753130495548248... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gbharti/wealth-alpaca_lora | gbharti | 2023-04-07T09:12:57Z | 78 | 12 | null | [
"language:en",
"region:us"
] | 2023-04-07T09:12:57Z | 2023-04-03T02:17:27.000Z | 2023-04-03T02:17:27 | ---
language:
- en
---
This dataset combines Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with a further 1.3k pairs custom-generated using GPT-3.5.
A script for fine-tuning on Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRA is available at: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora | [
-0.5171543955802917,
-0.7889664769172668,
0.1142718493938446,
0.09520992636680603,
-0.2963547706604004,
-0.021375130861997604,
0.22444404661655426,
-0.5989115238189697,
0.7208644151687622,
0.627961277961731,
-0.6234514117240906,
-0.5901315212249756,
-0.4211452901363373,
-0.3589833080768585... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
diffusers/dog-example | diffusers | 2023-04-18T15:53:56Z | 78 | 5 | null | [
"region:us"
] | 2023-04-18T15:53:56Z | 2023-04-18T15:53:06.000Z | 2023-04-18T15:53:06 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bprateek/amazon_product_description | bprateek | 2023-05-17T20:12:35Z | 78 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-05-17T20:12:35Z | 2023-04-23T13:43:41.000Z | 2023-04-23T13:43:41 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abobster/pushkin_new | abobster | 2023-05-05T16:31:35Z | 78 | 0 | null | [
"region:us"
] | 2023-05-05T16:31:35Z | 2023-05-05T16:31:11.000Z | 2023-05-05T16:31:11 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pavithrav/testing2 | pavithrav | 2023-08-30T08:38:33Z | 78 | 0 | null | [
"region:us"
] | 2023-08-30T08:38:33Z | 2023-08-30T08:37:44.000Z | 2023-08-30T08:37:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/indolem_ntp | SEACrowd | 2023-09-26T12:30:22Z | 78 | 0 | null | [
"language:ind",
"license:cc-by-4.0",
"next-sentence-prediction",
"arxiv:2011.00677",
"region:us"
] | 2023-09-26T12:30:22Z | 2023-09-26T11:11:20.000Z | 2023-09-26T11:11:20 | ---
license: cc-by-4.0
tags:
- next-sentence-prediction
language:
- ind
---
# indolem_ntp
NTP (Next Tweet Prediction) is part of IndoLEM, a comprehensive Indonesian NLP benchmark. Given a Twitter thread and a candidate tweet, the task is to predict whether the candidate is the actual next tweet in the thread.
This task is similar to the next sentence prediction (NSP) task used to train BERT (Devlin et al., 2019).
In NTP, each instance consists of a Twitter thread (containing 2 to 4 tweets), called the premise, and four candidate next tweets, one of which is the actual response from the original thread.
- Train: 5,681 threads
- Development: 811 threads
- Test: 1,890 threads
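The task structure described above can be sketched as follows (the field names in this dict are hypothetical, not the loader's actual schema; the argmax scorer stands in for a real model):

```python
# Illustrative NTP instance: a premise thread plus four candidate next
# tweets, exactly one of which continues the original thread.
instance = {
    "tweets": ["tweet 1", "tweet 2", "tweet 3"],           # the premise thread
    "options": ["cand A", "cand B", "cand C", "cand D"],   # four candidates
    "label": 2,  # index of the actual next tweet
}

def predict(scores):
    """A model scores each (thread, candidate) pair; predict the argmax."""
    return max(range(len(scores)), key=scores.__getitem__)

print(predict([0.1, 0.3, 0.9, 0.2]))  # → 2
```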
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{DBLP:journals/corr/abs-2011-00677,
author = {Fajri Koto and
Afshin Rahimi and
Jey Han Lau and
Timothy Baldwin},
title = {IndoLEM and IndoBERT: {A} Benchmark Dataset and Pre-trained Language
Model for Indonesian {NLP}},
journal = {CoRR},
volume = {abs/2011.00677},
year = {2020},
url = {https://arxiv.org/abs/2011.00677},
eprinttype = {arXiv},
eprint = {2011.00677},
timestamp = {Fri, 06 Nov 2020 15:32:47 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2011-00677.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License
Creative Commons Attribution 4.0
## Homepage
[https://indolem.github.io/](https://indolem.github.io/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.4032706022262573,
-0.42774540185928345,
-0.00011049102613469586,
0.5200394988059998,
-0.4395858347415924,
0.05873996764421463,
-0.19313101470470428,
-0.8005737066268921,
0.26946401596069336,
0.42304033041000366,
-0.49950093030929565,
-0.3862529993057251,
-0.5147331953048706,
0.616285562... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erhwenkuo/medical_dialogue-chinese-zhtw | erhwenkuo | 2023-09-27T23:06:14Z | 78 | 2 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:mit",
"region:us"
] | 2023-09-27T23:06:14Z | 2023-09-27T22:27:35.000Z | 2023-09-27T22:27:35 | ---
language:
- zh
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-generation
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 553726613
num_examples: 799743
download_size: 288686981
dataset_size: 553726613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "medical_dialogue-chinese-zhtw"
A Traditional Chinese medical question-answering dataset.
## Source
This dataset was converted from the [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data) GitHub repo.
## Contents
| Department | Size |
|---|---|
| **Andriatria (andrology)** | 94,596 QA pairs |
| **IM (internal medicine)** | 220,606 QA pairs |
| **OAGD (obstetrics and gynecology)** | 183,751 QA pairs |
| **Oncology** | 75,553 QA pairs |
| **Pediatrics** | 101,602 QA pairs |
| **Surgical (surgery)** | 115,991 QA pairs |

792,099 records in total
## Example
```json
{
"instruction": "現在你是個神經腦外科醫生,請根據病人的問題給予建議:",
"input": "癲癇病能吃德巴金嗎,錯覺,有時候感覺看到的和聽到的不太一樣。",
"output": "巴金是廣譜抗病藥物,主要作用於中樞神經系統,對動物的藥理研究發現德巴金對各種癲癇的實驗模型(全身性和局部性)均有抗驚厥作用,對人的各種類型癲癇發作有抑製作用,作用機理可能與增加γ-氨基丁酸的濃度有關。主要是治癲癇藥物。建議在醫生的知道下,用藥,祝您身體早日康復。"
}
```
## Fields
```
instruction: the instruction (prompt)
input: the input
output: the output
```
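The three fields are typically assembled into a single prompt for instruction tuning; a minimal sketch (the Alpaca-style template and the `build_prompt` helper are assumptions for illustration, not part of this dataset):

```python
def build_prompt(record: dict) -> str:
    """Join instruction and input into an Alpaca-style prompt (illustrative template)."""
    if record.get("input"):
        return f"{record['instruction']}\n{record['input']}\n### Answer:\n"
    return f"{record['instruction']}\n### Answer:\n"

record = {
    "instruction": "現在你是個神經腦外科醫生,請根據病人的問題給予建議:",
    "input": "癲癇病能吃德巴金嗎?",
    "output": "(model target)",
}
prompt = build_prompt(record)
# The `output` field is kept separate as the training target.
target = record["output"]
```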
## Usage Restrictions
This dataset is intended for research on large language models and must not be used for purposes that could harm society.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset. | [
-0.23509229719638824,
-0.5891331434249878,
0.3884643018245697,
0.14159338176250458,
-0.5870157480239868,
-0.20830178260803223,
-0.1754353642463684,
-0.14466305077075958,
0.32534855604171753,
0.6040660738945007,
-0.8019924163818359,
-1.1987662315368652,
-0.6333080530166626,
-0.0102769490331... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yashnbx/iamgroot-2 | yashnbx | 2023-09-29T19:18:59Z | 78 | 0 | null | [
"region:us"
] | 2023-09-29T19:18:59Z | 2023-09-29T19:17:24.000Z | 2023-09-29T19:17:24 | test dataset | [
-0.4610620439052582,
-0.46676865220069885,
-0.3216044008731842,
0.09809492528438568,
-0.01660213991999626,
-0.41339367628097534,
-0.015327304601669312,
0.2377195656299591,
-0.07107837498188019,
0.780576229095459,
-0.6895266771316528,
-0.21844765543937683,
-0.22520288825035095,
0.2675228714... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ProlificAI/social-reasoning-rlhf | ProlificAI | 2023-10-11T08:50:59Z | 78 | 4 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"human-feedback",
"rlhf",
"region:us"
] | 2023-10-11T08:50:59Z | 2023-10-10T23:45:21.000Z | 2023-10-10T23:45:21 | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: Social Reasoning RLHF
size_categories:
- 1K<n<10K
tags:
- human-feedback
- rlhf
---
## Dataset Summary
This repository provides access to a social reasoning dataset that aims to capture how humans navigate social situations, how they reason about them, and how they understand each other. It contains questions probing people's thinking and understanding in various social situations.
This dataset was created by collating a set of questions within the following social reasoning tasks:
* understanding of emotions
* intent recognition
* social norms
* social responsibility
* reading of social cues
* perspective taking
* conflict resolution
* ethics
* moral judgement
* communication skills
* negotiation strategies
* understanding of empathy
* understanding of compassion
* understanding of trust
* understanding and use of humour
* showing kindness
* navigating diversity and cultural differences
* use of figurative language
* self-awareness
We asked a group of participants to provide their responses to the given questions; then we asked another group of participants to rate those responses in a pairwise comparison setting. The format of the dataset is as follows:
```json
{
"question": "Question",
"chosen": "The chosen response",
"rejected": "The rejected response"
}
```
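Records in this format drop straight into preference-based training; a minimal sketch of turning them into labelled pairs for a reward-model loss (the sample records and the `to_preference_pairs` helper are illustrative, not part of the dataset):

```python
# Illustrative records in the question/chosen/rejected format shown above.
records = [
    {"question": "Q1", "chosen": "kind reply", "rejected": "curt reply"},
    {"question": "Q2", "chosen": "empathetic reply", "rejected": "dismissive reply"},
]

def to_preference_pairs(rows):
    """Yield (prompt, preferred, dispreferred) tuples for pairwise training."""
    for row in rows:
        yield (row["question"], row["chosen"], row["rejected"])

pairs = list(to_preference_pairs(records))
print(len(pairs))  # → 2
```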
## Disclaimer
The guidelines encouraged participants to provide respectful, empathetic and inclusive responses; however, the dataset may still contain responses that some may find offensive or upsetting.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("ProlificAI/social-reasoning-rlhf")
```
## About Prolific
Robust AI is built on high-quality human data. [Prolific](https://www.prolific.com/) makes it easy to get honest, accurate feedback on your models, from our balanced and vetted pool of taskers.
### Contact
Got any questions? Email ai@prolific.co | [
-0.3450089395046234,
-0.7605215311050415,
0.3946160674095154,
0.3537626564502716,
-0.31120601296424866,
0.07142949104309082,
-0.17618626356124878,
-0.3827100992202759,
0.20924456417560577,
0.5025009512901306,
-0.637633740901947,
-0.6348316669464111,
-0.5464802980422974,
0.1551477611064911,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexandrainst/wiki40b-da | alexandrainst | 2023-10-27T19:08:09Z | 78 | 0 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-10-27T19:08:09Z | 2023-10-27T18:47:11.000Z | 2023-10-27T18:47:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 220855898
num_examples: 109486
- name: validation
num_bytes: 12416304
num_examples: 6173
- name: test
num_bytes: 12818380
num_examples: 6219
download_size: 150569852
dataset_size: 246090582
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- da
pretty_name: Wiki40b-da
size_categories:
- 100K<n<1M
---
# Dataset Card for "wiki40b-da"
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB
### Dataset Summary
This dataset is an upload of the Danish part of the [Wiki40b dataset](https://aclanthology.org/2020.lrec-1.297), being a cleaned version of a dump of Wikipedia.
The dataset is identical in content to [this dataset on the Hugging Face Hub](https://huggingface.co/datasets/wiki40b), but that one requires both `apache_beam`, `tensorflow` and `mwparserfromhell`, which can lead to dependency issues since these are not compatible with several newer packages.
The training, validation and test splits are the original ones.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB
An example from the dataset looks as follows.
```
{
'wikidata_id': 'Q17341862',
'text': "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange (...)",
'version_id': '9018011197452276273'
}
```
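The `text` field keeps the Wiki40b structure markers (`_START_ARTICLE_`, `_START_PARAGRAPH_`, and `_NEWLINE_` inside paragraphs). A minimal sketch of splitting such a string into a title and a list of paragraphs (the helper name is illustrative, and real articles may contain further markers such as `_START_SECTION_` that this sketch does not handle):

```python
def parse_wiki40b_text(text: str):
    """Split a Wiki40b-style `text` field into (title, paragraphs)."""
    title = ""
    paragraphs = []
    for chunk in text.split("_START_PARAGRAPH_"):
        if "_START_ARTICLE_" in chunk:
            # The leading chunk carries the article title.
            title = chunk.split("_START_ARTICLE_")[1].strip()
        else:
            # Paragraphs use _NEWLINE_ as a line-break placeholder.
            paragraphs.append(chunk.replace("_NEWLINE_", "\n").strip())
    return title, paragraphs
```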
### Data Fields
The data fields are the same among all splits.
- `wikidata_id`: a `string` feature.
- `text`: a `string` feature.
- `version_id`: a `string` feature.
### Dataset Statistics
There are 109,486 samples in the training split, 6,173 samples in the validation split and 6,219 in the test split.
#### Document Length Distribution

## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) uploaded it to the Hugging Face Hub.
### Licensing Information
The dataset is licensed under the [CC-BY-SA
license](https://creativecommons.org/licenses/by-sa/4.0/). | [
-0.9023063778877258,
-0.5351514220237732,
0.019759366288781166,
0.23870337009429932,
-0.2314666360616684,
-0.2648005783557892,
-0.2932635247707367,
-0.5401090979576111,
0.4872399866580963,
0.26955533027648926,
-0.8023917078971863,
-0.6654422879219055,
-0.5326340794563293,
0.288154691457748... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fm-updates-alpaca-7b | coastalcph | 2023-11-25T21:49:37Z | 78 | 0 | null | [
"region:us"
] | 2023-11-25T21:49:37Z | 2023-11-08T13:29:51.000Z | 2023-11-08T13:29:51 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: query
struct:
- name: label
dtype: string
- name: objects
list:
- name: aliases
sequence: string
- name: label
dtype: string
- name: qid
dtype: string
- name: qid
dtype: string
- name: rel_id
dtype: string
- name: relation
dtype: string
- name: prediction
struct:
- name: predictions
list:
- name: answer
dtype: string
- name: first_token_probability
dtype: float64
- name: per_token_probability
sequence: float64
- name: perplexity
dtype: float64
- name: query
dtype: string
- name: f1
dtype: float64
- name: relation
dtype: string
- name: type
dtype: string
- name: original_answer
dtype: string
- name: updates
sequence: string
splits:
- name: test
num_bytes: 191644.60720614294
num_examples: 492
- name: validation
num_bytes: 46742.5871234495
num_examples: 120
download_size: 181290
dataset_size: 238387.19432959246
---
# Dataset Card for "fm-updates-alpaca-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6885966658592224,
-0.3125612139701843,
0.26054492592811584,
0.5438785552978516,
-0.4621789753437042,
-0.18119210004806519,
0.3831740915775299,
-0.38073402643203735,
0.8933089375495911,
0.4666356146335602,
-0.9576447606086731,
-0.8020948171615601,
-0.7794142961502075,
-0.1325364410877227... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
myvision/gender-classification | myvision | 2022-04-26T17:46:55Z | 77 | 0 | null | [
"region:us"
] | 2022-04-26T17:46:55Z | 2022-04-25T15:31:04.000Z | 2022-04-25T15:31:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MLRS/korpus_malti | MLRS | 2022-08-30T08:59:09Z | 77 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:mt"... | 2022-08-30T08:59:09Z | 2022-05-11T12:47:44.000Z | 2022-05-11T12:47:44 | ---
pretty_name: Korpus Malti
language:
- mt
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
language_creators:
- found
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
license:
- cc-by-nc-sa-4.0
---
# Korpus Malti 🇲🇹
General Corpora for the Maltese Language.
This dataset is composed of texts from various genres/domains written in Maltese.
## Configurations
### Shuffled data
The default configuration (`"shuffled"`) yields the entire corpus from all genres:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti")
```
All sentences are combined and shuffled, without preserving the original sentence order.
No other annotations are present, so an instance would be of the following form:
```json
{
"text": "Din hija sentenza."
}
```
The training/validation/testing split is what was used to train the [BERTu](https://huggingface.co/MLRS/BERTu) model.
### Domain-split data
All other configurations contain a subset of the data.
For instance, this loads the Wikipedia portion:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti", "wiki")
```
For these configurations the data is not shuffled, so the sentence order on a document level is preserved.
An instance from these configurations would take the following form:
```json
{
"text": ["Din hija sentenza.", "U hawn oħra!"],
}
```
The raw data files contain additional metadata.
Its structure differs from one instance to another, depending on what's available from the source.
This information was typically scraped from the source itself, and only minimal processing is performed on such data.
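For downstream processing, the document-level sentence lists of the domain-split configurations can be flattened back into a sentence stream with a small helper (a sketch; `iter_sentences` is not part of the dataset):

```python
def iter_sentences(split):
    """Yield sentences in document order from a domain-split configuration."""
    for doc in split:
        # Each instance's `text` field is a list of sentences of one document.
        for sentence in doc["text"]:
            yield sentence
```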
## Additional Information
### Dataset Curators
The dataset was created by [Albert Gatt](https://albertgatt.github.io), [Kurt Micallef](https://www.um.edu.mt/profile/kurtmicallef), [Marc Tanti](https://www.um.edu.mt/profile/marctanti), [Lonneke van der Plas](https://sites.google.com/site/lonnekenlp/) and [Claudia Borg](https://www.um.edu.mt/profile/claudiaborg).
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/).
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
### Citation Information
This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://aclanthology.org/2022.deeplo-1.10/).
Cite it as follows:
```bibtex
@inproceedings{BERTu,
title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese",
author = "Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia",
booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing",
month = jul,
year = "2022",
address = "Hybrid",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.deeplo-1.10",
doi = "10.18653/v1/2022.deeplo-1.10",
pages = "90--101",
}
```
| [
-0.6913619637489319,
-0.7907344698905945,
0.400275856256485,
-0.08128181099891663,
-0.3701074719429016,
-0.1787242293357849,
-0.5867846608161926,
-0.3116469383239746,
0.33948084712028503,
0.5577808618545532,
-0.6137349009513855,
-0.5873428583145142,
-0.5146538615226746,
0.3395271301269531,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codeparrot/github-jupyter-parsed | codeparrot | 2022-10-25T09:30:23Z | 77 | 4 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"language:code",
"license:other",
"region:us"
] | 2022-10-25T09:30:23Z | 2022-07-06T13:09:04.000Z | 2022-07-06T13:09:04 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# GitHub Jupyter Dataset
## Dataset Description
This is a parsed and preprocessed version of [GitHub-Jupyter Dataset](https://huggingface.co/datasets/codeparrot/github-jupyter), a dataset extracted from Jupyter Notebooks on BigQuery. We only keep markdown and python cells and convert the markdown to text. Some heuristics are also applied to filter notebooks with little data and very long or very short cells.
## Licenses
Each example has the license of its associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
```
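Because each example carries the license of its repository, downstream use can filter on it. A minimal sketch (the `license` field name and the permissive allow-list are assumptions for illustration, not part of this card):

```python
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause",
              "isc", "unlicense", "cc0-1.0"}

def keep_permissive(examples):
    """Keep only examples whose repository license is in the allow-list."""
    return [ex for ex in examples if ex.get("license") in PERMISSIVE]
```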
| [
-0.28879880905151367,
-0.5242307186126709,
0.49923020601272583,
0.23072277009487152,
-0.3338952958583832,
0.22531691193580627,
-0.42962899804115295,
-0.09980729967355728,
0.4543735086917877,
0.9075354933738708,
-0.2694844603538513,
-0.7816299200057983,
-0.38280805945396423,
0.2044864594936... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
embedding-data/Amazon-QA | embedding-data | 2022-08-02T03:36:27Z | 77 | 1 | embedding-data/Amazon-QA | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"region:us"
] | 2022-08-02T03:36:27Z | 2022-07-08T17:03:12.000Z | 2022-07-08T17:03:12 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/Amazon-QA
pretty_name: Amazon-QA
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "Amazon-QA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://jmcauley.ucsd.edu/data/amazon/qa/](http://jmcauley.ucsd.edu/data/amazon/qa/)
- **Repository:** [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [Julian McAuley](https://cseweb.ucsd.edu//~jmcauley/#)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:** 247 MB
### Dataset Summary
This dataset contains Question and Answer data from Amazon.
Disclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary:
```
{"query": [sentence_1], "pos": [sentence_2]}
{"query": [sentence_1], "pos": [sentence_2]}
...
{"query": [sentence_1], "pos": [sentence_2]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/Amazon-QA")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['query', 'pos'],
num_rows: 1095290
})
})
```
Review an example `i` with:
```python
dataset["train"][0]
```
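Since each row pairs a query with its positive answers, a common preprocessing step flattens rows into (query, positive) pairs. A sketch, assuming `query` is a string and `pos` a list of answer strings (check this against the actual features before relying on it):

```python
def to_training_pairs(rows):
    """Flatten {"query": ..., "pos": [...]} rows into (query, positive) tuples."""
    pairs = []
    for row in rows:
        for positive in row["pos"]:
            pairs.append((row["query"], positive))
    return pairs
```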
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
#### Who are the source language producers?
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Annotations
#### Annotation process
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
#### Who are the annotators?
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Personal and Sensitive Information
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Discussion of Biases
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Other Known Limitations
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
## Additional Information
### Dataset Curators
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Licensing Information
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Citation Information
### Contributions
| [
-0.40493685007095337,
-0.6353532671928406,
0.16065137088298798,
0.16524653136730194,
-0.1649221032857895,
-0.04062971845269203,
0.11791439354419708,
-0.3378963768482208,
0.406394362449646,
0.6458073258399963,
-0.831373929977417,
-0.7778000235557556,
-0.42854100465774536,
0.1810610890388488... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PlanTL-GOB-ES/wnli-es | PlanTL-GOB-ES | 2022-11-18T12:03:25Z | 77 | 2 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-18T12:03:25Z | 2022-09-16T13:51:45.000Z | 2022-09-16T13:51:45 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: wnli-es
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# WNLI-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Spanish of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
* Spanish (es)
## Dataset Structure
### Data Instances
Three tsv files.
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Data Splits
- wnli-train-es.csv: 636 sentence pairs
- wnli-dev-es.csv: 72 sentence pairs
- wnli-test-shuffled-es.csv: 147 sentence pairs
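The splits are plain tsv files, so they can be read with the standard `csv` module. A sketch, assuming GLUE-style column headers (`sentence1`, `sentence2`, `label`); adjust the names if the actual header differs:

```python
import csv

LABELS = {"0": "neutral", "1": "entailment"}  # label meanings per the field list above

def read_wnli(lines):
    """Parse wnli-es tsv lines into (sentence1, sentence2, label_name) tuples."""
    reader = csv.DictReader(lines, delimiter="\t")
    return [(row["sentence1"], row["sentence2"], LABELS[row["label"]])
            for row in reader]
```

With a file on disk, pass the open handle, e.g. `read_wnli(open("wnli-dev-es.csv", encoding="utf-8"))`.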
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Spanish.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish, commissioned by [BSC TeMU](https://temu.bsc.es/) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish.
#### Who are the annotators?
The translation was commissioned to a professional translation agency.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing Information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| [
-0.13305483758449554,
-0.4404597580432892,
0.23817700147628784,
0.36158469319343567,
-0.10434237122535706,
0.02076301909983158,
-0.4069247841835022,
-0.6644535660743713,
0.4125915765762329,
0.32551631331443787,
-0.6812940835952759,
-0.8213656544685364,
-0.6952944397926331,
0.16322928667068... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
elenanereiss/german-ler | elenanereiss | 2022-10-26T08:32:17Z | 77 | 9 | dataset-of-legal-documents | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"ner, named entity recognition... | 2022-10-26T08:32:17Z | 2022-10-18T11:10:32.000Z | 2022-10-18T11:10:32 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: dataset-of-legal-documents
pretty_name: German Named Entity Recognition in Legal Documents
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- ner, named entity recognition, legal ner, legal texts, label classification
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
---
# Dataset Card for "German LER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elenanereiss/Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition)
- **Paper:** [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf)
- **Point of Contact:** [elena.leitner@dfki.de](elena.leitner@dfki.de)
### Dataset Summary
A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the `BIO` tagging scheme.
The dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes (`ner_tags`) and another one with a set of 7 coarse-grained classes (`ner_coarse_tags`). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities; the rest are persons, locations and organizations (25.66 %).

For more details see [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf).
### Supported Tasks and Leaderboards
- **Tasks:** Named Entity Recognition
- **Leaderboards:**
### Languages
German
## Dataset Structure
### Data Instances
```python
{
'id': '1',
'tokens': ['Eine', 'solchermaßen', 'verzögerte', 'oder', 'bewusst', 'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen', 'schließt', '§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38],
'ner_coarse_tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14]
}
```
### Data Fields
```python
{
'id': Value(dtype='string', id=None),
'tokens': Sequence(feature=Value(dtype='string', id=None),
length=-1, id=None),
'ner_tags': Sequence(feature=ClassLabel(num_classes=39,
names=['B-AN',
'B-EUN',
'B-GRT',
'B-GS',
'B-INN',
'B-LD',
'B-LDS',
'B-LIT',
'B-MRK',
'B-ORG',
'B-PER',
'B-RR',
'B-RS',
'B-ST',
'B-STR',
'B-UN',
'B-VO',
'B-VS',
'B-VT',
'I-AN',
'I-EUN',
'I-GRT',
'I-GS',
'I-INN',
'I-LD',
'I-LDS',
'I-LIT',
'I-MRK',
'I-ORG',
'I-PER',
'I-RR',
'I-RS',
'I-ST',
'I-STR',
'I-UN',
'I-VO',
'I-VS',
'I-VT',
'O'],
id=None),
length=-1,
id=None),
'ner_coarse_tags': Sequence(feature=ClassLabel(num_classes=15,
names=['B-LIT',
'B-LOC',
'B-NRM',
'B-ORG',
'B-PER',
'B-REG',
'B-RS',
'I-LIT',
'I-LOC',
'I-NRM',
'I-ORG',
'I-PER',
'I-REG',
'I-RS',
'O'],
id=None),
length=-1,
id=None)
}
```
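The `ner_tags` and `ner_coarse_tags` values are integer indices into the `ClassLabel` name lists above. A minimal sketch of decoding them back to BIO strings, shown here with the 15 coarse-grained names:

```python
COARSE_NAMES = ['B-LIT', 'B-LOC', 'B-NRM', 'B-ORG', 'B-PER', 'B-REG', 'B-RS',
                'I-LIT', 'I-LOC', 'I-NRM', 'I-ORG', 'I-PER', 'I-REG', 'I-RS', 'O']

def decode_tags(tag_ids, names=COARSE_NAMES):
    """Map integer class ids back to their BIO label strings."""
    return [names[i] for i in tag_ids]
```

Applied to the instance above, `decode_tags([14, 2, 9])` yields `['O', 'B-NRM', 'I-NRM']`, i.e. the `§ 14 Abs. ...` reference is tagged as a legal norm.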
### Data Splits
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | 53384 | 6666 | 6673 |
## Dataset Creation
### Curation Rationale
Documents in the legal domain contain multiple references to named entities, especially domain-specific named entities, i. e., jurisdictions, legal institutions, etc. Legal documents are unique and differ greatly from newspaper texts. On the one hand, the occurrence of general-domain named entities is relatively rare. On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.
### Source Data
Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
#### Initial Data Collection and Normalization
From the table of [contents](http://www.rechtsprechung-im-internet.de/rii-toc.xml), 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements `Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel`. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and the metadata belonging to previous legal proceedings were deleted. Paragraph numbers were removed.
The extracted data was split into sentences, tokenised using [SoMaJo](https://github.com/tsproisl/SoMaJo) and manually annotated in [WebAnno](https://webanno.github.io/webanno/).
#### Who are the source language producers?
The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.
### Annotations
#### Annotation process
For more details see [annotation guidelines](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf) (in German).
<!-- #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Personal and Sensitive Information
A fundamental characteristic of the published decisions is that all personal information has been anonymised for privacy reasons. This affects the classes person, location and organization.
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Licensing Information
[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2003.13016,
doi = {10.48550/ARXIV.2003.13016},
url = {https://arxiv.org/abs/2003.13016},
author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A Dataset of German Legal Documents for Named Entity Recognition},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
| [
-0.7822085022926331,
-0.5533791184425354,
0.2689095437526703,
-0.010308031924068928,
-0.32737472653388977,
-0.08060919493436813,
-0.38647931814193726,
-0.5032247304916382,
0.4629453122615814,
0.405778706073761,
-0.534294843673706,
-1.0850528478622437,
-0.6282128691673279,
0.342535257339477... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/biorelex | bigbio | 2022-12-22T15:44:10Z | 77 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:44:10Z | 2022-11-13T22:07:24.000Z | 2022-11-13T22:07:24 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BioRelEx
homepage: https://github.com/YerevaNN/BioRelEx
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioRelEx
## Dataset Description
- **Homepage:** https://github.com/YerevaNN/BioRelEx
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE,COREF
BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at https://github.com/YerevaNN/BioRelEx/releases. Another 404 sentences are for
testing, which are kept private at this CodaLab competition
https://competitions.codalab.org/competitions/20468. All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B)
## Citation Information
```
@inproceedings{khachatrian2019biorelex,
title = "{B}io{R}el{E}x 1.0: Biological Relation Extraction Benchmark",
author = "Khachatrian, Hrant and
Nersisyan, Lilit and
Hambardzumyan, Karen and
Galstyan, Tigran and
Hakobyan, Anna and
Arakelyan, Arsen and
Rzhetsky, Andrey and
Galstyan, Aram",
booktitle = "Proceedings of the 18th BioNLP Workshop and Shared Task",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5019",
doi = "10.18653/v1/W19-5019",
pages = "176--190"
}
```
| [
-0.5455440878868103,
-0.4069489538669586,
0.26421549916267395,
0.1672934740781784,
-0.4045923054218292,
-0.18744266033172607,
-0.18791672587394714,
-0.7645180225372314,
0.3460372984409332,
0.35034310817718506,
-0.6055887341499329,
-0.7960687875747681,
-0.5833117365837097,
0.690273761749267... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hzwang2/cats_image | hzwang2 | 2023-04-20T01:42:18Z | 77 | 0 | null | [
"region:us"
] | 2023-04-20T01:42:18Z | 2023-04-17T06:10:27.000Z | 2023-04-17T06:10:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
under-tree/prepared-yagpt | under-tree | 2023-05-18T12:26:50Z | 77 | 1 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | 2023-05-18T12:26:50Z | 2023-05-18T12:17:21.000Z | 2023-05-18T12:17:21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 42680359.78397168
num_examples: 53550
- name: test
num_bytes: 7532625.216028317
num_examples: 9451
download_size: 25066987
dataset_size: 50212985
task_categories:
- conversational
- text-generation
language:
- ru
pretty_name: Dialogue Dataset for YAGPT ChatBot
size_categories:
- 10K<n<100K
---
# Dataset Card for "prepared-yagpt"
## Short Description
This dataset is intended for training chatbots in Russian.
It consists of many dialogues, allowing you to train your model to answer user prompts.
## Notes
1. Special tokens
- history, speaker1, speaker2 (history can be optionally removed, i.e. substituted on empty string)
2. Dataset is based on
- [Matreshka](https://huggingface.co/datasets/zjkarina/matreshka)
- [Yandex-Q](https://huggingface.co/datasets/its5Q/yandex-q)
- [Diasum](https://huggingface.co/datasets/bragovo/diasum)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.38716304302215576,
-0.8492510318756104,
0.06487737596035004,
-0.0005728653050027788,
-0.3543834686279297,
0.032645612955093384,
-0.2968641221523285,
0.10839059948921204,
0.3730628192424774,
0.49816033244132996,
-1.0123542547225952,
-0.5621415376663208,
-0.24414846301078796,
-0.167960122... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zelros/insurance-fr | zelros | 2023-10-21T13:19:38Z | 77 | 0 | null | [
"insurance",
"region:us"
] | 2023-10-21T13:19:38Z | 2023-09-01T07:16:26.000Z | 2023-09-01T07:16:26 | ---
tags:
- insurance
---
This dataset contains question/answer pairs from a French home insurance (MRH: Multi-Risk Home Insurance).
It comes from structuring the following open sources:
- https://www.mma.fr/assurance-habitation.html
- https://cap.mma.fr/files/live/sites/mmafr/files/documents-cg/cg410/Habitation_MMA_410p.pdf
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
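The retrieval-augmented QA use case above can be illustrated with a minimal keyword-overlap retriever; the contract clauses and the scoring below are illustrative placeholders, not taken from the actual dataset:

```python
def retrieve(question, passages, top_k=1):
    """Rank passages by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

# Illustrative contract clauses -- placeholders, not real dataset entries
passages = [
    "Water damage caused by a burst pipe is covered up to the insured amount.",
    "Theft of bicycles is only covered when the bicycle was locked.",
]
best = retrieve("Is a burst pipe covered?", passages)
```

A production RAG pipeline would replace the overlap score with dense embeddings and pass the retrieved clause to an LLM, but the retrieve-then-answer structure stays the same.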
Other datasets of the same kind (but on other types of insurance, other languages, or from different sources) are also available - or will be available soon - and are part of this research effort. | [
-0.09301181882619858,
-0.7904888391494751,
0.15806996822357178,
0.11488655209541321,
0.23103205859661102,
-0.1283571720123291,
0.18299546837806702,
-0.3308487832546234,
0.24527227878570557,
1.070755958557129,
-0.5165072083473206,
-0.49608638882637024,
-0.38175997138023376,
-0.1600608974695... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MouseTrap/maow_maow_dataset | MouseTrap | 2023-09-02T04:48:35Z | 77 | 0 | null | [
"region:us"
] | 2023-09-02T04:48:35Z | 2023-09-02T04:47:13.000Z | 2023-09-02T04:47:13 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DarrenLo/ygo_tcg_test | DarrenLo | 2023-11-03T07:47:59Z | 77 | 0 | null | [
"region:us"
] | 2023-11-03T07:47:59Z | 2023-09-21T18:33:07.000Z | 2023-09-21T18:33:07 | configs:
- config_name: default
data_files:
- split: train
path: "strain.json"
- split: test
path: "stets.json"
- split: validation
path: "sval.json" | [
-0.7485782504081726,
-0.6076139807701111,
0.272984117269516,
0.22668997943401337,
-0.6794007420539856,
0.315927118062973,
-0.028012720867991447,
0.19432364404201508,
0.34451213479042053,
0.5802699327468872,
-0.7865074872970581,
-0.44169166684150696,
-0.5942862629890442,
0.42211446166038513... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juraj-juraj/doc_gen | juraj-juraj | 2023-11-27T18:34:33Z | 77 | 0 | null | [
"task_categories:text-generation",
"language:en",
"license:mit",
"region:us"
] | 2023-11-27T18:34:33Z | 2023-09-28T19:51:32.000Z | 2023-09-28T19:51:32 | ---
language:
- en
license: mit
task_categories:
- text-generation
pretty_name: py_code_doc
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: docstring
dtype: string
- name: function
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 270465917
num_examples: 313366
- name: validation
num_bytes: 763140
num_examples: 1000
- name: test
num_bytes: 878385
num_examples: 1000
download_size: 107450380
dataset_size: 272107442
---
# Code documentation dataset
This dataset aims to leverage language models to automatically generate documentation for undocumented Python code. The dataset consists of pairs of code and its documentation.
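One way to consume the pairs is to flatten each row into an instruction-style training prompt; the field names below (`function`, `docstring`) follow the schema declared above, while the prompt template itself is just one hypothetical choice:

```python
def to_prompt(example):
    """Build an instruction-style training prompt from one dataset row.

    The "function" and "docstring" field names follow the dataset schema;
    the prompt template itself is a hypothetical formatting choice.
    """
    return (
        "### Function:\n" + example["function"].strip() + "\n"
        "### Docstring:\n" + example["docstring"].strip()
    )

# A toy row with the same shape as the dataset's examples
row = {
    "function": "def add(a, b):\n    return a + b",
    "docstring": "Return the sum of a and b.",
}
prompt = to_prompt(row)
```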
The content of the dataset is created from the CodeSearchNet dataset. | [
-0.22042983770370483,
-0.22189098596572876,
-0.026661574840545654,
0.06046229228377342,
-0.002358001423999667,
-0.07758414000272751,
-0.20023681223392487,
0.0385301299393177,
0.19256223738193512,
0.8717818260192871,
-0.5779613852500916,
-0.5617327690124512,
0.11157727241516113,
0.225118219... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
akjindal53244/Arithmo-Data | akjindal53244 | 2023-10-22T18:14:49Z | 77 | 22 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-22T18:14:49Z | 2023-10-13T08:30:01.000Z | 2023-10-13T08:30:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: combined_MathInstruct_MetaMathQA_LilaOOD_train.json
- split: test
path: combined_MathInstruct_MetaMathQA_LilaOOD_test.json
license: apache-2.0
---
The Arithmo dataset is prepared as a combination of [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), and [lila ood](https://huggingface.co/datasets/allenai/lila/viewer/ood). Refer to the [Model Training Data](https://github.com/akjindal53244/Arithmo-Mistral-7B#model-training-data) section on the Arithmo-Mistral-7B project's GitHub page for more details.
<h2 id="References">References</h2>
```
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
@article{Yue2023mammoth,
title={MAmmoTH: Building math generalist models through hybrid instruction tuning},
author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
@article{mishra2022lila,
title={Lila: A unified benchmark for mathematical reasoning},
author={Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan},
journal={arXiv preprint arXiv:2210.17517},
year={2022}
}
```
| [
-0.42708808183670044,
-0.5902897715568542,
0.17145486176013947,
0.040449775755405426,
-0.08987315744161606,
-0.3336728513240814,
-0.11851197481155396,
-0.09321410953998566,
-0.28861087560653687,
0.5960109233856201,
-0.7219766974449158,
-0.4616623818874359,
-0.3731740117073059,
-0.063433773... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
salma-remyx/hf_test | salma-remyx | 2023-10-29T23:10:58Z | 77 | 0 | null | [
"region:us"
] | 2023-10-29T23:10:58Z | 2023-10-28T03:23:45.000Z | 2023-10-28T03:23:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cat
'1': dog
splits:
- name: train
num_bytes: 7544483.0
num_examples: 16
download_size: 7547089
dataset_size: 7544483.0
---
# Dataset Card for "hf_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7290012240409851,
-0.5811499953269958,
0.08010102808475494,
0.3266262412071228,
-0.09276947379112244,
-0.06698384881019592,
0.2784292995929718,
-0.08753052353858948,
0.7433969974517822,
0.39484354853630066,
-0.8378319144248962,
-0.6688894629478455,
-0.4715738594532013,
-0.05482970550656... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Flyfer/CBDataSetBeta1 | Flyfer | 2023-11-22T12:39:56Z | 77 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-22T12:39:56Z | 2023-11-03T13:00:48.000Z | 2023-11-03T13:00:48 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Miaosen/openai-humaneval-sky-shadow | Miaosen | 2023-11-12T09:44:37Z | 77 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"license:apache-2.0",
"code",
"arxiv:2310.19341",
"region:us"
] | 2023-11-12T09:44:37Z | 2023-11-10T09:40:36.000Z | 2023-11-10T09:40:36 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- code
size_categories:
- n<1K
---
# Shadow Humaneval dataset
This dataset is generated by GPT-4 to mimic [openai-humaneval](https://huggingface.co/datasets/openai_humaneval) dataset. Each problem of HumanEval has a corresponding shadow problem in this dataset.
This dataset can be used to check whether a code generation model suffered data leakage during its training process. You can refer to [Skywork](https://arxiv.org/pdf/2310.19341.pdf) for further details.
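A common way to run such a check is to compare a model's per-problem loss on the original HumanEval problems against their shadow counterparts: a markedly lower loss on the originals suggests they were seen during training. The sketch below is model-agnostic, and the loss values and threshold are placeholder numbers, not real measurements:

```python
from statistics import mean

def leakage_gap(orig_losses, shadow_losses):
    """Mean loss difference between shadow and original problems.

    A large positive gap hints that the originals were memorized, since
    genuinely unseen problems of equal difficulty should score similarly.
    """
    return mean(shadow_losses) - mean(orig_losses)

# Placeholder per-problem losses -- not real measurements
orig_losses = [0.8, 0.7, 0.9]
shadow_losses = [1.6, 1.5, 1.7]

gap = leakage_gap(orig_losses, shadow_losses)
suspicious = gap > 0.5  # the 0.5 threshold is an arbitrary illustration
```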
| [
-0.13762128353118896,
-0.41803205013275146,
0.40992555022239685,
-0.12517260015010834,
-0.13465002179145813,
-0.03669734671711922,
0.42978429794311523,
-0.31049901247024536,
0.08913248032331467,
0.9783324003219604,
-1.0373347997665405,
-0.4511103928089142,
-0.12484528869390488,
-0.00248230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kunishou/oasst1-chat-44k-ja | kunishou | 2023-11-12T15:54:51Z | 77 | 3 | null | [
"language:ja",
"license:apache-2.0",
"region:us"
] | 2023-11-12T15:54:51Z | 2023-11-12T07:53:04.000Z | 2023-11-12T07:53:04 | ---
license: apache-2.0
language:
- ja
---
This dataset is [oasst1-89k-ja](https://huggingface.co/datasets/kunishou/oasst1-89k-ja) converted into chat format.
Please use it when fine-tuning for multi-turn conversations (the token length per record is large, so considerable compute resources will be required).
oasst1-ja-89k Repository
https://github.com/kunishou/oasst1-89k-ja
OpenAssistant/oasst1
https://huggingface.co/datasets/OpenAssistant/oasst1 | [
-0.3111281991004944,
-0.5693143606185913,
0.32824796438217163,
0.34039103984832764,
-0.494450181722641,
-0.2617373466491699,
-0.033381301909685135,
-0.24095405638217926,
0.9502771496772766,
0.396320641040802,
-1.058834433555603,
-0.7711108922958374,
-0.48543810844421387,
0.0518343597650527... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/pascal-voc-2012 | jxie | 2023-11-16T06:41:31Z | 77 | 0 | null | [
"region:us"
] | 2023-11-16T06:41:31Z | 2023-11-15T00:45:48.000Z | 2023-11-15T00:45:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 1115392514.736
num_examples: 10582
- name: val
num_bytes: 164042133.619
num_examples: 1449
download_size: 1405650165
dataset_size: 1279434648.355
---
# Dataset Card for "pascal-voc-2012"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7361428141593933,
0.0371767021715641,
0.2323700487613678,
0.5228434801101685,
-0.1699400693178177,
-0.201532244682312,
0.4747447669506073,
-0.13012325763702393,
0.6839536428451538,
0.5068584084510803,
-0.8480147123336792,
-0.8791932463645935,
-0.5725979208946228,
-0.2911096513271332,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seonglae/nq_open-validation | seonglae | 2023-11-25T12:30:04Z | 77 | 0 | null | [
"region:us"
] | 2023-11-25T12:30:04Z | 2023-11-15T18:04:11.000Z | 2023-11-15T18:04:11 | ---
dataset_info:
- config_name: psgs_w100.dpr_nq.10_dpr-reader-single-nq-base
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: 'null'
- name: predicted
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 22463646
num_examples: 3610
download_size: 13072401
dataset_size: 22463646
- config_name: psgs_w100.dpr_nq.10_lsg-bart-base-4096-booksum.1_dpr-reader-single-nq-base
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: string
- name: predicted
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 29791480
num_examples: 3610
download_size: 15853782
dataset_size: 29791480
- config_name: psgs_w100.dpr_nq.10_resrer-bart-base.1_dpr-reader-single-nq-base
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: string
- name: predicted
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 28798929
num_examples: 3610
download_size: 17159760
dataset_size: 28798929
- config_name: psgs_w100.dpr_nq.10_resrer-bart-base.1_gpt-3.5-turbo
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: string
- name: predicted
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 28807451
num_examples: 3610
download_size: 17139973
dataset_size: 28807451
- config_name: psgs_w100.dpr_nq.10_resrer-pegasus-x.1_dpr-reader-single-nq-base
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: string
- name: predicted
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 30337257
num_examples: 3610
download_size: 17579283
dataset_size: 30337257
- config_name: psgs_w100.dpr_nq.1_dpr-reader-single-nq-base
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: 'null'
- name: predicted
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2610668
num_examples: 3610
download_size: 1748149
dataset_size: 2610668
- config_name: psgs_w100.dpr_nq.1_gpt-3.5-turbo
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: retrieved
dtype: string
- name: summary
dtype: 'null'
- name: predicted
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 2628010
num_examples: 3610
download_size: 1732708
dataset_size: 2628010
configs:
- config_name: psgs_w100.dpr_nq.10_dpr-reader-single-nq-base
data_files:
- split: train
path: psgs_w100.dpr_nq.10_dpr-reader-single-nq-base/train-*
- config_name: psgs_w100.dpr_nq.10_lsg-bart-base-4096-booksum.1_dpr-reader-single-nq-base
data_files:
- split: train
path: psgs_w100.dpr_nq.10_lsg-bart-base-4096-booksum.1_dpr-reader-single-nq-base/train-*
- config_name: psgs_w100.dpr_nq.10_resrer-bart-base.1_dpr-reader-single-nq-base
data_files:
- split: train
path: psgs_w100.dpr_nq.10_resrer-bart-base.1_dpr-reader-single-nq-base/train-*
- config_name: psgs_w100.dpr_nq.10_resrer-bart-base.1_gpt-3.5-turbo
data_files:
- split: train
path: psgs_w100.dpr_nq.10_resrer-bart-base.1_gpt-3.5-turbo/train-*
- config_name: psgs_w100.dpr_nq.10_resrer-pegasus-x.1_dpr-reader-single-nq-base
data_files:
- split: train
path: psgs_w100.dpr_nq.10_resrer-pegasus-x.1_dpr-reader-single-nq-base/train-*
- config_name: psgs_w100.dpr_nq.1_dpr-reader-single-nq-base
data_files:
- split: train
path: psgs_w100.dpr_nq.1_dpr-reader-single-nq-base/train-*
- config_name: psgs_w100.dpr_nq.1_gpt-3.5-turbo
data_files:
- split: train
path: psgs_w100.dpr_nq.1_gpt-3.5-turbo/train-*
---
# Dataset Card for "nq_open-validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42582714557647705,
-0.2055111676454544,
0.0967729315161705,
0.0993952676653862,
-0.10306303948163986,
-0.14230981469154358,
0.31530874967575073,
0.08231252431869507,
0.5413745641708374,
0.5400859117507935,
-0.8062412738800049,
-0.7814821004867554,
-0.1434973031282425,
-0.053224250674247... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sade-adrien/redpajama_v2_sample_10M | sade-adrien | 2023-11-21T03:30:57Z | 77 | 0 | null | [
"region:us"
] | 2023-11-21T03:30:57Z | 2023-11-20T22:02:53.000Z | 2023-11-20T22:02:53 | ---
dataset_info:
features:
- name: raw_content
dtype: string
- name: doc_id
dtype: string
- name: meta
dtype: string
- name: quality_signals
dtype: string
splits:
- name: train
num_bytes: 104068894016
num_examples: 10000000
download_size: 0
dataset_size: 104068894016
---
# Dataset Card for "redpajama_v2_sample_10M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5479225516319275,
-0.2924707233905792,
0.10857357084751129,
0.6089134812355042,
-0.33217647671699524,
-0.23522289097309113,
0.052731096744537354,
-0.43142926692962646,
1.029320240020752,
0.7485741972923279,
-0.846649706363678,
-0.6368100643157959,
-0.8637256622314453,
-0.287964403629303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hyperinfer/justizscrap_verkehrsunfall | hyperinfer | 2023-11-21T14:16:07Z | 77 | 0 | null | [
"region:us"
] | 2023-11-21T14:16:07Z | 2023-11-21T14:15:21.000Z | 2023-11-21T14:15:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 200301130.7274635
num_examples: 8386
- name: test
num_bytes: 50087225.27253649
num_examples: 2097
download_size: 127470636
dataset_size: 250388356.0
---
# Dataset Card for "justizscrap_verkehrsunfall"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.38344255089759827,
-0.25676873326301575,
0.30602455139160156,
0.4533189833164215,
-0.11844450235366821,
0.1108393594622612,
0.3350662887096405,
-0.15555652976036072,
0.8573423027992249,
0.8000912070274353,
-1.0709465742111206,
-0.8907369375228882,
-0.5028157234191895,
-0.298079311847686... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/blimp_classification | metaeval | 2023-01-09T10:50:25Z | 76 | 1 | null | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"cola",
"region:us"
] | 2023-01-09T10:50:25Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- acceptability-classification
language:
- en
tags:
- cola
---
Blimp with the coarse categories and recasted as a classification task (Cola format). | [
0.16146214306354523,
-0.39582470059394836,
-0.21291710436344147,
0.6120795607566833,
-0.12163777649402618,
0.09015198051929474,
-0.3233794569969177,
-0.18851469457149506,
0.36315444111824036,
0.47653067111968994,
-0.3759320080280304,
-0.2370384782552719,
-0.755017876625061,
0.0322975181043... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pmc/open_access | pmc | 2023-03-14T17:29:59Z | 76 | 12 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"license:cc-b... | 2023-03-14T17:29:59Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
- cc-by-4.0
- cc-by-sa-4.0
- cc-by-nd-4.0
- cc-by-nc-4.0
- cc-by-nc-sa-4.0
- cc-by-nc-nd-4.0
- other
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: PMC Open Access
---
# Dataset Card for PMC Open Access Subset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [PubMed Central](mailto:pubmedcentral@ncbi.nlm.nih.gov)
### Dataset Summary
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse. Not all articles in PMC are available for text mining and other reuse; many have
copyright protection. However, articles in the PMC Open Access Subset are made available under Creative Commons or
similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The
PMC Open Access Subset is one part of the PMC Article Datasets.
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Supported Tasks and Leaderboards
- Language modeling
### Languages
English (`en`).
## Dataset Structure
### Data Instances
```
{
'text': "==== Front\nPLoS BiolPLoS BiolpbioplosbiolPLoS Biology1544-91731545-7885Public Library of Science San Francisco, USA 10.1371/journal.pbio.0000005Research ArticleGenetics/Genomics/Gene TherapyInfectious DiseasesMicrobiologyPlasmodiumThe Transcriptome of the Intraerythrocytic Developmental Cycle of Plasmodium falciparum\n P. falciparum IDC TranscriptomeBozdech Zbynek \n1\nLlinás Manuel \n1\nPulliam Brian Lee \n1\nWong Edith D \n1\nZhu Jingchun \n2\nDeRisi Joseph L joe@derisilab.ucsf.edu\n1\n1Department of Biochemistry and Biophysics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America2Department of Biological and Medical Informatics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America10 2003 18 8 2003 18 8 2003 1 1 e512 6 2003 25 7 2003 Copyright: ©2003 Bozdech et al.2003This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.\nMicroarray Analysis: Genome-Scale Hypothesis Scanning \n\nMonitoring Malaria: Genomic Activity of the Parasite in Human Blood Cells \n\nPlasmodium falciparum is the causative agent of the most burdensome form of human malaria, affecting 200–300 million individuals per year worldwide. The recently sequenced genome of P. falciparum revealed over 5,400 genes, of which 60% encode proteins of unknown function. Insights into the biochemical function and regulation of these genes will provide the foundation for future drug and vaccine development efforts toward eradication of this disease. By analyzing the complete asexual intraerythrocytic developmental cycle (IDC) transcriptome of the HB3 strain of P. falciparum, we demonstrate that at least 60% of the genome is transcriptionally active during this stage. 
Our data demonstrate that this parasite has evolved an extremely specialized mode of transcriptional regulation that produces a continuous cascade of gene expression, beginning with genes corresponding to general cellular processes, such as protein synthesis, and ending with Plasmodium-specific functionalities, such as genes involved in erythrocyte invasion. The data reveal that genes contiguous along the chromosomes are rarely coregulated, while transcription from the plastid genome is highly coregulated and likely polycistronic. Comparative genomic hybridization between HB3 and the reference genome strain (3D7) was used to distinguish between genes not expressed during the IDC and genes not detected because of possible sequence variations...
'pmid': '12929205',
'accession_id': 'PMC176545',
'license': 'CC BY',
'last_updated': '2021-01-05 08:21:03',
'retracted': 'no',
'citation': 'PLoS Biol. 2003 Oct 18; 1(1):e5'
}
```
### Data Fields
- `text`: Text content.
- `pmid`: PubMed ID.
- `accession_id`: Unique identifier for a sequence record.
- `license`: License type.
- `last_updated`: Date of last update.
- `retracted`: Whether retracted or not.
- `citation`: Citation reference.
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License terms vary. Please refer to the license statement in each article for specific terms of use.
Within the PMC Open Access Subset, there are three groupings based on available license terms:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
```
PMC Open Access Subset [Internet]. Bethesda (MD): National Library of Medicine. 2003 - [cited YEAR MONTH DAY]. Available from https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| [
-0.4705865681171417,
-0.5503160357475281,
0.42440110445022583,
0.49247848987579346,
-0.21549773216247559,
-0.06327377259731293,
-0.1443694680929184,
-0.04809874668717384,
0.26067399978637695,
0.40663832426071167,
-0.6249359846115112,
-0.9704340100288391,
-0.6438432335853577,
0.478739142417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/viquiquad | projecte-aina | 2023-09-13T12:44:04Z | 76 | 1 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:2107.07903",
"arxiv:1606.0525... | 2023-09-13T12:44:04Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ViquiQuAD
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4562345#.YK41aqGxWUk
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia.
This dataset contains 3111 contexts extracted from a set of 597 high-quality original (not translated) articles in the Catalan Wikipedia "[Viquipèdia](https://ca.wikipedia.org/wiki/Portada)", and 1 to 5 questions with their answers for each fragment.
Viquipedia articles are used under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
This dataset can be used to fine-tune and evaluate extractive-QA and Language Models.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'id': 'P_66_C_391_Q1',
'title': 'Xavier Miserachs i Ribalta',
'context': "En aquesta època es va consolidar el concepte modern del reportatge fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia documental,[n. 3] pel que fa a l'abast i el concepte. El reportatge fotogràfic implica més la idea de relat: un treball que vol més dedicació de temps, un esforç d'interpretació d'una situació i que culmina en un conjunt d'imatges. Això implica, d'una banda, la reivindicació del fotògraf per opinar, fet que li atorgarà estatus d'autor; l'autor proposa, doncs, una interpretació pròpia de la realitat. D'altra banda, el consens que s'estableix entre la majoria de fotògrafs és que el vehicle natural de la imatge fotogràfica és la pàgina impresa. Això suposà que revistes com Life, Paris-Match, Stern o Época assolissin la màxima esplendor en aquest període.",
'question': 'De què es diferenciava el reportatge fotogràfic?',
'answers': [{
'text': 'del fotoperiodisme[n. 2] i de la fotografia documental',
'answer_start': 92
}]
}
```
### Data Fields
Follows [Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the Wikipedia article.
- `context` (str): Wikipedia section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
  - `text` (str): Text of the answer span.
  - `answer_start` (int): Starting character offset of the answer span within the context.
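The `answer_start` offsets are character-based, SQuAD-style. As a quick sanity check (a sketch, not part of the official dataset tooling), the sample instance above can be verified like this:

```python
# Verify that context[answer_start : answer_start + len(text)] recovers the
# answer text, using a prefix of the sample instance shown in this card.
example = {
    "context": (
        "En aquesta època es va consolidar el concepte modern del reportatge "
        "fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia "
        "documental,[n. 3] pel que fa a l'abast i el concepte."
    ),
    "answers": [
        {"text": "del fotoperiodisme[n. 2] i de la fotografia documental",
         "answer_start": 92}
    ],
}

for ans in example["answers"]:
    start = ans["answer_start"]
    span = example["context"][start : start + len(ans["text"])]
    assert span == ans["text"], (span, ans["text"])
print("all answer spans aligned")
```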
### Data Splits
- train: 11259 examples
- development: 1493 examples
- test: 1428 examples
## Dataset Creation
### Curation Rationale
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Source Data
- [Catalan Wikipedia](https://ca.wikipedia.org)
#### Initial Data Collection and Normalization
The source data are articles scraped from the [Catalan Wikipedia](https://ca.wikipedia.org) site.
From a set of high-quality, non-translation articles in the Catalan Wikipedia, 597 were randomly chosen, and from them 3111 contexts of 5 to 8 sentences were extracted. We commissioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and an extracted fragment that contains the answer were created.
For compatibility with similar datasets in other languages, we followed the existing curation guidelines as closely as possible.
#### Who are the source language producers?
Volunteers who collaborate with Catalan Wikipedia.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
#### Who are the annotators?
Annotation was commissioned to a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4562344)
### Contributions
[N/A] | [
-0.4619498550891876,
-0.6251985430717468,
0.08791939914226532,
0.46120956540107727,
-0.2787323594093323,
0.0363980308175087,
-0.19992762804031372,
-0.4453624188899994,
0.4667823612689972,
0.27246424555778503,
-0.5646094083786011,
-0.6608983874320984,
-0.34488150477409363,
0.300237625837326... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tidrael/tsl_news | Tidrael | 2022-10-10T14:23:36Z | 76 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-10-10T14:23:36Z | 2022-10-06T04:47:14.000Z | 2022-10-06T04:47:14 | ---
annotations_creators: []
language:
- en
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: bussiness-news
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Top finance news headlines from BBC News.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Sentiment label: scores below 0 are labeled negative (0) and scores above 0 are labeled positive (1).
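A minimal sketch of that thresholding rule (the raw score input is an assumption; this card does not name the score field):

```python
def to_label(score: float) -> int:
    """Map a raw sentiment score to the card's labels: 1 = positive, 0 = negative.

    Scores of exactly 0 are not specified by the card; this sketch maps them to 0.
    """
    return 1 if score > 0 else 0

print(to_label(0.35), to_label(-0.8))  # 1 0
```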
### Data Splits
The train/test split ratio is 0.9/0.1.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.5253660082817078,
-0.532231330871582,
0.08567294478416443,
0.3263111412525177,
-0.2750357687473297,
0.22359904646873474,
-0.3635888397693634,
-0.3104400038719177,
0.641435980796814,
0.5808307528495789,
-0.8921547532081604,
-1.2138534784317017,
-0.7521809935569763,
0.09710166603326797,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
duyngtr16061999/fashion_text_to_image | duyngtr16061999 | 2022-11-21T05:54:22Z | 76 | 0 | null | [
"region:us"
] | 2022-11-21T05:54:22Z | 2022-10-29T08:50:41.000Z | 2022-10-29T08:50:41 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.5107936263084412,
-0.5220706462860107,
0.17704978585243225,
0.2781125009059906,
-0.2932162880897522,
0.25936368107795715,
-0.4182778298854828,
-0.47044751048088074,
0.6145263314247131,
0.6596940755844116,
-0.9873849153518677,
-1.1626529693603516,
-0.7063561081886292,
0.27237606048583984... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinmel/simpsons-blip-captions-pil | jinmel | 2023-02-22T02:17:04Z | 76 | 0 | null | [
"region:us"
] | 2023-02-22T02:17:04Z | 2023-02-22T02:02:08.000Z | 2023-02-22T02:02:08 | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 27091297.0
num_examples: 755
download_size: 26505319
dataset_size: 27091297.0
---
# Dataset Card for "simpsons-blip-captions-pil"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.627688467502594,
-0.3282206654548645,
0.09571997821331024,
0.43920090794563293,
-0.1605338752269745,
0.2885451316833496,
-0.041038017719984055,
0.18515057861804962,
0.8367457389831543,
0.5080593824386597,
-0.9104658365249634,
-0.606299877166748,
-0.6891409754753113,
-0.05590647459030151... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nanyy1025/covid_fake_news | nanyy1025 | 2023-02-24T01:36:24Z | 76 | 2 | null | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"language:en",
"arxiv:2011.03327",
"region:us"
] | 2023-02-24T01:36:24Z | 2023-02-24T01:01:04.000Z | 2023-02-24T01:01:04 | ---
task_categories:
- text-classification
- zero-shot-classification
language:
- en
---
Constraint@AAAI2021 - COVID19 Fake News Detection in English
```
@misc{patwa2020fighting,
title={Fighting an Infodemic: COVID-19 Fake News Dataset},
author={Parth Patwa and Shivam Sharma and Srinivas PYKL and Vineeth Guptha and Gitanjali Kumari and Md Shad Akhtar and Asif Ekbal and Amitava Das and Tanmoy Chakraborty},
year={2020},
eprint={2011.03327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.22930529713630676,
-0.6465731859207153,
-0.037277162075042725,
0.4029448926448822,
-0.2532970905303955,
0.09328054636716843,
-0.03316190466284752,
-0.5955715775489807,
0.378436416387558,
0.1431342214345932,
-0.8276296854019165,
-0.5321231484413147,
-0.3973698019981384,
0.613603293895721... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sam-mosaic/dolly_chatml | sam-mosaic | 2023-07-18T00:23:37Z | 76 | 0 | null | [
"language:en",
"region:us"
] | 2023-07-18T00:23:37Z | 2023-04-14T17:51:26.000Z | 2023-04-14T17:51:26 | ---
language: en
dataset_info:
features:
- name: text
dtype: string
- name: cat
dtype: string
splits:
- name: train
num_bytes: 11767434
num_examples: 8497
download_size: 5401759
dataset_size: 11767434
---
# Dataset Card for "dolly_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3886319100856781,
-0.49461740255355835,
-0.06498264521360397,
0.2941279709339142,
-0.06309402734041214,
-0.06878244876861572,
0.30987218022346497,
-0.07633700221776962,
0.7828043699264526,
0.7011173367500305,
-0.8231070041656494,
-0.8488536477088928,
-0.690678596496582,
-0.2592371404170... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ricardosantoss/top_12_com_validacao | ricardosantoss | 2023-10-31T11:35:56Z | 76 | 0 | null | [
"region:us"
] | 2023-10-31T11:35:56Z | 2023-10-28T19:04:53.000Z | 2023-10-28T19:04:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: Nota Clinica
dtype: string
- name: Rotulos_1
sequence: string
splits:
- name: train
num_bytes: 1059135
num_examples: 1023
- name: test
num_bytes: 216746
num_examples: 200
- name: validation
num_bytes: 224956
num_examples: 200
download_size: 458849
dataset_size: 1500837
---
# Dataset Card for "top_12_com_validacao"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5069229602813721,
-0.2381971925497055,
0.1198408305644989,
0.4713278114795685,
-0.323468416929245,
-0.04012919217348099,
0.05138186737895012,
-0.17980870604515076,
0.6913301348686218,
0.430762380361557,
-0.795585036277771,
-0.9091865420341492,
-0.8758583068847656,
0.04666014388203621,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChaiML/chaiverse_lora_testing_fandom_IO | ChaiML | 2023-11-08T11:02:20Z | 76 | 0 | null | [
"region:us"
] | 2023-11-08T11:02:20Z | 2023-11-08T11:02:18.000Z | 2023-11-08T11:02:18 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 149058
num_examples: 100
download_size: 96520
dataset_size: 149058
---
# Dataset Card for "chaiverse_lora_testing_fandom_IO"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8155748248100281,
-0.4906172752380371,
-0.07300380617380142,
0.2893560230731964,
-0.17371444404125214,
-0.29654207825660706,
0.3668651580810547,
-0.19255727529525757,
1.1122490167617798,
0.6052331924438477,
-1.134089708328247,
-0.6659417152404785,
-0.40197741985321045,
-0.13020984828472... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
higgsfield/hacker_news_top_comment | higgsfield | 2023-11-14T02:01:02Z | 76 | 1 | null | [
"region:us"
] | 2023-11-14T02:01:02Z | 2023-11-14T01:34:14.000Z | 2023-11-14T01:34:14 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 77485794
num_examples: 118779
download_size: 52065753
dataset_size: 77485794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hacker_news_top_comment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5768261551856995,
-0.42991024255752563,
0.2189185619354248,
0.383535772562027,
-0.39238718152046204,
0.2189272940158844,
0.17513369023799896,
0.01880333572626114,
0.884326159954071,
0.5269521474838257,
-0.7863020896911621,
-0.6675485968589783,
-0.6207143068313599,
-0.257663369178772,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nguyenphuthien/vietnamese_ultrafeedback_binarized | nguyenphuthien | 2023-11-16T08:49:01Z | 76 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:vi",
"license:mit",
"region:us"
] | 2023-11-16T08:49:01Z | 2023-11-15T09:16:44.000Z | 2023-11-15T09:16:44 | ---
license: mit
task_categories:
- conversational
- text-generation
language:
- vi
pretty_name: Vietnamese UltraFeedback Binarized
size_categories:
- 10K<n<100K
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sijuade/cats_dogs_birds_latents | Sijuade | 2023-11-18T22:50:46Z | 76 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-18T22:50:46Z | 2023-11-18T22:49:49.000Z | 2023-11-18T22:49:49 | ---
license: mit
dataset_info:
features:
- name: latent
sequence:
sequence:
sequence: float32
- name: noised_latents
sequence:
sequence:
sequence: float32
- name: noise
sequence:
sequence:
sequence:
sequence: float32
- name: timesteps
dtype: float64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 677448192
num_examples: 13344
download_size: 683560149
dataset_size: 677448192
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jmbrito/b3-historical-quotes | jmbrito | 2023-11-20T15:07:52Z | 76 | 0 | null | [
"size_categories:1M<n<10M",
"license:mit",
"finance",
"b3",
"quotes",
"historical",
"region:us"
] | 2023-11-20T15:07:52Z | 2023-11-20T07:00:53.000Z | 2023-11-20T07:00:53 | ---
license: mit
tags:
- finance
- b3
- quotes
- historical
pretty_name: B3 Historical Quotes
size_categories:
- 1M<n<10M
---
# B3 Historical Quotes
<!-- Provide a quick summary of the dataset. -->
This dataset is a collection of historical quotes from the Brazilian stock market (B3).
It contains historical quotes for all stocks in the country from Jan/2015 until Oct/2023.
## Dataset Details
All the data was retrieved as is from [B3 Historical Data](https://www.b3.com.br/en_us/market-data-and-indices/data-services/market-data/historical-data/equities/historical-quotes/)
and parsed to CSV. The columns are the same as the ones from the original content.
If you need more information about the columns, it can be found in the [official B3 documentation](https://www.b3.com.br/en_us/market-data-and-indices/data-services/market-data/historical-data/equities/historical-quote-data/).
0.056550126522779465,
-0.6415448188781738,
0.15932704508304596,
0.33360621333122253,
-0.39227935671806335,
0.148286372423172,
0.29831016063690186,
-0.266692578792572,
0.2132623791694641,
0.7837847471237183,
-0.4308602213859558,
-0.811988115310669,
-0.43818536400794983,
-0.06796876341104507... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roskoN/dailydialog | roskoN | 2021-08-06T14:14:18Z | 75 | 1 | null | [
"region:us"
] | 2021-08-06T14:14:18Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
The data is based on the original distribution ([link to original website](http://yanran.li/dailydialog)) ([link to paper](https://aclanthology.org/I17-1099/)).
It is created as a convenience to enable faster prototyping.
# License
DailyDialog dataset is licensed under CC BY-NC-SA 4.0.
If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. Any third-party annotation is welcome. Note that the dataset may not be used for commercial purposes.
-0.2823207974433899,
-0.6556082963943481,
0.604505181312561,
0.3819909691810608,
-0.19545738399028778,
0.09075422585010529,
0.03429069742560387,
-0.3802712559700012,
0.2028656005859375,
0.8541926145553589,
-1.1558865308761597,
-0.6315131187438965,
-0.17018528282642365,
0.3407289981842041,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bingsu/KSS_Dataset | Bingsu | 2022-07-02T00:10:10Z | 75 | 5 | null | [
"task_categories:text-to-speech",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-07-02T00:10:10Z | 2022-04-19T06:59:21.000Z | 2022-04-19T06:59:21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Korean Single Speaker Speech Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-to-speech
task_ids: []
---
## Dataset Description
- **Homepage:** [Korean Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset)
- **Repository:** [Kyubyong/kss](https://github.com/Kyubyong/kss)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
# Description of the original author
### KSS Dataset: Korean Single speaker Speech Dataset
KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To my best knowledge, this is the first publicly available speech dataset for Korean.
### File Format
Each line in `transcript.v.1.3.txt` is delimited by `|` into six fields.
- A. Audio file path
- B. Original script
- C. Expanded script
- D. Decomposed script
- E. Audio duration (seconds)
- F. English translation
e.g.,
1/1_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.
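A minimal parsing sketch (not part of the original release); the dictionary keys mirror the feature names used in this Hugging Face version of the dataset:

```python
# Split a transcript line into its six "|"-delimited fields (A–F).
line = ("1/1_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다."
        "|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.")

path, original, expanded, decomposed, duration, translation = line.split("|")
record = {
    "audio_path": path,                  # A. audio file path
    "original_script": original,         # B. original script
    "expanded_script": expanded,         # C. expanded script
    "decomposed_script": decomposed,     # D. decomposed script
    "duration": float(duration),         # E. audio duration in seconds
    "english_translation": translation,  # F. English translation
}
print(record["audio_path"], record["duration"])
```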
### Specification
- Audio File Type: wav
- Total Running Time: 12+ hours
- Sample Rate: 44,100 Hz (44.1 kHz)
- Number of Audio Files: 12,853
- Sources
  - [1] [Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.](https://www.amazon.com/500-Basic-Korean-Verbs-Comprehensive/dp/0804846057/ref=sr_1_1?s=books&ie=UTF8&qid=1522911616&sr=1-1&keywords=kyubyong+park)
  - [2] [Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.](http://www.hanbooks.com/500bakoad.html)
  - [3] [Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.](https://www.amazon.com/Essential-Korean-Vocabulary-Phrases-Fluently/dp/0804843252/ref=sr_1_3?s=books&ie=UTF8&qid=1522911806&sr=1-3&keywords=kyubyong+park)
  - [4] [Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.](https://www.amazon.com/Tuttle-Learners-Korean-English-Dictionary-Essential/dp/0804841500/ref=sr_1_8?s=books&ie=UTF8&qid=1522911806&sr=1-8&keywords=kyubyong+park)
### License
NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this.
### Citation
If you want to cite KSS Dataset, please refer to this:
Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018
### Reference
Check out [this](https://github.com/Kyubyong/kss) for a project using this KSS Dataset.
### Contact
You can contact me at kbpark.linguist@gmail.com.
April, 2018.
Kyubyong Park
### Dataset Summary
12,853 Korean audio files with transcription.
### Supported Tasks and Leaderboards
text-to-speech
### Languages
Korean
## Dataset Structure
### Data Instances
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KSS_Dataset")
>>> dataset["train"].features
{'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None),
'original_script': Value(dtype='string', id=None),
'expanded_script': Value(dtype='string', id=None),
'decomposed_script': Value(dtype='string', id=None),
'duration': Value(dtype='float32', id=None),
'english_translation': Value(dtype='string', id=None)}
```
```python
>>> dataset["train"][0]
{'audio': {'path': None,
'array': array([ 0.00000000e+00, 3.05175781e-05, -4.57763672e-05, ...,
0.00000000e+00, -3.05175781e-05, -3.05175781e-05]),
'sampling_rate': 44100},
'original_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'expanded_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'decomposed_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'duration': 3.5,
'english_translation': 'He seemed to be pretending to be okay.'}
```
### Data Splits
| | train |
|---------------|------:|
| # of examples | 12853 | | [
-0.19560711085796356,
-0.38538798689842224,
0.07867148518562317,
0.4378615617752075,
-0.4287244379520416,
-0.02281286008656025,
-0.454788476228714,
-0.013731252402067184,
0.4472510516643524,
0.46037185192108154,
-0.5481445789337158,
-0.8751921653747559,
-0.4349059760570526,
0.0234946962445... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FinanceInc/auditor_sentiment | FinanceInc | 2022-07-21T19:03:51Z | 75 | 12 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-07-21T19:03:51Z | 2022-07-21T18:25:47.000Z | 2022-07-21T18:25:47 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
paperswithcode_id: null
pretty_name: Auditor_Sentiment
---
# Dataset Card for Auditor Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
Auditor review sentiment collected by News Department
- **Point of Contact:**
Talked to COE for Auditing, currently sue@demo.org
### Dataset Summary
Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.
### Supported Tasks and Leaderboards
Sentiment Classification
### Languages
English
## Dataset Structure
### Data Instances
```
"sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
```
### Data Fields
- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)
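The string-to-integer label mapping above can be expressed as a plain dictionary; a minimal sketch (the mapping follows the (2)/(1)/(0) codes listed above, the helper names are illustrative):

```python
# Map the string labels used in the dataset to the integer codes
# listed above: 'positive' -> 2, 'neutral' -> 1, 'negative' -> 0.
LABEL2ID = {"positive": 2, "neutral": 1, "negative": 0}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode_label(label: str) -> int:
    """Return the integer class id for a label string."""
    return LABEL2ID[label]

example = {
    "sentence": "Pharmaceuticals group Orion Corp reported a fall in its "
                "third-quarter earnings that were hit by larger expenditures "
                "on R&D and marketing .",
    "label": "negative",
}
print(encode_label(example["label"]))  # 0
```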
### Data Splits
A train/test split was created randomly with a 75/25 ratio
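A random 75/25 split like the one described can be reproduced with the standard library; a sketch, assuming the data is held as a list of (sentence, label) rows and using an arbitrary seed:

```python
import random

def train_test_split(rows, test_fraction=0.25, seed=42):
    """Shuffle rows deterministically and split them into train/test."""
    rng = random.Random(seed)
    shuffled = rows[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

# Toy rows standing in for the 4840 annotated sentences.
rows = [(f"sentence {i}", "neutral") for i in range(4840)]
train, test = train_test_split(rows)
print(len(train), len(test))  # 3630 1210
```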
## Dataset Creation
### Curation Rationale
To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment models achieved only 70% F1; this dataset was an attempt to improve upon that performance.
### Source Data
#### Initial Data Collection and Normalization
The corpus used in this paper consists of English news reports.
#### Who are the source language producers?
The source data was written by various auditors.
### Annotations
#### Annotation process
This release of the auditor reviews covers a collection of 4,840
sentences. The selected collection of phrases was annotated by 16 people with
adequate background knowledge of financial markets. The subset here is where inter-annotator agreement was greater than 75%.
#### Who are the annotators?
They were pulled from the SME list; names are held by sue@demo.org
### Personal and Sensitive Information
There is no personal or sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
All annotators were from the same institution, so inter-annotator agreement
should be interpreted with this taken into account.
### Licensing Information
License: Demo.Org Proprietary - DO NOT SHARE
This dataset is based on the [financial phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset. | [
-0.4669879078865051,
-0.2537909746170044,
-0.07126914709806442,
0.4227077066898346,
-0.4878196120262146,
-0.02527777850627899,
-0.17817460000514984,
-0.4451294541358948,
0.46132946014404297,
0.4834480881690979,
-0.5235334634780884,
-0.8418287038803101,
-0.6370823979377747,
0.32017281651496... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juancavallotti/multilingual-gec | juancavallotti | 2023-01-06T18:59:59Z | 75 | 3 | null | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:fr",
"language:de",
"license:apache-2.0",
"grammar",
"gec",
"multi language",
"language detection",
"region:us"
] | 2023-01-06T18:59:59Z | 2023-01-06T16:07:20.000Z | 2023-01-06T16:07:20 | ---
author: Juan Alberto López Cavallotti
date: Jan 6, 2023
license: apache-2.0
task_categories:
- translation
language:
- en
- es
- fr
- de
tags:
- grammar
- gec
- multi language
- language detection
pretty_name: Multi Lingual Grammar Error Correction Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for Multilingual Grammar Error Correction
## Dataset Description
- **Homepage:** https://juancavallotti.com
- **Paper:** https://blog.juancavallotti.com/2023/01/06/training-a-multi-language-grammar-error-correction-system/
- **Point of Contact:** Juan Alberto López Cavallotti
### Dataset Summary
This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German.
This dataset was developed as a component for the [Squidigies](https://squidgies.app/) platform.
### Supported Tasks and Leaderboards
* **Grammar Error Correction:** By appending the prefix *fix grammar:* to the prompt.
* **Language Detection:** By appending the prefix: *language:* to the prompt.
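The two task prefixes can be combined with an input sentence to build model prompts; a minimal sketch (the prefix strings come from the list above, the helper names are illustrative):

```python
def gec_prompt(sentence: str) -> str:
    """Build a prompt for the grammar-correction task."""
    return f"fix grammar: {sentence}"

def lang_prompt(sentence: str) -> str:
    """Build a prompt for the language-detection task."""
    return f"language: {sentence}"

print(gec_prompt("She go to school every days."))
# fix grammar: She go to school every days.
print(lang_prompt("¿Dónde está la biblioteca?"))
# language: ¿Dónde está la biblioteca?
```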
### Languages
* English
* Spanish
* French
* German
## Dataset Structure
### Data Instances
The dataset contains the following instances for each language:
* German 32282 sentences.
* English 51393 sentences.
* Spanish 67672 sentences.
* French 67157 sentences.
### Data Fields
* `lang`: The language of the sentence
* `sentence`: The original sentence.
* `modified`: The corrupted sentence.
* `transformation`: The primary transformation used by the synthetic data generator.
* `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator.
### Data Splits
* `train`: There isn't a specific split defined. I recommend using 1k sentences sampled randomly from each language, combined with the SacreBleu metric.
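The recommended evaluation subset can be drawn by sampling 1,000 sentences per language; a sketch, assuming each row carries the `lang` field described above (the function name and seed are illustrative):

```python
import random
from collections import defaultdict

def sample_eval_split(rows, per_language=1000, seed=0):
    """Randomly pick up to `per_language` rows for each value of `lang`."""
    rng = random.Random(seed)
    by_lang = defaultdict(list)
    for row in rows:
        by_lang[row["lang"]].append(row)
    sample = []
    for lang, group in sorted(by_lang.items()):
        rng.shuffle(group)
        sample.extend(group[:per_language])
    return sample

# Toy data standing in for the real dataset rows.
rows = [{"lang": lang, "sentence": f"{lang} sentence {i}"}
        for lang in ("de", "en", "es", "fr") for i in range(1500)]
eval_rows = sample_eval_split(rows)
print(len(eval_rows))  # 4000
```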
## Dataset Creation
### Curation Rationale
This dataset was generated synthetically through code, with the help of information about common grammar errors harvested from the internet.
### Source Data
#### Initial Data Collection and Normalization
The source grammatical sentences come from various open-source datasets, such as Tatoeba.
#### Who are the source language producers?
* Juan Alberto López Cavallotti
### Annotations
#### Annotation process
The annotation is automatic and produced by the generation script.
#### Who are the annotators?
* Data generation script by Juan Alberto López Cavallotti
### Other Known Limitations
The dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results.
## Additional Information
### Dataset Curators
* Juan Alberto López Cavallotti
### Licensing Information
This dataset is distributed under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0)
### Citation Information
Please mention this original dataset and the author **Juan Alberto López Cavallotti**
### Contributions
* Juan Alberto López Cavallotti | [
-0.08026336878538132,
-0.7161480784416199,
0.233223095536232,
0.6662468314170837,
0.17768079042434692,
-0.024233488366007805,
-0.369174599647522,
-0.14179979264736176,
0.305961549282074,
0.447404682636261,
-0.7939741611480713,
-0.6738381385803223,
-0.525335431098938,
0.6562647223472595,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marriamaslova/toxic_dvach | marriamaslova | 2023-04-30T11:08:42Z | 75 | 0 | null | [
"task_categories:text-classification",
"language:ru",
"region:us"
] | 2023-04-30T11:08:42Z | 2023-04-30T11:00:31.000Z | 2023-04-30T11:00:31 | ---
task_categories:
- text-classification
language:
- ru
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/arguana-pl-qrels | clarin-knext | 2023-06-07T08:16:24Z | 75 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:16:24Z | 2023-06-06T22:13:33.000Z | 2023-06-06T22:13:33 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209915816783905,
-0.9029768109321594,
0.5094643235206604,
0.2354193478822708,
-0.3185211718082428,
-0.1491904854774475,
-0.16673950850963593,
-0.4962919354438782,
-0.018960798159241676,
0.4112257659435272,
-0.5503100752830505,
-0.691356897354126,
-0.4166182279586792,
-0.048304602503776... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChanceFocus/flare-sm-bigdata | ChanceFocus | 2023-06-25T18:15:36Z | 75 | 1 | null | [
"region:us"
] | 2023-06-25T18:15:36Z | 2023-06-25T17:55:54.000Z | 2023-06-25T17:55:54 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 18720287
num_examples: 4897
- name: valid
num_bytes: 1278834
num_examples: 798
- name: test
num_bytes: 2379111
num_examples: 1472
download_size: 11003337
dataset_size: 22378232
---
# Dataset Card for "flare-sm-bigdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7503221035003662,
-0.2178773283958435,
0.2161424607038498,
0.27536189556121826,
-0.1541137844324112,
0.19127455353736877,
0.09874222427606583,
-0.17289431393146515,
1.1696150302886963,
0.4717659652233124,
-0.8565804958343506,
-0.5967101454734802,
-0.5409201979637146,
-0.2313903719186782... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sharmaarushi17/HPCPerfOpt-MCQA | sharmaarushi17 | 2023-11-21T03:57:13Z | 75 | 0 | null | [
"license:cc",
"region:us"
] | 2023-11-21T03:57:13Z | 2023-07-03T19:54:08.000Z | 2023-07-03T19:54:08 | ---
license: cc
pretty_name: HPCPerfOpt (HPC Performance Optimization Benchmark) # Example: SQuAD
# configs:
# - mcq-single
# - mcq-multiple
# - rodinia-chatgpt-mcq
# dataset_info:
# # features:
# # - name: {feature_name_0} # Example: id
# # dtype: {feature_dtype_0} # Example: int32
# # - name: {feature_name_1} # Example: text
# # dtype: {feature_dtype_1} # Example: string
# # - name: {feature_name_2} # Example: image
# # dtype: {feature_dtype_2} # Example: image
# # Example for SQuAD:
# # - name: id
# # dtype: string
# # - name: title
# # dtype: string
# # - name: context
# # dtype: string
# # - name: question
# # dtype: string
# # - name: answers
# # sequence:
# # - name: text
# # dtype: string
# # - name: answer_start
# # dtype: int32
# config_name: mcq-single # Example for glue: sst2
# splits:
# - name: test # Example: train
# # num_bytes: {split_num_bytes_0} # Example for SQuAD: 79317110
# # num_examples: {split_num_examples_0} # Example for SQuAD: 87599
# # download_size: {dataset_download_size} # Example for SQuAD: 35142551
# # dataset_size: {dataset_size} # Example for SQuAD: 89789763
# # - config_name: mcq-multiple
# # data_files:
# # - split: test
# # path: "mcq-multiple.csv"
# # - config_name: rodinia-chatgpt
# # data_files:
# # - split: test
# # path: "rodinia-chatgpt-mcq.csv"
# task_categories:
# - question-answering
# tags:
# - code
# size_categories:
# - n<1K
---
This dataset contains multiple-choice question-answer pairs.
There are 3 test files, separated by how they were created:
- `test1.csv`: manually collected from tutorials and similar sources
- `test2.csv`: scraped from the documentation of the Codee profiling tool
- `test3.csv`: ChatGPT-generated MCQs (format still needs updating and answers need to be randomized)
-0.6284098029136658,
-0.9370545148849487,
-0.016870751976966858,
0.10909927636384964,
0.04528691992163658,
0.21409641206264496,
0.12418073415756226,
0.04649979993700981,
0.29158321022987366,
0.9807033538818359,
-0.7988091707229614,
-0.18544219434261322,
0.016783658415079117,
0.253493458032... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
janPiljan/SaGIS | janPiljan | 2023-07-15T03:19:03Z | 75 | 1 | null | [
"task_categories:table-question-answering",
"size_categories:n<1K",
"language:en",
"license:mit",
"chemistry",
"biology",
"medical",
"general",
"region:us"
] | 2023-07-15T03:19:03Z | 2023-07-06T23:18:48.000Z | 2023-07-06T23:18:48 | ---
license: mit
task_categories:
- table-question-answering
language:
- en
tags:
- chemistry
- biology
- medical
- general
pretty_name: The Scientific and General Information (Data)Set
size_categories:
- n<1K
---
SaGIS: The Scientific and General Information (Data)Set.
The information stored in the dataset was generated by OpenAI GPT 3.5-Turbo, Google PaLM, and Anthropic Claude (2). It may not be entirely factual.
-0.5201176404953003,
-0.4979262351989746,
0.286735475063324,
-0.42842987179756165,
-0.03810033202171326,
-0.2273951768875122,
-0.15953810513019562,
-0.3088381588459015,
0.550703227519989,
0.49115660786628723,
-0.7134056687355042,
-0.12213657796382904,
-0.6182665824890137,
-0.19541205465793... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shortbread/tickers | shortbread | 2023-11-02T14:58:21Z | 75 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"finance",
"region:us"
] | 2023-11-02T14:58:21Z | 2023-07-22T01:11:35.000Z | 2023-07-22T01:11:35 | ---
language:
- en
tags:
- finance
size_categories:
- 1K<n<10K
last_updated:
2023-07-20
---
Tickers
=======
| [
0.22545133531093597,
-0.16848821938037872,
0.4212608337402344,
0.3058108687400818,
-0.6629297733306885,
0.020885592326521873,
0.6687055230140686,
-0.547355592250824,
1.0095900297164917,
0.5234196186065674,
-0.1867983341217041,
-0.21067951619625092,
-0.7927426695823669,
-0.47165682911872864... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VanoInvestigations/BOE | VanoInvestigations | 2023-10-31T14:58:41Z | 75 | 0 | null | [
"region:us"
] | 2023-10-31T14:58:41Z | 2023-08-30T10:21:43.000Z | 2023-08-30T10:21:43 | ---
dataset_info:
features:
- name: boe_date_publication
dtype: string
- name: boe_previous
dtype: string
- name: boe_id
dtype: string
- name: boe_title
dtype: string
- name: boe_soup_xml
dtype: string
- name: tweet_date
dtype: string
- name: boe_text_cleaned
dtype: string
- name: tweet_original
dtype: string
- name: boe_alert
sequence: string
- name: boe_category
dtype: string
- name: boe_departament
dtype: string
- name: tweet_text_cleaned
dtype: string
- name: boe_subsequent
dtype: string
- name: boe_materials
sequence: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 179564833
num_examples: 2867
- name: validation
num_bytes: 19448449
num_examples: 392
- name: test
num_bytes: 22514673
num_examples: 389
download_size: 84281867
dataset_size: 221527955
---
# Dataset Card for "BOE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7090210914611816,
-0.32519420981407166,
0.12040463089942932,
0.2825080454349518,
-0.2404872179031372,
-0.12047968804836273,
0.40098831057548523,
-0.3373876214027405,
0.8868217468261719,
0.5291041731834412,
-0.705322265625,
-0.9171484112739563,
-0.5229863524436951,
-0.22375042736530304,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_tiiuae__falcon-180B | open-llm-leaderboard | 2023-10-24T10:18:04Z | 75 | 1 | null | [
"region:us"
] | 2023-10-24T10:18:04Z | 2023-09-05T08:24:35.000Z | 2023-09-05T08:24:35 | ---
pretty_name: Evaluation run of tiiuae/falcon-180B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tiiuae/falcon-180B](https://huggingface.co/tiiuae/falcon-180B) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 66 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 32 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-180B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T10:17:51.759984](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-180B/blob/main/results_2023-10-24T10-17-51.759984.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0028313758389261743,\n\
\ \"em_stderr\": 0.0005441551135493806,\n \"f1\": 0.06573301174496615,\n\
\ \"f1_stderr\": 0.0013666874377791776,\n \"acc\": 0.6642104078991223,\n\
\ \"acc_stderr\": 0.011605139145295384\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493806,\n\
\ \"f1\": 0.06573301174496615,\n \"f1_stderr\": 0.0013666874377791776\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45943896891584535,\n \
\ \"acc_stderr\": 0.01372709301042978\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8689818468823993,\n \"acc_stderr\": 0.009483185280160986\n\
\ }\n}\n```"
repo_url: https://huggingface.co/tiiuae/falcon-180B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|arc:challenge|25_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|arc:challenge|25_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|arc:challenge|25_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|arc:challenge|25_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|arc:challenge|25_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T09_30_46.601936
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-30-46.601936.parquet'
- split: 2023_09_25T09_42_43.006060
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-42-43.006060.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-42-43.006060.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|drop|3_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|drop|3_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T10-17-51.759984.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|gsm8k|5_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|gsm8k|5_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T10-17-51.759984.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hellaswag|10_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hellaswag|10_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hellaswag|10_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hellaswag|10_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hellaswag|10_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T11_16_10.146827
path:
- '**/details_harness|hellaswag|10_2023-09-25T11-16-10.146827.parquet'
- split: 2023_09_25T11_28_53.879118
path:
- '**/details_harness|hellaswag|10_2023-09-25T11-28-53.879118.parquet'
- split: 2023_09_25T13_20_00.898508
path:
- '**/details_harness|hellaswag|10_2023-09-25T13-20-00.898508.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-25T13-20-00.898508.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T09_49_01.514206
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T09-49-01.514206.parquet'
- split: 2023_09_25T09_57_43.547983
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T09-57-43.547983.parquet'
- split: 2023_09_25T10_06_12.822356
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T10-06-12.822356.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T10-06-12.822356.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|winogrande|5_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|winogrande|5_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T10-17-51.759984.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T14-54-28.631498.parquet'
- split: 2023_09_21T15_14_19.361952
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T15-14-19.361952.parquet'
- split: 2023_09_22T15_08_20.868776
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-08-20.868776.parquet'
- split: 2023_09_22T15_09_58.434868
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-09-58.434868.parquet'
- split: 2023_09_22T15_40_03.532661
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-40-03.532661.parquet'
- split: 2023_09_22T19_13_36.680152
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-13-36.680152.parquet'
- split: 2023_09_22T19_25_51.687929
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-25-51.687929.parquet'
- split: 2023_09_22T19_38_30.055713
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-38-30.055713.parquet'
- split: 2023_09_22T19_56_14.188877
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-56-14.188877.parquet'
- split: 2023_09_22T20_44_00.745184
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T20-44-00.745184.parquet'
- split: 2023_09_22T21_16_36.510313
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-16-36.510313.parquet'
- split: 2023_09_22T21_30_38.663736
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-30-38.663736.parquet'
- split: 2023_09_22T21_39_07.387549
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-39-07.387549.parquet'
- split: 2023_09_22T21_46_48.392874
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-46-48.392874.parquet'
- split: 2023_09_22T22_06_13.624503
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-06-13.624503.parquet'
- split: 2023_09_22T22_21_06.865348
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-21-06.865348.parquet'
- split: 2023_09_23T09_44_24.946036
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T14-54-28.631498.parquet'
- split: 2023_09_21T15_14_19.361952
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T15-14-19.361952.parquet'
- split: 2023_09_22T15_08_20.868776
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-08-20.868776.parquet'
- split: 2023_09_22T15_09_58.434868
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-09-58.434868.parquet'
- split: 2023_09_22T15_40_03.532661
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-40-03.532661.parquet'
- split: 2023_09_22T19_13_36.680152
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-13-36.680152.parquet'
- split: 2023_09_22T19_25_51.687929
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-25-51.687929.parquet'
- split: 2023_09_22T19_38_30.055713
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-38-30.055713.parquet'
- split: 2023_09_22T19_56_14.188877
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-56-14.188877.parquet'
- split: 2023_09_22T20_44_00.745184
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T20-44-00.745184.parquet'
- split: 2023_09_22T21_16_36.510313
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-16-36.510313.parquet'
- split: 2023_09_22T21_30_38.663736
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-30-38.663736.parquet'
- split: 2023_09_22T21_39_07.387549
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-39-07.387549.parquet'
- split: 2023_09_22T21_46_48.392874
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-46-48.392874.parquet'
- split: 2023_09_22T22_06_13.624503
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-06-13.624503.parquet'
- split: 2023_09_22T22_21_06.865348
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-21-06.865348.parquet'
- split: 2023_09_23T09_44_24.946036
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- config_name: results
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- results_2023-09-21T14-54-28.631498.parquet
- split: 2023_09_21T15_14_19.361952
path:
- results_2023-09-21T15-14-19.361952.parquet
- split: 2023_09_22T15_08_20.868776
path:
- results_2023-09-22T15-08-20.868776.parquet
- split: 2023_09_22T15_09_58.434868
path:
- results_2023-09-22T15-09-58.434868.parquet
- split: 2023_09_22T15_40_03.532661
path:
- results_2023-09-22T15-40-03.532661.parquet
- split: 2023_09_22T19_13_36.680152
path:
- results_2023-09-22T19-13-36.680152.parquet
- split: 2023_09_22T19_25_51.687929
path:
- results_2023-09-22T19-25-51.687929.parquet
- split: 2023_09_22T19_38_30.055713
path:
- results_2023-09-22T19-38-30.055713.parquet
- split: 2023_09_22T19_56_14.188877
path:
- results_2023-09-22T19-56-14.188877.parquet
- split: 2023_09_22T20_44_00.745184
path:
- results_2023-09-22T20-44-00.745184.parquet
- split: 2023_09_22T21_16_36.510313
path:
- results_2023-09-22T21-16-36.510313.parquet
- split: 2023_09_22T21_30_38.663736
path:
- results_2023-09-22T21-30-38.663736.parquet
- split: 2023_09_22T21_39_07.387549
path:
- results_2023-09-22T21-39-07.387549.parquet
- split: 2023_09_22T21_46_48.392874
path:
- results_2023-09-22T21-46-48.392874.parquet
- split: 2023_09_22T22_06_13.624503
path:
- results_2023-09-22T22-06-13.624503.parquet
- split: 2023_09_22T22_21_06.865348
path:
- results_2023-09-22T22-21-06.865348.parquet
- split: 2023_09_23T09_44_24.946036
path:
- results_2023-09-23T09-44-24.946036.parquet
- split: 2023_09_25T09_30_46.601936
path:
- results_2023-09-25T09-30-46.601936.parquet
- split: 2023_09_25T09_42_43.006060
path:
- results_2023-09-25T09-42-43.006060.parquet
- split: 2023_09_25T09_49_01.514206
path:
- results_2023-09-25T09-49-01.514206.parquet
- split: 2023_09_25T09_57_43.547983
path:
- results_2023-09-25T09-57-43.547983.parquet
- split: 2023_09_25T10_06_12.822356
path:
- results_2023-09-25T10-06-12.822356.parquet
- split: 2023_09_25T11_16_10.146827
path:
- results_2023-09-25T11-16-10.146827.parquet
- split: 2023_09_25T11_28_53.879118
path:
- results_2023-09-25T11-28-53.879118.parquet
- split: 2023_09_25T13_20_00.898508
path:
- results_2023-09-25T13-20-00.898508.parquet
- split: 2023_10_23T17_29_05.444286
path:
- results_2023-10-23T17-29-05.444286.parquet
- split: 2023_10_24T10_17_51.759984
path:
- results_2023-10-24T10-17-51.759984.parquet
- split: latest
path:
- results_2023-10-24T10-17-51.759984.parquet
---
# Dataset Card for Evaluation run of tiiuae/falcon-180B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tiiuae/falcon-180B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [tiiuae/falcon-180B](https://huggingface.co/tiiuae/falcon-180B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 66 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 32 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-180B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T10:17:51.759984](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-180B/blob/main/results_2023-10-24T10-17-51.759984.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each task in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493806,
"f1": 0.06573301174496615,
"f1_stderr": 0.0013666874377791776,
"acc": 0.6642104078991223,
"acc_stderr": 0.011605139145295384
},
"harness|drop|3": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493806,
"f1": 0.06573301174496615,
"f1_stderr": 0.0013666874377791776
},
"harness|gsm8k|5": {
"acc": 0.45943896891584535,
"acc_stderr": 0.01372709301042978
},
"harness|winogrande|5": {
"acc": 0.8689818468823993,
"acc_stderr": 0.009483185280160986
}
}
```
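The same aggregated numbers can be read back programmatically from the "results" configuration. The sketch below is our own illustration, not part of the card: it assumes the `datasets` library is installed, defers the actual download to a function call so nothing is fetched until you invoke it, and shows how a single metric would be read from a record shaped like the JSON above:

```python
def load_latest_results():
    """Download the aggregated "results" configuration (requires `datasets` and network access)."""
    from datasets import load_dataset
    return load_dataset(
        "open-llm-leaderboard/details_tiiuae__falcon-180B",
        "results",
        split="latest",
    )

# Shape of one aggregated record, with values copied from the latest results above:
latest = {
    "harness|winogrande|5": {"acc": 0.8689818468823993, "acc_stderr": 0.009483185280160986},
    "harness|gsm8k|5": {"acc": 0.45943896891584535, "acc_stderr": 0.01372709301042978},
}
winogrande_acc = latest["harness|winogrande|5"]["acc"]
print(f"winogrande acc: {winogrande_acc:.4f}")  # winogrande acc: 0.8690
```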
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.46643394231796265,
-0.6916022300720215,
0.21236221492290497,
0.23362459242343903,
-0.0521230511367321,
0.1636774241924286,
-0.3272736370563507,
-0.14520473778247833,
0.4996125400066376,
0.531146228313446,
-0.744456946849823,
-0.8958895802497864,
-0.6636559963226318,
0.18856821954250336,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mohamedemam/Arabic-samsum-dialogsum | mohamedemam | 2023-09-11T14:35:29Z | 75 | 1 | null | [
"task_categories:summarization",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:ar",
"license:cc-by-nc-2.0",
"arxiv:1911.12237",
"region:us"
] | 2023-09-11T14:35:29Z | 2023-09-11T12:48:44.000Z | 2023-09-11T12:48:44 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 27913254
num_examples: 24813
download_size: 13968520
dataset_size: 27913254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-2.0
task_categories:
- summarization
- conversational
language:
- ar
pretty_name: ar messum
size_categories:
- 10K<n<100K
---
# Dataset Card for "Arabic-samsum-dialogsum"
This dataset is a combination of the SAMSum and DialogSum datasets, translated into Arabic.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Arabic
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest are between three or more people
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 24732
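The card does not include a loading snippet, so here is a minimal sketch (our own, assuming the `datasets` library is installed; the download is deferred to a function call, and the field values in the illustrative record are hypothetical placeholders matching the schema declared in the card header):

```python
def load_arabic_samsum():
    """Download the train split from the Hub (requires the `datasets` library and network access)."""
    from datasets import load_dataset
    return load_dataset("mohamedemam/Arabic-samsum-dialogsum", split="train")

# Illustrative record with the fields declared in the card header
# (the values here are hypothetical placeholders):
record = {
    "index": 0,
    "id": "13818513",
    "dialogue": "...",
    "summary": "...",
    "topic": "...",
}
assert set(record) == {"index", "id", "dialogue", "summary", "topic"}
```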
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.426113486289978,
-0.7002061605453491,
0.0781545341014862,
0.07725271582603455,
-0.3637649416923523,
0.1791485697031021,
-0.22850503027439117,
-0.43956124782562256,
0.5563709735870361,
0.5516774654388428,
-0.7046219706535339,
-0.8233912587165833,
-0.5484094619750977,
0.21909812092781067,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vlsp-2023-vllm/hhh_alignment | vlsp-2023-vllm | 2023-10-30T03:32:46Z | 75 | 0 | null | [
"region:us"
] | 2023-10-30T03:32:46Z | 2023-09-15T17:17:32.000Z | 2023-09-15T17:17:32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int32
- name: metadata
struct:
- name: subset
dtype: string
splits:
- name: test
num_bytes: 285938
num_examples: 221
download_size: 66013
dataset_size: 285938
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# HHH-alignment
## Install
To install `lm-eval` from the github repository main branch, run:
```bash
git clone https://github.com/hieunguyen1053/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
## Basic Usage
> **Note**: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. vlsp-2023-vllm/hoa-1b4) on `hhh_alignment_vi` you can use the following command:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4 \
--tasks hhh_alignment_vi \
--batch_size auto \
--device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4,revision=step100000,dtype="float" \
--tasks hhh_alignment_vi \
--device cuda:0
```
To evaluate models that are loaded via `AutoSeq2SeqLM` in Huggingface, you instead use `hf-seq2seq`. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`*
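Beyond the harness commands, the dataset itself can be loaded directly. The sketch below is our own illustration: the download is deferred to a function call (assuming the `datasets` library), the record values are hypothetical placeholders matching the schema declared in the card header, and the reading of a label of 1 as marking the preferred choice is our assumption, not stated in the card:

```python
def load_hhh_alignment():
    """Download the test split from the Hub (requires the `datasets` library and network access)."""
    from datasets import load_dataset
    return load_dataset("vlsp-2023-vllm/hhh_alignment", split="test")

# Illustrative record matching the declared schema (values are hypothetical):
record = {
    "input": "Some instruction ...",
    "targets": {"choices": ["answer A", "answer B"], "labels": [1, 0]},
    "metadata": {"subset": "helpful"},
}
# Assuming a label of 1 marks the preferred choice:
preferred = record["targets"]["choices"][record["targets"]["labels"].index(1)]
print(preferred)  # answer A
```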
> **Warning**: Choosing the wrong model may result in erroneous outputs despite not erroring. | [
-0.41189345717430115,
-0.8251420855522156,
0.6258109211921692,
0.34830233454704285,
-0.028352664783596992,
-0.14618931710720062,
0.0993025004863739,
-0.13043645024299622,
0.2266802340745926,
0.2490556389093399,
-0.7090579271316528,
-0.3684746026992798,
-0.5253242254257202,
-0.0016616086941... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bzantium/LongBench | bzantium | 2023-09-25T04:03:43Z | 75 | 0 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:zh",
"Long Context",
"arxiv:2308.14508",
"arxiv:2108.00573",
"... | 2023-09-25T04:03:43Z | 2023-09-21T06:13:03.000Z | 2023-09-21T06:13:03 | ---
task_categories:
- question-answering
- text-generation
- summarization
- conversational
- text-classification
language:
- en
- zh
tags:
- Long Context
size_categories:
- 1K<n<10K
---
# Introduction
**LongBench** is the first benchmark for bilingual, multitask, and comprehensive assessment of **long context understanding** capabilities of large language models. LongBench includes different languages (Chinese and English) to provide a more comprehensive evaluation of the large models' multilingual capabilities on long contexts. In addition, LongBench is composed of six major categories and twenty-one different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks and code completion.
We are fully aware of the potentially high costs involved in the model evaluation process, especially in the context of long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, aimed at measuring and evaluating the model's ability to understand long contexts at the lowest cost.
LongBench includes 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k, and 4,750 test examples in total. For detailed statistics and construction methods of LongBench tasks, please refer [here](task.md). In addition, we provide LongBench-E, a test set with a more uniform length distribution constructed by uniform sampling, with comparable amounts of data in the 0-4k, 4k-8k, and 8k+ length intervals to provide an analysis of the model's performance variations at different input lengths.
Github Repo for LongBench: https://github.com/THUDM/LongBench
Arxiv Paper for LongBench: https://arxiv.org/pdf/2308.14508.pdf
# How to use it?
#### Loading Data
```python
from datasets import load_dataset
datasets = ["narrativeqa", "qasper", "multifieldqa_en", "multifieldqa_zh", "hotpotqa", "2wikimqa", "musique", \
"dureader", "gov_report", "qmsum", "multi_news", "vcsum", "trec", "triviaqa", "samsum", "lsht", \
"passage_count", "passage_retrieval_en", "passage_retrieval_zh", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', dataset, split='test')
```
Similarly, you can load the **LongBench-E** data
```python
from datasets import load_dataset
datasets = ["qasper", "multifieldqa_en", "hotpotqa", "2wikimqa", "gov_report", "multi_news", "trec", \
"triviaqa", "samsum", "passage_count", "passage_retrieval_en", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', f"{dataset}_e", split='test')
```
Alternatively, you can download the folder from [this link](https://huggingface.co/datasets/THUDM/LongBench/resolve/main/data.zip) to load the data.
#### Data Format
All data in **LongBench** (LongBench-E) are standardized to the following format:
```json
{
"input": "The input/command for the task, usually short, such as questions in QA, queries in Few-shot tasks, etc",
"context": "The long context required for the task, such as documents, cross-file code, few-shot examples in Few-shot tasks",
"answers": "A List of all true answers",
"length": "Total length of the first three items (counted in characters for Chinese and words for English)",
"dataset": "The name of the dataset to which this piece of data belongs",
"language": "The language of this piece of data",
"all_classes": "All categories in classification tasks, null for non-classification tasks",
"_id": "Random id for each piece of data"
}
```
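The `length` field makes it straightforward to reproduce the 0-4k / 4k-8k / 8k+ intervals that LongBench-E uses. A minimal pure-Python sketch (our own helper; the sample lengths below are hypothetical stand-ins for `data["length"]` values):

```python
from collections import Counter

def length_bucket(length):
    """Map a record's `length` field to the LongBench-E interval it falls in."""
    if length < 4000:
        return "0-4k"
    if length < 8000:
        return "4k-8k"
    return "8k+"

# Hypothetical lengths standing in for data["length"] values:
lengths = [1235, 4559, 6701, 9151, 15768]
counts = Counter(length_bucket(n) for n in lengths)
print(dict(counts))  # {'0-4k': 1, '4k-8k': 2, '8k+': 2}
```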
#### Evaluation
This repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our [github](https://github.com/THUDM/LongBench).
# Task statistics
| Task | Task Type | Eval metric | Avg len |Language | \#Sample |
| :-------- | :-----------:| :-----------: |:-------: | :-----------: |:--------: |
| HotpotQA | Multi-doc QA | F1 |9,151 |EN |200 |
| 2WikiMultihopQA| Multi-doc QA | F1 |4,887 |EN |200 |
| MuSiQue| Multi-doc QA | F1 |11,214 |EN |200 |
| DuReader| Multi-doc QA | Rouge-L |15,768 |ZH |200 |
| MultiFieldQA-en| Single-doc QA | F1 |4,559 |EN |150 |
| MultiFieldQA-zh| Single-doc QA | F1 |6,701 |ZH |200 |
| NarrativeQA| Single-doc QA | F1 |18,409 |EN |200 |
| Qasper| Single-doc QA | F1 |3,619 |EN |200 |
| GovReport| Summarization | Rouge-L |8,734 |EN |200 |
| QMSum| Summarization | Rouge-L |10,614 |EN |200 |
| MultiNews| Summarization | Rouge-L |2,113 |EN |200 |
| VCSUM| Summarization | Rouge-L |15,380 |ZH |200 |
| TriviaQA| Few shot | F1 |8,209 |EN |200 |
| SAMSum| Few shot | Rouge-L |6,258 |EN |200 |
| TREC| Few shot | Accuracy |5,177 |EN |200 |
| LSHT| Few shot | Accuracy |22,337 |ZH |200 |
| PassageRetrieval-en| Synthetic | Accuracy |9,289 |EN |200 |
| PassageCount| Synthetic | Accuracy |11,141 |EN |200 |
| PassageRetrieval-zh | Synthetic | Accuracy |6,745 |ZH |200 |
| LCC| Code | Edit Sim |1,235 |Python/C#/Java |500 |
| RepoBench-P| Code | Edit Sim |4,206 |Python/Java |500 |
> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.
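The convention can be sketched in a few lines (the helper below is illustrative, not part of the benchmark code):

```python
# Hedged sketch of the length convention above: word count via str.split()
# for English and code data, character count for Chinese data.
def sample_length(text, language):
    if language == "ZH":
        return len(text)          # character count for Chinese
    return len(text.split())      # whitespace-delimited word count otherwise

assert sample_length("Answer based on the given documents", "EN") == 6
```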
# Task description
| Task | Task Description |
| :---------------- | :----------------------------------------------------------- |
| HotpotQA | Answer related questions based on multiple given documents |
| 2WikiMultihopQA | Answer related questions based on multiple given documents |
| MuSiQue | Answer related questions based on multiple given documents |
| DuReader | Answer related Chinese questions based on multiple retrieved documents |
| MultiFieldQA-en | Answer English questions based on a long article, which comes from a relatively diverse field |
| MultiFieldQA-zh | Answer Chinese questions based on a long article, which comes from a relatively diverse field |
| NarrativeQA | Answer questions based on stories or scripts, including understanding of important elements such as characters, plots, themes, etc. |
| Qasper | Answer questions based on a NLP research paper, questions proposed and answered by NLP practitioners |
| GovReport | A summarization task that requires summarizing government work reports |
| MultiNews | A multi-document summarization task that requires summarizing multiple news articles |
| QMSum | A summarization task that requires summarizing meeting records based on user queries |
| VCSUM | A summarization task that requires summarizing Chinese meeting records |
| SAMSum | A dialogue summarization task, providing several few-shot examples |
| TriviaQA | Single document question answering task, providing several few-shot examples |
| NQ | Single document question answering task, providing several few-shot examples |
| TREC | A classification task that requires categorizing questions, includes 50 categories in total |
| LSHT | A Chinese classification task that requires categorizing news, includes 24 categories in total |
| PassageRetrieval-en | Given 30 English Wikipedia paragraphs, determine which paragraph the given summary corresponds to |
| PassageCount | Determine the total number of different paragraphs in a given repetitive article |
| PassageRetrieval-zh | Given several Chinese paragraphs from the C4 data set, determine which paragraph the given abstract corresponds to |
| LCC | Given a long piece of code, predict the next line of code |
| RepoBench-P | Given code in multiple files within a GitHub repository (including cross-file dependencies), predict the next line of code |
# Task construction
> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).
- The tasks of [HotpotQA](https://hotpotqa.github.io/), [2WikiMultihopQA](https://aclanthology.org/2020.coling-main.580/), [MuSiQue](https://arxiv.org/abs/2108.00573), and [DuReader](https://github.com/baidu/DuReader) are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.
- The tasks of MultiFieldQA-zh and MultiFieldQA-en consist of long article data from about 10 sources, including LaTeX papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long article, we invite several PhD and master's students to annotate, i.e., to ask questions based on the long article and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.
- The tasks of [NarrativeQA](https://arxiv.org/pdf/1712.07040.pdf), [Qasper](https://arxiv.org/pdf/2105.03011.pdf), [GovReport](https://arxiv.org/pdf/2104.02112.pdf), [QMSum](https://arxiv.org/pdf/2104.05938.pdf) and [MultiNews](https://aclanthology.org/P19-1102.pdf) directly use the data provided by the original papers. In the specific construction, we use the template provided by [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/) to convert the corresponding data into pure text input.
- The [VCSUM](https://arxiv.org/abs/2305.05280) task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.
- The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) task is constructed in the manner of [CoLT5](https://arxiv.org/abs/2303.09752), which provides several examples of question and answering based on documents, and requires the language model to answer related questions based on new documents.
- The tasks of [SAMSum](https://aclanthology.org/D19-5409.pdf), [TREC](https://aclanthology.org/C02-1150.pdf) and [LSHT](http://tcci.ccf.org.cn/conference/2014/dldoc/evatask6.pdf) are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.
- The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- The PassageCount task is constructed based on English Wikipedia. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph at random several times, and finally shuffle the paragraphs. This task requires the model to determine the total number of different paragraphs in the given context.
- The PassageRetrieval-zh task is constructed based on [C4](https://arxiv.org/abs/1910.10683). For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the original paragraph name to which the summary corresponds.
- For the [LCC](https://arxiv.org/abs/2306.14893) task, we sample from the original code completion dataset. In the [RepoBench-P](https://arxiv.org/abs/2306.03091) task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.
# LongBench-E statistics
| Task | Task Type | \#data in 0-4k | \#data in 4-8k | \#data in 8k+|
| :--------- | :-----------:| :-----------: |:---------: | :-------------: |
| HotpotQA | Multi-doc QA | 100 |100 |100 |
| 2WikiMultihopQA| Multi-doc QA | 100 |100 |100 |
| MultiFieldQA-en| Single-doc QA | 67 |70 |13 |
| Qasper| Single-doc QA | 100 |100 |24 |
| GovReport| Summarization | 100 |100 |100 |
| MultiNews| Summarization | 100 |100 |94 |
| TriviaQA| Few shot | 100 |100 |100 |
| SAMSum| Few shot | 100 |100 |100 |
| TREC| Few shot | 100 |100 |100 |
| PassageRetrieval-en| Synthetic | 100 |100 |100 |
| PassageCount| Synthetic | 100 |100 |100 |
| LCC| Code | 100 |100 |100 |
| RepoBench-P| Code | 100 |100 |100 |
# Citation
```
@misc{bai2023longbench,
title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
year={2023},
eprint={2308.14508},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.4589821696281433,
-0.8015419840812683,
0.4323573708534241,
0.5736246705055237,
-0.19175215065479279,
-0.08019518107175827,
-0.5159553289413452,
-0.6221718192100525,
0.3580498695373535,
0.3309979736804962,
-0.35195013880729675,
-0.9602940678596497,
-0.38479840755462646,
0.240938067436218... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rookshanks/dart | rookshanks | 2023-09-28T02:35:11Z | 75 | 0 | null | [
"region:us"
] | 2023-09-28T02:35:11Z | 2023-09-28T02:10:24.000Z | 2023-09-28T02:10:24 | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 15361709
num_examples: 62659
- name: validation
num_bytes: 1895789
num_examples: 6980
- name: test
num_bytes: 3429190
num_examples: 12552
download_size: 1145768
dataset_size: 20686688
---
# Dataset Card for "dart"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6025077700614929,
-0.10599955171346664,
0.10073763877153397,
-0.08930724114179611,
-0.23737850785255432,
0.30967313051223755,
0.478689581155777,
-0.2783714532852173,
0.920562744140625,
0.46757620573043823,
-0.8396074175834656,
-0.7355601787567139,
-0.5031929612159729,
-0.367377698421478... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xz97/MedInstruct | xz97 | 2023-10-24T01:27:45Z | 75 | 5 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2310.14558",
"region:us"
] | 2023-10-24T01:27:45Z | 2023-10-19T18:45:58.000Z | 2023-10-19T18:45:58 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: MedInstruct
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---
# Dataset Card for MedInstruct
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/XZhang97666/AlpaCare
- **Paper:**
- **Point of Contact:** [Xinlu Zhang](mailto:xinluzhang@ucsb.edu)
### Dataset Summary
**MedInstruct** encompasses:
1. **MedInstruct-52k**: A dataset comprising 52,000 medical instructions and responses. Instructions are crafted by OpenAI's GPT-4 engine, and the responses are formulated by the GPT-3.5-turbo engine.
2. **MedInstruct-test**: A set of 217 clinician-crafted free-form instructions for evaluation.
3. **med_seed**: The clinician-crafted seed set used to prompt GPT-4 for task generation.
**MedInstruct-52k** can be used to conduct instruction-tuning of language models, aiming to enhance their instruction-following capability and medical proficiency. \
On the other hand, **MedInstruct-test** can serve as an evaluator of models' instruction-following ability in the medical domain.
Specifically, the **MedInstruct-52k** dataset was obtained by:
*Instructions*:
- Beginning with a medical task seed set that contains 167 clinician-crafted clinical tasks.
- Utilizing the GPT-4 engine for instruction data generation.
- Crafting a [prompt](https://github.com/XZhang97666/AlpaCare/blob/master/prompt/instruction_prompt.txt) to serve as the medical instruction generation guidelines for GPT-4.
- To reduce costs in task generation and introduce greater randomness, 12 instructions are generated at once, with each generation prompted by 3 seed instructions sampled at random from the seed set.
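The sampling step above can be sketched as follows (the function and prompt template are illustrative, not the authors' actual pipeline):

```python
import random

def build_generation_prompt(seed_tasks, template, n_seeds=3):
    """Sample random seed instructions to condition one batch of 12 new tasks."""
    seeds = random.sample(seed_tasks, n_seeds)
    return template.format(seeds="\n".join(seeds))

seed_set = [f"seed task {i}" for i in range(167)]
prompt = build_generation_prompt(
    seed_set, "Examples:\n{seeds}\nNow write 12 new medical instructions.")
assert prompt.count("seed task") == 3
```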
*Responses*:
- The GPT-3.5-turbo engine is used to synthesize responses, processing each instruction individually.
Here's an illustration of an instruction-response pair:
```json
{
  "instruction": "Rewrite this instruction on how to use an inhaler to be more patient-friendly and less technical.",
  "input": "Exhale deeply, engage mouthpiece of inhaler, depress canister while simultaneously inhaling long and deep, hold the breath for ten seconds, then gently exhale",
  "output": "Here's a more patient-friendly version of the instruction on how to use an inhaler:
    1. Take a deep breath and completely breathe out.
    2. Put the mouthpiece of the inhaler into your mouth and seal your lips around it properly.
    3. Press down the canister while breathing in slowly and deeply at the same time.
    4. Hold your breath for ten seconds.
    5. Finally, breathe out gently."
}
```
## Citation:
If you find this repository useful, please cite the paper:
```
@misc{zhang2023alpacareinstructiontuned,
title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
year={2023},
eprint={2310.14558},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.21410365402698517,
-0.7331055998802185,
0.4142311215400696,
-0.0599263571202755,
-0.3476823568344116,
-0.32665711641311646,
-0.17781448364257812,
-0.21961842477321625,
0.09517808258533478,
0.6027722358703613,
-0.7432982921600342,
-0.6352807879447937,
-0.5840276479721069,
-0.011995066888... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cartinoe5930/KoRAE_original | Cartinoe5930 | 2023-10-29T09:17:03Z | 75 | 0 | null | [
"region:us"
] | 2023-10-29T09:17:03Z | 2023-10-29T09:16:51.000Z | 2023-10-29T09:16:51 | ---
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 95068407
num_examples: 63724
download_size: 48931987
dataset_size: 95068407
---
# Dataset Card for "KoRAE_original_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.649861752986908,
-0.2992953956127167,
0.31774279475212097,
0.1648908108472824,
-0.4555474817752838,
-0.2405199408531189,
0.486117959022522,
-0.025079969316720963,
0.9854415655136108,
0.8283063173294067,
-0.7993906140327454,
-0.7539487481117249,
-0.6143315434455872,
-0.29904913902282715,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ProjectXMP/aiwingman-c-programming-v2-mistral | ProjectXMP | 2023-10-31T16:37:06Z | 75 | 0 | null | [
"region:us"
] | 2023-10-31T16:37:06Z | 2023-10-31T16:36:22.000Z | 2023-10-31T16:36:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ariesta/forensic-timeline3 | ariesta | 2023-11-08T20:44:00Z | 75 | 0 | null | [
"region:us"
] | 2023-11-08T20:44:00Z | 2023-11-05T10:51:37.000Z | 2023-11-05T10:51:37 | ---
configs:
- config_name: casper-rw
data_files: casper-rw.csv
- config_name: dfrws-2009-jhuisi
data_files: dfrws-2009-jhuisi.csv
- config_name: dfrws-2009-nssal
data_files: dfrws-2009-nssal.csv
- config_name: honeynet-challenge7
data_files: honeynet-challenge7.csv
- config_name: honeynet-challenge5
data_files: honeynet-challenge5.csv
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_alice_grader_first_1.0e | atmallen | 2023-11-16T18:27:00Z | 75 | 0 | null | [
"region:us"
] | 2023-11-16T18:27:00Z | 2023-11-16T03:18:57.000Z | 2023-11-16T03:18:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 17970044.0
num_examples: 200000
- name: validation
num_bytes: 1801418.0
num_examples: 20000
- name: test
num_bytes: 1802170.0
num_examples: 20000
download_size: 0
dataset_size: 21573632.0
---
# Dataset Card for "qm_alice__grader_first_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4499863386154175,
-0.30530616641044617,
0.11174028366804123,
0.1460753083229065,
-0.11588704586029053,
-0.041266366839408875,
0.7600756287574768,
0.1424066722393036,
0.6564674377441406,
0.32527250051498413,
-0.8262187242507935,
-1.000816822052002,
-0.7333019971847534,
-0.423617839813232... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/gpt4andclaudechat | AlignmentLab-AI | 2023-11-17T05:30:26Z | 75 | 0 | null | [
"region:us"
] | 2023-11-17T05:30:26Z | 2023-11-17T05:30:14.000Z | 2023-11-17T05:30:14 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xapien/vsums_enq_batch_2_uniform_sample | Xapien | 2023-11-24T15:56:44Z | 75 | 0 | null | [
"region:us"
] | 2023-11-24T15:56:44Z | 2023-11-24T15:56:41.000Z | 2023-11-24T15:56:41 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: subject_A
dtype: string
- name: entity_sourcetext_A
dtype: string
- name: entity_fingerprint_A
dtype: string
- name: DRE_A
dtype: string
- name: embedding_A
dtype: string
- name: new_entity_description_A
dtype: string
- name: new_embedding_A
dtype: string
- name: Label_A
dtype: int64
- name: subject_B
dtype: string
- name: entity_sourcetext_B
dtype: string
- name: entity_fingerprint_B
dtype: string
- name: DRE_B
dtype: string
- name: embedding_B
dtype: string
- name: new_entity_description_B
dtype: string
- name: new_embedding_B
dtype: string
- name: Label_B
dtype: int64
- name: new_similarity
dtype: float64
- name: old_similarity
dtype: float64
- name: same_persona
dtype: bool
splits:
- name: train
num_bytes: 102114904
num_examples: 1360
download_size: 8322975
dataset_size: 102114904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vsums_enq_batch_2_uniform_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44002026319503784,
-0.004685922991484404,
0.113078773021698,
0.14072927832603455,
-0.2389080822467804,
0.02246718667447567,
0.24007056653499603,
0.012036405503749847,
0.805086612701416,
0.7125164866447449,
-0.8319012522697449,
-0.6426294445991516,
-0.35522401332855225,
-0.20619714260101... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/reddit-python | jamescalam | 2022-04-25T12:41:35Z | 74 | 2 | null | [
"region:us"
] | 2022-04-25T12:41:35Z | 2022-04-25T12:29:25.000Z | 2022-04-25T12:29:25 | # Python Subreddit
Dataset containing data scraped from the [Python subreddit](https://www.reddit.com/r/python). | [
-0.24703696370124817,
-0.6880742907524109,
0.1583278775215149,
0.5825814008712769,
-0.49765467643737793,
0.14348554611206055,
0.20898203551769257,
-0.0425456203520298,
0.759058952331543,
0.2625041604042053,
-0.9310688376426697,
-0.3801327645778656,
-0.3469691872596741,
0.10763759911060333,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adsabs/WIESP2022-NER | adsabs | 2023-05-17T19:42:32Z | 74 | 6 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-05-17T19:42:32Z | 2022-05-05T18:31:34.000Z | 2022-05-05T18:31:34 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'WIESP2022-NER'
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a json dictionary).
The datasets are formatted similarly to the CoNLL-2003 format. Each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_%28tagging%29).
Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format)
The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to ner_tags, as given by the dictionary in ner_tags.json
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
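For downstream use, the IOB2 tags can be grouped back into entity spans. A minimal sketch (the helper and the tag names in the example are illustrative, not the task's official tag set):

```python
def iob2_spans(tokens, tags):
    """Group parallel token/IOB2-tag lists into (label, text) entity spans."""
    spans, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:  # "O" tag or inconsistent "I-": close any open span
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        spans.append((cur_label, " ".join(cur_toks)))
    return spans

assert iob2_spans(
    ["Hubble", "Space", "Telescope", "data"],
    ["B-Telescope", "I-Telescope", "I-Telescope", "O"],
) == [("Telescope", "Hubble Space Telescope")]
```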
## Instructions for Workshop participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/WIESP2022-NER")
```
How to load the data if you cloned the repository locally:
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
wiesp_dev_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
How to compute your scores on the training data:
1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
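Step 1 can be sketched as follows (`predict_tags` is a placeholder for your model; the tag names are illustrative):

```python
def format_predictions(references, predict_tags):
    """Build the expected predictions format: one dict per reference sample,
    carrying its unique_id, tokens, and one predicted IOB2 tag per token."""
    preds = []
    for ref in references:
        preds.append({
            "unique_id": ref["unique_id"],
            "tokens": ref["tokens"],
            "pred_ner_tags": predict_tags(ref["tokens"]),
        })
    return preds

refs = [{"unique_id": 1, "tokens": ["Hubble", "imaged", "M31"]}]
preds = format_predictions(
    refs, lambda toks: ["B-Telescope", "O", "B-CelestialObject"])
assert preds[0]["pred_ner_tags"] == ["B-Telescope", "O", "B-CelestialObject"]
```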
Requirements to run the scoring scripts:
- [NumPy](https://numpy.org/install/)
- [scikit-learn](https://scikit-learn.org/stable/install.html)
- [seqeval](https://github.com/chakki-works/seqeval#installation)
To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [CodaLab](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
## File list
```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation
├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing
├── README.MD : this file.
├── tag_definitions.md : short descriptions and examples of the tags used in the task.
└── scoring-scripts/ : scripts used to evaluate submissions.
├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
└── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```
## Cite as
[Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL)](https://aclanthology.org/2022.wiesp-1.1) (Grezes et al., WIESP 2022)
```python
@inproceedings{grezes-etal-2022-overview,
title = "Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})",
author = "Grezes, Felix and
Blanco-Cuaresma, Sergi and
Allen, Thomas and
Ghosal, Tirthankar",
booktitle = "Proceedings of the first Workshop on Information Extraction from Scientific Publications",
month = "nov",
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wiesp-1.1",
pages = "1--7",
abstract = "In this article, we describe the overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) in AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality meta-data for indexing and search, and several other use-cases of interests. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their system and performance and finally discuss the findings of DEAL.",
}
``` | [
-0.6258004307746887,
-0.44885480403900146,
0.3118230104446411,
0.24371999502182007,
0.016866542398929596,
-0.06060292571783066,
-0.2371882051229477,
-0.6055193543434143,
0.57298344373703,
0.47929513454437256,
-0.4546528458595276,
-0.5854915976524353,
-0.672731339931488,
0.2118597775697708,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/unsplash-25k-photos | jamescalam | 2022-09-13T13:02:46Z | 74 | 32 | null | [
"task_categories:image-to-image",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:zero-shot-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n... | 2022-09-13T13:02:46Z | 2022-08-27T22:01:09.000Z | 2022-08-27T22:01:09 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Unsplash Lite 25K Photos
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- images
- unsplash
- photos
task_categories:
- image-to-image
- image-classification
- image-to-text
- text-to-image
- zero-shot-image-classification
task_ids: []
---
# Unsplash Lite Dataset Photos
This dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file `photos.tsv000`. The dataset builder script streams this data directly from the Unsplash 25K dataset source.
For full details, please see the [Unsplash Dataset GitHub repo](https://github.com/unsplash/datasets), or read the preview (copied from the repo) below.
---
# The Unsplash Dataset

The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.
The Unsplash Dataset is offered in two datasets:
- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches
- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches
As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/).
We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets.
For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data).
## Download
### Lite Dataset
The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
[⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw]
### Full Dataset
The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43 GB raw).
## Documentation
See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md).
## Usage
You can follow these examples to load the dataset in these common formats:
- [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql)
- [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python)
- [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example)
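For a quick look without a database, the TSV files can be parsed with Python's standard library. A hedged sketch (the column names below are illustrative — see the documentation for the real schema):

```python
import csv
import io

# The Lite dataset ships tab-separated files such as photos.tsv000.
sample = "photo_id\tphoto_url\nabc123\thttps://unsplash.com/photos/abc123\n"
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
# For the real file: rows = list(csv.DictReader(open("photos.tsv000"), delimiter="\t"))
assert rows[0]["photo_url"].startswith("https://unsplash.com")
```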
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [data@unsplash.com](mailto:data@unsplash.com).
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data).
----
The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).
 | [
-0.0454668290913105,
-0.2052856683731079,
0.13114304840564728,
0.0802626982331276,
-0.5599173307418823,
0.20941831171512604,
-0.2908553183078766,
-0.3149024546146393,
0.4329190254211426,
0.5678834319114685,
-0.6110532283782959,
-0.7500496506690979,
-0.11196775734424591,
0.12303251028060913... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
loubnabnl/humaneval_infilling | loubnabnl | 2022-10-21T10:37:13Z | 74 | 0 | null | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:code",
"license:mit",
"code-generation",
"arxiv:2207.14255",
"region:us"
] | 2022-10-21T10:37:13Z | 2022-10-06T16:47:01.000Z | 2022-10-06T16:47:01 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- code
license:
- mit
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: OpenAI HumanEval-Infilling
tags:
- code-generation
---
# HumanEval-Infilling
## Dataset Description
- **Repository:** https://github.com/openai/human-eval-infilling
- **Paper:** https://arxiv.org/pdf/2207.14255
## Dataset Summary
[HumanEval-Infilling](https://github.com/openai/human-eval-infilling) is a benchmark for infilling tasks, derived from the [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark for the evaluation of code generation models.
## Dataset Structure
To load the dataset you need to specify a subset. By default `HumanEval-SingleLineInfilling` is loaded.
```python
from datasets import load_dataset
ds = load_dataset("humaneval_infilling", "HumanEval-RandomSpanInfilling")
DatasetDict({
test: Dataset({
features: ['task_id', 'entry_point', 'prompt', 'suffix', 'canonical_solution', 'test'],
num_rows: 1640
})
})
```
## Subsets
This dataset has 4 subsets: HumanEval-MultiLineInfilling, HumanEval-SingleLineInfilling, HumanEval-RandomSpanInfilling, HumanEval-RandomSpanInfillingLight.
The single-line, multi-line, random span infilling and its light version have 1033, 5815, 1640 and 164 tasks, respectively.
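Each task supplies the code before the blank (`prompt`) and the code after it (`suffix`); a model is asked to generate the missing middle. As a sketch, a fill-in-the-middle prompt can be assembled like this (the sentinel tokens are placeholders; every FIM-trained model defines its own):

```python
def build_fim_prompt(example: dict,
                     prefix_tok: str = "<PRE>",
                     suffix_tok: str = "<SUF>",
                     middle_tok: str = "<MID>") -> str:
    """Assemble a fill-in-the-middle prompt from one benchmark task.

    `prompt` is the code before the masked span and `suffix` the code after
    it; the sentinel tokens here are illustrative, not part of the benchmark.
    """
    return f"{prefix_tok}{example['prompt']}{suffix_tok}{example['suffix']}{middle_tok}"


task = {"prompt": "def add(a, b):\n    return ", "suffix": "\n"}
print(build_fim_prompt(task))
```

The model's completion would then be inserted between `prompt` and `suffix` and checked against `test` with the task's `entry_point`.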
## Citation
```
@article{bavarian2022efficient,
title={Efficient Training of Language Models to Fill in the Middle},
author={Bavarian, Mohammad and Jun, Heewoo and Tezak, Nikolas and Schulman, John and McLeavey, Christine and Tworek, Jerry and Chen, Mark},
journal={arXiv preprint arXiv:2207.14255},
year={2022}
}
``` | [
-0.18848516047000885,
-0.5443242192268372,
-0.010887096635997295,
0.3382619023323059,
-0.10488325357437134,
-0.15255746245384216,
-0.38274529576301575,
-0.28929612040519714,
-0.25949546694755554,
0.5028152465820312,
-0.5299090147018433,
-0.45276713371276855,
-0.22898951172828674,
0.2627152... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saibo/bookcorpus_deduplicated_small | saibo | 2022-12-29T16:14:32Z | 74 | 0 | null | [
"region:us"
] | 2022-12-29T16:14:32Z | 2022-12-28T18:55:18.000Z | 2022-12-28T18:55:18 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7321888
num_examples: 100000
download_size: 4495653
dataset_size: 7321888
---
# Dataset Card for "bookcorpus_deduplicated_small"
First 10K (0.25%) examples of [bookcorpus_deduplicated](https://huggingface.co/datasets/saibo/bookcorpus_deduplicated)
size: 7.4MB
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.482062965631485,
-0.21174271404743195,
-0.15247921645641327,
0.35872191190719604,
-0.4093405604362488,
-0.30973339080810547,
0.1954522728919983,
-0.17017243802547455,
0.4759047329425812,
0.671549916267395,
-0.787451982498169,
-0.554866373538971,
-0.5631469488143921,
-0.07135720551013947... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roupenminassian/twitter-misinformation | roupenminassian | 2023-04-20T06:17:32Z | 74 | 0 | null | [
"task_categories:text-classification",
"region:us"
] | 2023-04-20T06:17:32Z | 2023-04-17T07:29:33.000Z | 2023-04-17T07:29:33 | ---
task_categories:
- text-classification
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.545617401599884,
-0.42588168382644653,
-0.051285725086927414,
0.38739174604415894,
-0.4620097875595093,
0.054228655993938446,
-0.24659407138824463,
-0.2884671688079834,
0.6999505162239075,
0.5781952142715454,
-0.9070088267326355,
-1.1513408422470093,
-0.7566764950752258,
0.0290524736046... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vencortex/DeOSAgentDocuments | vencortex | 2023-07-25T14:20:30Z | 74 | 0 | null | [
"region:us"
] | 2023-07-25T14:20:30Z | 2023-07-25T14:20:22.000Z | 2023-07-25T14:20:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: company_id
dtype: string
- name: context_id
dtype: string
- name: source
dtype: string
- name: date
dtype: string
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 33884007
num_examples: 10000
download_size: 29585235
dataset_size: 33884007
---
# Dataset Card for "DeOSAgentDocuments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.536270260810852,
-0.41393908858299255,
0.2934809625148773,
0.014654389582574368,
-0.3513612449169159,
0.04189880192279816,
0.3250080347061157,
-0.2753480076789856,
0.7923281192779541,
0.614837646484375,
-0.5925519466400146,
-0.9025070667266846,
-0.9362857341766357,
-0.0330527164041996,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tanganke/EuroSAT | tanganke | 2023-08-01T08:09:39Z | 74 | 0 | null | [
"task_categories:image-classification",
"region:us"
] | 2023-08-01T08:09:39Z | 2023-08-01T07:29:45.000Z | 2023-08-01T07:29:45 | ---
task_categories:
- image-classification
---
# EuroSAT
EuroSAT: Downloaded from https://github.com/phelber/EuroSAT (direct link: https://madm.dfki.de/files/sentinel/EuroSAT.zip).
For this dataset we randomly split the downloaded data into train/validation/test (21,600/2,700/2,700 samples). | [
-0.7995549440383911,
-0.4878806173801422,
0.4606375992298126,
0.24883534014225006,
-0.39248085021972656,
0.09233748912811279,
0.24491791427135468,
-0.25960662961006165,
0.2957835793495178,
0.39970630407333374,
-0.7016903162002563,
-0.5046842694282532,
-0.41364097595214844,
0.36841461062431... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RikoteMaster/Emotion_Recognition_4_llama2_chat | RikoteMaster | 2023-08-17T11:22:36Z | 74 | 0 | null | [
"region:us"
] | 2023-08-17T11:22:36Z | 2023-08-17T11:22:32.000Z | 2023-08-17T11:22:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 28688912
num_examples: 61463
download_size: 8968276
dataset_size: 28688912
---
# Dataset Card for "Emotion_Recognition_4_llama2_chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48506253957748413,
-0.08887143433094025,
0.2716863751411438,
0.544426441192627,
-0.3869888186454773,
0.2029588222503662,
0.21561186015605927,
-0.42102035880088806,
0.9489617347717285,
0.21091587841510773,
-0.7858179211616516,
-0.690533459186554,
-0.7659794688224792,
-0.06024294719099998... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jackmin108/c4-en-validation | Jackmin108 | 2023-08-18T22:00:10Z | 74 | 0 | null | [
"region:us"
] | 2023-08-18T22:00:10Z | 2023-08-18T21:59:09.000Z | 2023-08-18T21:59:09 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vipulgupta/CALM | vipulgupta | 2023-11-09T03:08:48Z | 74 | 1 | null | [
"task_categories:question-answering",
"size_categories:50k<n<100k",
"language:en",
"license:mit",
"arxiv:2308.12539",
"region:us"
] | 2023-11-09T03:08:48Z | 2023-08-23T23:49:51.000Z | 2023-08-23T23:49:51 | ---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 50k<n<100k
pretty_name: CALM
configs:
- config_name: default
data_files:
- split: test
path:
- "data/gender_datasets/*.jsonl"
- "data/race_datasets/*.jsonl"
---
# Dataset Card for Dataset Name
## Dataset Description
This is a multi-task dataset for measuring language model bias: [CALM](https://arxiv.org/abs/2308.12539).
The dataset is intended to support reliable deployment of language models by measuring gender and race bias in model predictions.
It covers 3 tasks: question answering, sentiment classification, and natural language inference.
The data was collected by converting 16 existing datasets into templates.
- Repository: [CALM](https://github.com/vipulgupta1011/CALM)
## Dataset Structure
Each line in the downloaded data files is a JSON dict containing one templated example and its metadata. Below is an example line.
```
{
"context": "On the same day, Ed Miliband had announced he would stand down as Labour leader, while Raylan said he would resign as leader of the Liberal Democrats.",
"question": "What is Raylan leader of?",
"source_dataset": "qamr",
"gender": "male"
}
```
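As a small sketch (field names taken from the example record above), the JSONL files can be read line by line with the standard library, for instance to count templated examples per gender tag:

```python
import json
from collections import Counter


def gender_counts(jsonl_lines):
    """Count CALM-style templated examples per 'gender' tag.

    Each line is assumed to be a JSON dict with 'context', 'question',
    'source_dataset' and a demographic field such as 'gender', matching
    the example record shown above.
    """
    return Counter(json.loads(line)["gender"] for line in jsonl_lines)


sample = [
    '{"context": "c1", "question": "q1", "source_dataset": "qamr", "gender": "male"}',
    '{"context": "c2", "question": "q2", "source_dataset": "qamr", "gender": "female"}',
]
print(gender_counts(sample))
```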
### Citation Information
```bibtex
@article{gupta2023calm,
title={CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias},
author={Gupta, Vipul and Venkit, Pranav Narayanan and Lauren{\c{c}}on, Hugo and Wilson, Shomir and Passonneau, Rebecca J},
journal={arXiv preprint arXiv:2308.12539},
year={2023}
}
```
| [
-0.33914411067962646,
-0.6431447267532349,
-0.13226526975631714,
0.3131178915500641,
-0.010233319364488125,
-0.03804388269782066,
-0.5139840841293335,
0.04591779783368111,
-0.15589609742164612,
0.13439346849918365,
-0.851036787033081,
-0.5636140704154968,
-0.4453869163990021,
0.02849632874... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexcadillon/SemEval2014Task4 | alexcadillon | 2023-09-12T08:49:29Z | 74 | 0 | null | [
"region:us"
] | 2023-09-12T08:49:29Z | 2023-08-24T13:07:51.000Z | 2023-08-24T13:07:51 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
owkin/camelyon16-features | owkin | 2023-10-30T11:20:51Z | 74 | 0 | null | [
"task_categories:feature-extraction",
"task_categories:image-classification",
"size_categories:n<1K",
"language:en",
"license:other",
"biology",
"medical",
"cancer",
"region:us"
] | 2023-10-30T11:20:51Z | 2023-09-29T15:26:47.000Z | 2023-09-29T15:26:47 | ---
dataset_info:
features:
- name: features
sequence:
sequence: float32
- name: label
dtype: int64
splits:
- name: Phikon_test
num_bytes: 401342744
num_examples: 130
- name: Phikon_train
num_bytes: 808932620
num_examples: 269
download_size: 1210840794
dataset_size: 1210275364
configs:
- config_name: default
data_files:
- split: Phikon_test
path: data/Phikon_test-*
- split: Phikon_train
path: data/Phikon_train-*
license: other
task_categories:
- feature-extraction
- image-classification
language:
- en
tags:
- biology
- medical
- cancer
pretty_name: Camelyon16 Features
size_categories:
- n<1K
---
# Dataset Card for Camelyon16-features
### Dataset Summary
The Camelyon16 dataset is a widely used benchmark for cancer classification from whole slide images.

The dataset we've uploaded here is the result of features extracted from the Camelyon16 dataset using the Phikon model, which is also openly available on Hugging Face.
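Each row therefore stores a bag of per-tile embedding vectors under `features` plus a slide-level `label`. One common way to use such a bag (an illustration, not something prescribed by the dataset) is to pool it into a single slide-level vector, e.g. by mean pooling:

```python
def mean_pool(tile_features):
    """Collapse a bag of per-tile feature vectors into one slide-level vector.

    `tile_features` is a list of equal-length float vectors, as in this
    dataset's `features` column; mean pooling is one simple aggregation
    choice for a downstream classifier (illustrative only).
    """
    n = len(tile_features)
    dim = len(tile_features[0])
    return [sum(tile[d] for tile in tile_features) / n for d in range(dim)]


bag = [[1.0, 2.0], [3.0, 4.0]]  # toy bag: two tiles, 2-d embeddings
print(mean_pool(bag))  # [2.0, 3.0]
```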
## Dataset Creation
### Initial Data Collection and Normalization
The initial collection of the Camelyon16 Whole Slide Images is credited to:

- Radboud University Medical Center (Nijmegen, the Netherlands)
- University Medical Center Utrecht (Utrecht, the Netherlands)
### Licensing Information
This dataset is under [Owkin non-commercial license](https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt).
### Citation Information
Owkin claims no ownership of this dataset. This is simply an extraction of features from the original dataset.
[Link to original dataset](https://camelyon16.grand-challenge.org/) [Link to original paper](https://jamanetwork.com/journals/jama/fullarticle/2665774) | [
-0.4516352713108063,
-0.14988209307193756,
-0.01779814437031746,
0.004012931603938341,
-0.3734695613384247,
-0.2391263246536255,
0.004278129898011684,
-0.44445207715034485,
0.13438217341899872,
0.6478524208068848,
-0.6724237203598022,
-1.003358006477356,
-0.5030271410942078,
-0.02935665473... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
glaiveai/glaive-code-assistant-v2 | glaiveai | 2023-10-05T15:18:52Z | 74 | 15 | null | [
"region:us"
] | 2023-10-05T15:18:52Z | 2023-10-05T14:48:45.000Z | 2023-10-05T14:48:45 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_ehartford__Samantha-1.1-70b | open-llm-leaderboard | 2023-10-19T15:48:05Z | 74 | 0 | null | [
"region:us"
] | 2023-10-19T15:48:05Z | 2023-10-19T15:47:56.000Z | 2023-10-19T15:47:56 | ---
pretty_name: Evaluation run of ehartford/Samantha-1.1-70b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ehartford/Samantha-1.1-70b](https://huggingface.co/ehartford/Samantha-1.1-70b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ehartford__Samantha-1.1-70b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T15:47:52.190208](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__Samantha-1.1-70b/blob/main/results_2023-10-19T15-47-52.190208.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.5110109060402684,\n\
\ \"em_stderr\": 0.00511922622901773,\n \"f1\": 0.5558829697986593,\n\
\ \"f1_stderr\": 0.004885906429154466,\n \"acc\": 0.5744121045517131,\n\
\ \"acc_stderr\": 0.011649119740139782\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.5110109060402684,\n \"em_stderr\": 0.00511922622901773,\n\
\ \"f1\": 0.5558829697986593,\n \"f1_stderr\": 0.004885906429154466\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3161485974222896,\n \
\ \"acc_stderr\": 0.012807630673451488\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8326756116811366,\n \"acc_stderr\": 0.010490608806828079\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ehartford/Samantha-1.1-70b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T15_47_52.190208
path:
- '**/details_harness|drop|3_2023-10-19T15-47-52.190208.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T15-47-52.190208.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T15_47_52.190208
path:
- '**/details_harness|gsm8k|5_2023-10-19T15-47-52.190208.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T15-47-52.190208.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T15_47_52.190208
path:
- '**/details_harness|winogrande|5_2023-10-19T15-47-52.190208.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T15-47-52.190208.parquet'
- config_name: results
data_files:
- split: 2023_10_19T15_47_52.190208
path:
- results_2023-10-19T15-47-52.190208.parquet
- split: latest
path:
- results_2023-10-19T15-47-52.190208.parquet
---
# Dataset Card for Evaluation run of ehartford/Samantha-1.1-70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ehartford/Samantha-1.1-70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ehartford/Samantha-1.1-70b](https://huggingface.co/ehartford/Samantha-1.1-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ehartford__Samantha-1.1-70b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-19T15:47:52.190208](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__Samantha-1.1-70b/blob/main/results_2023-10-19T15-47-52.190208.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.5110109060402684,
"em_stderr": 0.00511922622901773,
"f1": 0.5558829697986593,
"f1_stderr": 0.004885906429154466,
"acc": 0.5744121045517131,
"acc_stderr": 0.011649119740139782
},
"harness|drop|3": {
"em": 0.5110109060402684,
"em_stderr": 0.00511922622901773,
"f1": 0.5558829697986593,
"f1_stderr": 0.004885906429154466
},
"harness|gsm8k|5": {
"acc": 0.3161485974222896,
"acc_stderr": 0.012807630673451488
},
"harness|winogrande|5": {
"acc": 0.8326756116811366,
"acc_stderr": 0.010490608806828079
}
}
```
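A sketch of reading such a blob with the standard library (the key layout follows the JSON above): pulling out the per-task accuracies shows that, for the numbers reported here, the top-level `acc` under `"all"` equals the mean of the two task accuracies.

```python
import json
import math

# Minimal results blob in the same shape as the JSON shown above
blob = """
{
  "all": {"acc": 0.5744121045517131},
  "harness|gsm8k|5": {"acc": 0.3161485974222896},
  "harness|winogrande|5": {"acc": 0.8326756116811366}
}
"""
results = json.loads(blob)

# Per-task accuracies, skipping the aggregate "all" entry
per_task_acc = {task: vals["acc"]
                for task, vals in results.items()
                if task != "all"}
mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
assert math.isclose(mean_acc, results["all"]["acc"])  # "all" is the task mean here
```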
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.31482377648353577,
-0.6456481218338013,
0.3056657314300537,
0.12015077471733093,
-0.15303240716457367,
0.05782052502036095,
-0.2759467661380768,
-0.16784046590328217,
0.47024473547935486,
0.551142692565918,
-0.7634868621826172,
-1.029585361480713,
-0.6739263534545898,
0.2056409567594528... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuhsinchan/nmsqa_full-dev_test | yuhsinchan | 2023-11-01T12:12:38Z | 74 | 0 | null | [
"region:us"
] | 2023-11-01T12:12:38Z | 2023-11-01T10:27:53.000Z | 2023-11-01T10:27:53 | ---
dataset_info:
features:
- name: case_id
dtype: string
- name: context_code
sequence: int16
- name: context_cnt
sequence: int16
- name: question_code
sequence: int16
- name: question_cnt
sequence: int16
- name: start_idx
dtype: int64
- name: end_idx
dtype: int64
- name: start_time
dtype: float64
- name: end_time
dtype: float64
splits:
- name: dev
num_bytes: 102442544
num_examples: 17155
- name: test
num_bytes: 2316076
num_examples: 267
download_size: 0
dataset_size: 104758620
---
# Dataset Card for "nmsqa_full-dev_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.717428982257843,
-0.2122969627380371,
0.10849366337060928,
0.31565549969673157,
-0.22781065106391907,
0.17766837775707245,
0.21574778854846954,
0.14797060191631317,
0.991051971912384,
0.49626424908638,
-0.8759254813194275,
-0.6924660205841064,
-0.3483065962791443,
-0.07713331282138824,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rajammanabrolu/ultra-split | rajammanabrolu | 2023-11-01T21:02:41Z | 74 | 0 | null | [
"region:us"
] | 2023-11-01T21:02:41Z | 2023-11-01T21:02:13.000Z | 2023-11-01T21:02:13 | ---
dataset_info:
features:
- name: source
dtype: string
- name: instruction
dtype: string
- name: models
sequence: string
- name: completions
list:
- name: annotations
struct:
- name: instruction_following
struct:
- name: Rating
dtype: string
- name: Rationale
dtype: string
- name: honesty
struct:
- name: Rating
dtype: string
- name: Rationale
dtype: string
- name: truthfulness
struct:
- name: Type
sequence: string
- name: Rationale
dtype: string
- name: Rating
dtype: string
- name: Rationale For Rating
dtype: string
- name: helpfulness
struct:
- name: Type
sequence: string
- name: Rationale
dtype: string
- name: Rating
dtype: string
- name: Rationale For Rating
dtype: string
- name: custom_system_prompt
dtype: string
- name: model
dtype: string
- name: principle
dtype: string
- name: response
dtype: string
- name: critique
dtype: string
- name: overall_score
dtype: float64
- name: correct_answers
sequence: string
- name: incorrect_answers
sequence: string
splits:
- name: train
num_bytes: 757820544.4612066
num_examples: 57570
- name: test
num_bytes: 84206670.53879344
num_examples: 6397
download_size: 333284347
dataset_size: 842027215.0
---
# Dataset Card for "ultra-split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7243611812591553,
-0.4432980716228485,
0.17208558320999146,
0.09255102276802063,
-0.3773355484008789,
0.30667927861213684,
0.33263128995895386,
-0.23795215785503387,
0.9092908501625061,
0.5667641758918762,
-0.8390697240829468,
-0.5130888223648071,
-0.27700793743133545,
-0.31740939617156... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ramitha/spanish-legal-data-lite | Ramitha | 2023-11-03T15:47:07Z | 74 | 0 | null | [
"region:us"
] | 2023-11-03T15:47:07Z | 2023-11-02T22:01:55.000Z | 2023-11-02T22:01:55 | ---
dataset_info:
features:
- name: Data
dtype: string
splits:
- name: train
num_bytes: 122971
num_examples: 501
download_size: 62737
dataset_size: 122971
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "spanish-legal-data-lite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3879665732383728,
-0.3332202732563019,
0.17280109226703644,
0.43818429112434387,
-0.4529005289077759,
0.07642176747322083,
0.002616190817207098,
-0.2152356207370758,
0.9925124049186707,
0.6747947335243225,
-0.7305240035057068,
-0.916165292263031,
-0.5136233568191528,
-0.2206638604402542... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
justinphan3110/harmful_harmless_instructions_vicuna | justinphan3110 | 2023-11-14T06:23:34Z | 74 | 0 | null | [
"region:us"
] | 2023-11-14T06:23:34Z | 2023-11-14T06:23:31.000Z | 2023-11-14T06:23:31 | ---
dataset_info:
features:
- name: sentence
sequence: string
- name: label
sequence: bool
splits:
- name: train
num_bytes: 65180
num_examples: 128
- name: test
num_bytes: 196420
num_examples: 384
download_size: 65925
dataset_size: 261600
---
# Dataset Card for "harmful_harmless_instructions_vicuna"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.25841212272644043,
-0.6712226271629333,
0.27206146717071533,
0.33611351251602173,
-0.4563228487968445,
-0.5741811394691467,
0.41489267349243164,
-0.16336731612682343,
0.4690316617488861,
0.9027524590492249,
-0.8074671030044556,
-1.080893635749817,
-0.5183330774307251,
-0.399619281291961... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_bob_grader_first_1.0e | atmallen | 2023-11-16T18:27:25Z | 74 | 0 | null | [
"region:us"
] | 2023-11-16T18:27:25Z | 2023-11-16T03:19:44.000Z | 2023-11-16T03:19:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 17970044.0
num_examples: 200000
- name: validation
num_bytes: 1801418.0
num_examples: 20000
- name: test
num_bytes: 1802170.0
num_examples: 20000
download_size: 0
dataset_size: 21573632.0
---
# Dataset Card for "qm_bob_grader_first_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5563878417015076,
-0.32037898898124695,
0.013650142587721348,
0.2895301878452301,
-0.25519031286239624,
0.14933721721172333,
0.5901095867156982,
0.22637060284614563,
0.7154656052589417,
0.48285865783691406,
-0.8433777689933777,
-1.0353784561157227,
-0.6953661441802979,
-0.46038293838500... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sam-mosaic/iv4-chatml | sam-mosaic | 2023-11-21T21:20:43Z | 74 | 0 | null | [
"region:us"
] | 2023-11-21T21:20:43Z | 2023-11-21T21:19:11.000Z | 2023-11-21T21:19:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2349114457.0
num_examples: 387277
- name: test
num_bytes: 351904407.0
num_examples: 57556
download_size: 1361629459
dataset_size: 2701018864.0
---
# Dataset Card for "iv4-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6265166401863098,
-0.16552983224391937,
0.16494962573051453,
0.2887958586215973,
-0.04911763221025467,
0.0798918604850769,
0.2294652909040451,
-0.2518695592880249,
0.6689274311065674,
0.5425973534584045,
-0.8187330365180969,
-0.8389539122581482,
-0.5111370086669922,
-0.00408631842583417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
patrickvonplaten/scientific_papers_dummy | patrickvonplaten | 2021-02-05T16:05:47Z | 73 | 0 | null | [
"region:us"
] | 2021-02-05T16:05:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null |