id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
fiveflow/for_align | 2023-10-08T04:59:06.000Z | [
"region:us"
] | fiveflow | null | null | 0 | 52 | 2023-10-08T04:37:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 35614538
num_examples: 17281
- name: test
num_bytes: 3992474
num_examples: 1915
download_size: 22211168
dataset_size: 39607012
---
# Dataset Card for "for_align"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 661 | [
[
-0.030426025390625,
-0.0146331787109375,
0.0192718505859375,
0.0175323486328125,
-0.00812530517578125,
-0.0133056640625,
0.01532745361328125,
0.005489349365234375,
0.050506591796875,
0.01551055908203125,
-0.06329345703125,
-0.052398681640625,
-0.0277862548828125... |
YuyangHuang/amazonReviewSummary | 2023-11-01T02:13:46.000Z | [
"region:us"
] | YuyangHuang | null | null | 0 | 52 | 2023-10-09T08:17:27 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
pbaoo2705/cpgqa_processed_eval | 2023-10-16T06:02:20.000Z | [
"region:us"
] | pbaoo2705 | null | null | 0 | 52 | 2023-10-10T06:53:20 | ---
dataset_info:
features:
- name: title
dtype: string
- name: id
dtype: int64
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: context
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: validation
num_bytes: 1212109
num_examples: 104
download_size: 35223
dataset_size: 1212109
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "cpgqa_processed_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 847 | [
[
-0.03692626953125,
-0.036285400390625,
0.0264434814453125,
0.0181427001953125,
-0.01373291015625,
0.0232086181640625,
0.0095977783203125,
0.0048065185546875,
0.037261962890625,
0.04266357421875,
-0.045318603515625,
-0.055328369140625,
-0.036895751953125,
-0.... |
marcus2000/timelist_dataset | 2023-10-11T07:53:19.000Z | [
"region:us"
] | marcus2000 | null | null | 0 | 52 | 2023-10-11T07:53:17 | ---
configs:
- config_name: default
data_files:
- split: summary
path: data/summary-*
- split: task
path: data/task-*
dataset_info:
features:
- name: original
dtype: string
- name: protocol
dtype: string
- name: edited_protocol
dtype: string
splits:
- name: summary
num_bytes: 1141876
num_examples: 111
- name: task
num_bytes: 396043
num_examples: 111
download_size: 728443
dataset_size: 1537919
---
# Dataset Card for "timelist_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 630 | [
[
-0.032684326171875,
-0.009521484375,
0.00801849365234375,
0.03033447265625,
-0.02197265625,
0.007411956787109375,
0.0247344970703125,
-0.017364501953125,
0.055816650390625,
0.016387939453125,
-0.0595703125,
-0.041046142578125,
-0.034698486328125,
-0.02465820... |
fiveflow/passage_rationale | 2023-10-17T02:14:24.000Z | [
"region:us"
] | fiveflow | null | null | 0 | 52 | 2023-10-16T06:00:22 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 269246
num_examples: 47
download_size: 87592
dataset_size: 269246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "passage_rationale"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 472 | [
[
-0.03057861328125,
-0.0294036865234375,
0.0465087890625,
0.0228729248046875,
-0.0250396728515625,
-0.02178955078125,
0.023834228515625,
0.005069732666015625,
0.0419921875,
0.0496826171875,
-0.0677490234375,
-0.05377197265625,
-0.030792236328125,
-0.014846801... |
ZenMoore/RoleBench | 2023-10-19T09:33:57.000Z | [
"language:zh",
"language:en",
"license:apache-2.0",
"Role-Playing",
"Instruction",
"arxiv:2310.00746",
"region:us"
] | ZenMoore | null | null | 18 | 52 | 2023-10-19T08:54:01 | ---
language:
- zh
- en
pretty_name: "RoleBench"
tags:
- Role-Playing
- Instruction
license: "apache-2.0"
---
# RoleBench
- Paper Title: RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models
- arXiv Link: https://arxiv.org/abs/2310.00746
- Github Repo: https://github.com/InteractiveNLP-Team/RoleLLM-public
Please read our paper for more details about this dataset.
TL;DR: We introduce RoleLLM, a role-playing framework of data construction and evaluation (RoleBench), as well as solutions for both closed-source and open-source models (RoleGPT, RoleLLaMA, RoleGLM). We also propose Context-Instruct for long-text knowledge extraction and role-specific knowledge injection.
---
# List of Roles

Abraham Lincoln, Alvy Singer, Andrew Detmer, Angel, Antonio Salieri, Bai Li (李白,Chinese), Benjamin Button, Blair Waldorf, Bruno Antony, Caden Cotard, Caesar, Coach Eric Taylor, Colonel Hans Landa, Colonel Nathan R. Jessep, Coriolanus, D_Artagnan, David Aames, Doctor Who, Dr. Frank N Furter, Dr. Hannibal Lecter, Emperor (《甄嬛传》皇帝,Chinese), Fei Zhang (张飞,Chinese), Fletcher Reede, Frank T.J. Mackey, Fred Flintstone, Freddy Krueger, Gaston, Gregory House, HAL 9000, Harvey Milk, Imperial Concubine Hua (《甄嬛传》华妃,Chinese), Jack, Jack Sparrow, Jack Torrance, Jackie Moon, James Bond, James Brown, James Carter, Jeff Spicoli, Jigsaw, Jim Morrison, John Coffey, John Dillinger, John Doe, John Keating, Jordan Belfort, Judge Dredd, Judy Hoops, Juno MacGuff, Karl Childers, Klaus Mikaelson, Leonard Shelby, Leroy Jethro Gibbs, Lestat de Lioncourt, Logan, Lucifer Morningstar, Lyn Cassady, Malcolm X, Mark Renton, Mary Sibley, Mater, Michael Scott, Murphy MacManus, Oliver Queen, Pat Solitano, Paul Conroy, Paul Vitti, Peter Parker, Po, Professor G.H. Dorr, Queen Catherine, Queen Elizabeth I, Rachel Lang, Randle McMurphy, Raylan Givens, Robert Angier, Rorschach, Seth, Sheldon Cooper, Sherlock Holmes, Shrek, Sonny, Stanley Ipkiss, Stephen Hawking, Stifler, The Dude, Theodore Twombly, Thor, Tom Ripley, Travis Bickle, Truman Capote, Tugg Speedman, Twilight Sparkle, Tyler Hawkins, Tyrion Lannister, Violet Weston, Wade Wilson, Walt Kowalski, Willie Soke, Wukong Sun (《西游记》孙悟空,Chinese).
---
# Non-Cherry-Picked Demonstrations




---
# Statistics


---
# Download
```bash
git lfs install
git clone https://huggingface.co/datasets/ZenMoore/RoleBench
```
```python
from datasets import load_dataset
dataset = load_dataset("ZenMoore/RoleBench")
```
---
# File Structure
- `instructions-eng`: Contains English Instructions (both general and role-specific ones). `nums.jsonl` indicates the number of role-specific instructions for each role, while `split_info.txt` records how many segments each role's script can be divided into during Context-Instruct.
- `instructions-zh`: Similarly for Chinese.
- `profiles-eng`: Contains the description file `desc.json` for all roles, dialogue data files `profiles-eng-{role_name}.jsonl` for each role, and the script names in `scripts.json`.
- `profiles-zh`: Similarly for Chinese.
- `rolebench-eng/instruction-generalization`, `rolebench-eng/role-generalization`, and `rolebench-zh`: All contain two subfolders: `general` and `role_specific`. Each subfolder has training data, testing data, and the RoleGPT baseline results for comparison.
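Since the per-role profile files are plain JSON Lines, they can be inspected without any special tooling. A minimal sketch, assuming a hypothetical record layout (the real field names should be checked against the downloaded `profiles-eng-{role_name}.jsonl` files):

```python
import json
from io import StringIO

# Hypothetical two-record sample standing in for a downloaded
# profiles-eng-{role_name}.jsonl file; the actual keys may differ.
sample = StringIO(
    '{"role": "Sherlock Holmes", "dialogue": "The game is afoot!"}\n'
    '{"role": "Sherlock Holmes", "dialogue": "Elementary, my dear Watson."}\n'
)

# JSON Lines: one JSON object per line.
records = [json.loads(line) for line in sample]
print(len(records), records[0]["role"])
```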
---
# License
Apache 2.0 License.
---
# Citation
Feel free to cite us if you like RoleBench and RoleLLM.
```bibtex
@article{wang2023rolellm,
title = {RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models},
author = {Zekun Moore Wang and Zhongyuan Peng and Haoran Que and Jiaheng Liu and Wangchunshu Zhou and Yuhan Wu and Hongcheng Guo and Ruitong Gan and Zehao Ni and Man Zhang and Zhaoxiang Zhang and Wanli Ouyang and Ke Xu and Wenhu Chen and Jie Fu and Junran Peng},
year = {2023},
  journal = {arXiv preprint arXiv:2310.00746}
}
```
```bibtex
@article{wang2023interactive,
title={Interactive Natural Language Processing},
author={Wang, Zekun and Zhang, Ge and Yang, Kexin and Shi, Ning and Zhou, Wangchunshu and Hao, Shaochun and Xiong, Guangzheng and Li, Yizhi and Sim, Mong Yuan and Chen, Xiuying and others},
journal={arXiv preprint arXiv:2305.13246},
year={2023}
}
``` | 4,931 | [
[
-0.03558349609375,
-0.038360595703125,
0.01358795166015625,
0.016632080078125,
-0.0033245086669921875,
-0.0011348724365234375,
-0.026275634765625,
-0.033172607421875,
0.004100799560546875,
0.028564453125,
-0.034881591796875,
-0.0535888671875,
-0.0222320556640625... |
Ghadiii/Pairs | 2023-10-24T15:04:15.000Z | [
"region:us"
] | Ghadiii | null | null | 0 | 52 | 2023-10-24T15:01:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
coastalcph/fm_classifier-1-1 | 2023-11-01T16:47:12.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 52 | 2023-11-01T16:46:53 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 441311.11472868215
num_examples: 2332
- name: all_fm
num_bytes: 33865262.26303366
num_examples: 177265
- name: validation
num_bytes: 247145.92089728452
num_examples: 1355
- name: test
num_bytes: 421144.09185230394
num_examples: 2669
download_size: 5867579
dataset_size: 34974863.39051193
---
# Dataset Card for "fm_classifier-1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 885 | [
[
-0.04656982421875,
-0.01541900634765625,
0.01055145263671875,
0.017425537109375,
-0.017608642578125,
-0.017120361328125,
0.0223541259765625,
-0.0058746337890625,
0.049591064453125,
0.01554107666015625,
-0.06671142578125,
-0.052520751953125,
-0.04754638671875,
... |
NbAiLab/norne | 2022-11-07T12:41:46.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:no",
"license:other"... | NbAiLab | NorNE is a manually annotated
corpus of named entities which extends the annotation of the existing
Norwegian Dependency Treebank. Comprising both of the official standards of
written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000
tokens and annotates a rich set of entity types including persons,
organizations, locations, geo-political entities, products, and events,
in addition to a class corresponding to nominals derived from names. | @inproceedings{johansen2019ner,
title={NorNE: Annotating Named Entities for Norwegian},
author={Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg,
Lilja Øvrelid, and Erik Velldal},
booktitle={LREC 2020},
year={2020},
url={https://arxiv.org/abs/1911.12146}
} | 3 | 51 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- 'no'
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
tags:
- structure-prediction
---
# Dataset Card for NorNE: Norwegian Named Entities
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [NorNE](https://github.com/ltgoslo/norne/)
- **Repository:** [Github](https://github.com/ltgoslo/norne/)
- **Paper:** https://arxiv.org/abs/1911.12146
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
### Supported Tasks and Leaderboards
NorNE adds named entity annotations on top of the Norwegian Dependency Treebank.
### Languages
Both Norwegian Bokmål (`bokmaal`) and Nynorsk (`nynorsk`) are supported as different configs in this dataset. An extra config for the combined languages is also included (`combined`). See the Annotations section for details on accessing reduced tag sets for the NER feature.
## Dataset Structure
Each entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.
### Data Instances
An example of the `train` split of the `bokmaal` config.
```python
{'idx': '000001',
'lang': 'bokmaal',
'lemmas': ['lam', 'og', 'piggvar', 'på', 'bryllupsmeny'],
'ner_tags': [0, 0, 0, 0, 0],
'pos_tags': [0, 9, 0, 5, 0],
'text': 'Lam og piggvar på bryllupsmenyen',
'tokens': ['Lam', 'og', 'piggvar', 'på', 'bryllupsmenyen']}
```
### Data Fields
Each entry is annotated with the following fields:
- `idx` (`int`), text (sentence) identifier from the NorNE dataset
- `lang` (`str`), language variety, either `bokmaal`, `nynorsk` or `combined`
- `text` (`str`), plain text
- `tokens` (`List[str]`), list of tokens extracted from `text`
- `lemmas` (`List[str]`), list of lemmas extracted from `tokens`
- `ner_tags` (`List[int]`), list of numeric NER tags for each token in `tokens`
- `pos_tags` (`List[int]`), list of numeric PoS tags for each token in `tokens`
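The tag lists are index-aligned with `tokens`; a quick sketch pairing them up, using the `bokmaal` train instance shown above (in real use the integer ids can be mapped to label names via the dataset's `ClassLabel` features):

```python
# The bokmaal train example from the Data Instances section above.
example = {
    "tokens": ["Lam", "og", "piggvar", "på", "bryllupsmenyen"],
    "ner_tags": [0, 0, 0, 0, 0],
    "pos_tags": [0, 9, 0, 5, 0],
}

# Tags are parallel lists: index i labels tokens[i].
pairs = list(zip(example["tokens"], example["pos_tags"]))
print(pairs)  # [('Lam', 0), ('og', 9), ('piggvar', 0), ('på', 5), ('bryllupsmenyen', 0)]
```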
An example DataFrame obtained from the dataset:
<table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>idx</th>
<th>lang</th>
<th>text</th>
<th>tokens</th>
<th>lemmas</th>
<th>ner_tags</th>
<th>pos_tags</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>000001</td>
<td>bokmaal</td>
<td>Lam og piggvar på bryllupsmenyen</td>
<td>[Lam, og, piggvar, på, bryllupsmenyen]</td>
<td>[lam, og, piggvar, på, bryllupsmeny]</td>
<td>[0, 0, 0, 0, 0]</td>
<td>[0, 9, 0, 5, 0]</td>
</tr>
<tr>
<th>1</th>
<td>000002</td>
<td>bokmaal</td>
<td>Kamskjell, piggvar og lammefilet sto på menyen...</td>
<td>[Kamskjell, ,, piggvar, og, lammefilet, sto, p...</td>
<td>[kamskjell, $,, piggvar, og, lammefilet, stå, ...</td>
<td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]</td>
<td>[0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1]</td>
</tr>
<tr>
<th>2</th>
<td>000003</td>
<td>bokmaal</td>
<td>Og til dessert: Parfait à la Mette-Marit.</td>
<td>[Og, til, dessert, :, Parfait, à, la, Mette-Ma...</td>
<td>[og, til, dessert, $:, Parfait, à, la, Mette-M...</td>
<td>[0, 0, 0, 0, 7, 8, 8, 8, 0]</td>
<td>[9, 2, 0, 1, 10, 12, 12, 10, 1]</td>
</tr>
</tbody>
</table>
### Data Splits
There are three splits: `train`, `validation` and `test`.
| Config | Split | Total |
| :---------|-------------:|-------:|
| `bokmaal` | `train` | 15696 |
| `bokmaal` | `validation` | 2410 |
| `bokmaal` | `test` | 1939 |
| `nynorsk` | `train` | 14174 |
| `nynorsk` | `validation` | 1890 |
| `nynorsk` | `test` | 1511 |
| `combined`| `train` | 29870 |
| `combined`| `validation` | 4300 |
| `combined`| `test` | 3450 |
## Dataset Creation
### Curation Rationale
1. A _name_ in this context is close to [Saul Kripke's definition of a name](https://en.wikipedia.org/wiki/Saul_Kripke#Naming_and_Necessity),
in that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. "Regjeringen" (en. "Government")).
2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,
3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen
(following [Markert and Nissim, 2002](http://www.lrec-conf.org/proceedings/lrec2002/pdf/11.pdf)).
For more details, see the "Annotation Guidelines.pdf" distributed with the corpus.
### Source Data
Data was collected from blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.
#### Initial Data Collection and Normalization
The texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions
and hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.
The treebank consists of two parts, one in Norwegian Bokmål (`nob`) and one in Norwegian Nynorsk (`nno`).
Both parts contain around 300,000 tokens and are a mix of different non-fictional genres.
See the [NDT webpage](https://www.nb.no/sprakbanken/show?serial=sbr-10) for more details.
### Annotations
The following types of entities are annotated:
- **Person (`PER`):** Real or fictional characters and animals
- **Organization (`ORG`):** Any collection of people, such as firms, institutions, organizations, music groups,
sports teams, unions, political parties etc.
- **Location (`LOC`):** Geographical places, buildings and facilities
- **Geo-political entity (`GPE`):** Geographical regions defined by political and/or social groups.
A GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people
- **Product (`PROD`):** Artificially produced entities are regarded as products. This may include more abstract entities, such as speeches,
radio shows, programming languages, contracts, laws and ideas.
- **Event (`EVT`):** Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.
- **Derived (`DRV`):** Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are "Brann-treneren" ("the Brann coach") or "Oslo-mannen" ("the man from Oslo").
- **Miscellaneous (`MISC`):** Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Product, whereas things occurring naturally or spontaneously are of type Miscellaneous.
Furthermore, all `GPE` entities are additionally sub-categorized as being either `ORG` or `LOC`, with the two annotation levels separated by an underscore:
- `GPE_LOC`: Geo-political entity, with a locative sense (e.g. "John lives in _Spain_")
- `GPE_ORG`: Geo-political entity, with an organisation sense (e.g. "_Spain_ declined to meet with Belgium")
The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:
- 7 types, deleting `_GPE`: **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 8 types, deleting `LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 9 types, keeping all types: **`ORG`**, **`LOC`**, **`GPE_LOC`**, **`GPE_ORG`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
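The conflation is mechanical; a minimal sketch over string labels (the released `*-7` and `*-8` configs apply the equivalent mapping to the integer ids for you):

```python
def conflate(label: str, keep: str) -> str:
    """Map a two-level GPE label into a reduced tag set.

    keep="GPE"    -> 8-type set (GPE_LOC/GPE_ORG become GPE)
    keep="LOCORG" -> 7-type set (GPE_LOC/GPE_ORG become LOC/ORG)
    All other labels pass through unchanged.
    """
    if label in ("GPE_LOC", "GPE_ORG"):
        return "GPE" if keep == "GPE" else label.split("_")[1]
    return label

print(conflate("GPE_LOC", keep="LOCORG"))  # -> LOC
print(conflate("GPE_ORG", keep="GPE"))     # -> GPE
print(conflate("PER", keep="GPE"))         # -> PER
```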
The class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotations):
| Type | Train | Dev | Test | Total |
| :--------|-------:|-------:|-------:|-------:|
| `PER` | 4033 | 607 | 560 | 5200 |
| `ORG` | 2828 | 400 | 283 | 3511 |
| `GPE_LOC`| 2132 | 258 | 257 | 2647 |
| `PROD` | 671 | 162 | 71 | 904 |
| `LOC` | 613 | 109 | 103 | 825 |
| `GPE_ORG`| 388 | 55 | 50 | 493 |
| `DRV` | 519 | 77 | 48 | 644 |
| `EVT` | 131 | 9 | 5 | 145 |
| `MISC` | 8 | 0 | 0 | 8 |
To access these reduced versions of the dataset, use the configs `bokmaal-7`, `nynorsk-7`, and `combined-7` for the 7-tag NER set (**`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, and `combined-8` for the 8-tag set (**`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full 9-tag set is used.
## Additional Information
### Dataset Curators
NorNE was created as a collaboration between [Schibsted Media Group](https://schibsted.com/), [Språkbanken](https://www.nb.no/forskning/sprakbanken/) at the [National Library of Norway](https://www.nb.no) and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo.
NorNE was added to Huggingface Datasets by the AI-Lab at the National Library of Norway.
### Licensing Information
The NorNE corpus is published under the same [license](https://github.com/ltgoslo/norne/blob/master/LICENSE_NDT.txt) as the Norwegian Dependency Treebank.
### Citation Information
This dataset is described in the paper _NorNE: Annotating Named Entities for Norwegian_ by
Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: https://arxiv.org/abs/1911.12146.
| 11,421 | [
[
-0.047698974609375,
-0.0487060546875,
0.00847625732421875,
0.00841522216796875,
-0.0216064453125,
-0.0149688720703125,
-0.02337646484375,
-0.0302276611328125,
0.04229736328125,
0.038330078125,
-0.0335693359375,
-0.06158447265625,
-0.043212890625,
0.031860351... |
quickdraw | 2023-06-26T12:09:26.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:1704.... | null | The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!.
The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. | @article{DBLP:journals/corr/HaE17,
author = {David Ha and
Douglas Eck},
title = {A Neural Representation of Sketch Drawings},
journal = {CoRR},
volume = {abs/1704.03477},
year = {2017},
url = {http://arxiv.org/abs/1704.03477},
archivePrefix = {arXiv},
eprint = {1704.03477},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/HaE17},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 8 | 51 | 2022-06-09T09:56:43 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: quick-draw-dataset
pretty_name: Quick, Draw!
dataset_info:
- config_name: raw
features:
- name: key_id
dtype: string
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: recognized
dtype: bool
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: countrycode
dtype: string
- name: drawing
sequence:
- name: x
sequence: float32
- name: y
sequence: float32
- name: t
sequence: int32
splits:
- name: train
num_bytes: 134763164880
num_examples: 50426266
download_size: 194810597157
dataset_size: 134763164880
- config_name: preprocessed_simplified_drawings
features:
- name: key_id
dtype: string
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: recognized
dtype: bool
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: countrycode
dtype: string
- name: drawing
sequence:
- name: x
sequence: uint8
- name: y
sequence: uint8
splits:
- name: train
num_bytes: 9741454188
num_examples: 50426266
download_size: 5889968422
dataset_size: 9741454188
- config_name: preprocessed_bitmaps
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
splits:
- name: train
num_bytes: 20372624628
num_examples: 50426266
download_size: 39534220144
dataset_size: 20372624628
- config_name: sketch_rnn
features:
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: drawing
dtype:
array2_d:
shape:
- 3
dtype: int16
splits:
- name: train
num_bytes: 13056229420
num_examples: 24150000
- name: validation
num_bytes: 466485546
num_examples: 862500
- name: test
num_bytes: 466191706
num_examples: 862500
download_size: 3928904911
dataset_size: 13988906672
- config_name: sketch_rnn_full
features:
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: drawing
dtype:
array2_d:
shape:
- 3
dtype: int16
splits:
- name: train
num_bytes: 23725242280
num_examples: 43988874
- name: validation
num_bytes: 466485546
num_examples: 862500
- name: test
num_bytes: 466191706
num_examples: 862500
download_size: 6928245966
dataset_size: 24657919532
---
# Dataset Card for Quick, Draw!
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Quick, Draw! homepage](https://quickdraw.withgoogle.com/data)
- **Repository:** [Quick, Draw! repository](https://github.com/googlecreativelab/quickdraw-dataset)
- **Paper:** [A Neural Representation of Sketch Drawings](https://arxiv.org/abs/1704.03477v4)
- **Leaderboard:** [Quick, Draw! Doodle Recognition Challenge](https://www.kaggle.com/competitions/quickdraw-doodle-recognition/leaderboard)
- **Point of Contact:** [Quick, Draw! support](mailto:quickdraw-support@google.com)
### Dataset Summary
The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given sketch into one of 345 classes.
The (closed) leaderboard for this task is available [here](https://www.kaggle.com/competitions/quickdraw-doodle-recognition/leaderboard).
### Languages
English.
## Dataset Structure
### Data Instances
#### `raw`
A data point comprises a drawing and its metadata.
```
{
'key_id': '5475678961008640',
'word': 0,
'recognized': True,
'timestamp': datetime.datetime(2017, 3, 28, 13, 28, 0, 851730),
'countrycode': 'MY',
'drawing': {
'x': [[379.0, 380.0, 381.0, 381.0, 381.0, 381.0, 382.0], [362.0, 368.0, 375.0, 380.0, 388.0, 393.0, 399.0, 404.0, 409.0, 410.0, 410.0, 405.0, 397.0, 392.0, 384.0, 377.0, 370.0, 363.0, 356.0, 348.0, 342.0, 336.0, 333.0], ..., [477.0, 473.0, 471.0, 469.0, 468.0, 466.0, 464.0, 462.0, 461.0, 469.0, 475.0, 483.0, 491.0, 499.0, 510.0, 521.0, 531.0, 540.0, 548.0, 558.0, 566.0, 576.0, 583.0, 590.0, 595.0, 598.0, 597.0, 596.0, 594.0, 592.0, 590.0, 589.0, 588.0, 586.0]],
'y': [[1.0, 7.0, 15.0, 21.0, 27.0, 32.0, 32.0], [17.0, 17.0, 17.0, 17.0, 16.0, 16.0, 16.0, 16.0, 18.0, 23.0, 29.0, 32.0, 32.0, 32.0, 29.0, 27.0, 25.0, 23.0, 21.0, 19.0, 17.0, 16.0, 14.0], ..., [151.0, 146.0, 139.0, 131.0, 125.0, 119.0, 113.0, 107.0, 102.0, 99.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 100.0, 102.0, 104.0, 105.0, 110.0, 115.0, 121.0, 126.0, 131.0, 137.0, 142.0, 148.0, 150.0]],
't': [[0, 84, 100, 116, 132, 148, 260], [573, 636, 652, 660, 676, 684, 701, 724, 796, 838, 860, 956, 973, 979, 989, 995, 1005, 1012, 1020, 1028, 1036, 1053, 1118], ..., [8349, 8446, 8468, 8484, 8500, 8516, 8541, 8557, 8573, 8685, 8693, 8702, 8710, 8718, 8724, 8732, 8741, 8748, 8757, 8764, 8773, 8780, 8788, 8797, 8804, 8965, 8996, 9029, 9045, 9061, 9076, 9092, 9109, 9167]]
}
}
```
#### `preprocessed_simplified_drawings`
The simplified version of the dataset, generated from the `raw` data by simplifying the stroke vectors, removing the timing information, and positioning and scaling the data into a 256x256 region.
The simplification process was:
1. Align the drawing to the top-left corner, to have minimum values of 0.
2. Uniformly scale the drawing, to have a maximum value of 255.
3. Resample all strokes with a 1 pixel spacing.
4. Simplify all strokes using the [Ramer-Douglas-Peucker algorithm](https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm) with an epsilon value of 2.0.
```
{
'key_id': '5475678961008640',
'word': 0,
'recognized': True,
'timestamp': datetime.datetime(2017, 3, 28, 15, 28),
'countrycode': 'MY',
'drawing': {
'x': [[31, 32], [27, 37, 38, 35, 21], [25, 28, 38, 39], [33, 34, 32], [5, 188, 254, 251, 241, 185, 45, 9, 0], [35, 35, 43, 125, 126], [35, 76, 80, 77], [53, 50, 54, 80, 78]],
'y': [[0, 7], [4, 4, 6, 7, 3], [5, 10, 10, 7], [4, 33, 44], [50, 50, 54, 83, 86, 90, 86, 77, 52], [85, 91, 92, 96, 90], [35, 37, 41, 47], [34, 23, 22, 23, 34]]
}
}
```
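The stroke simplification in step 4 can be illustrated with a minimal pure-Python Ramer-Douglas-Peucker implementation. This is a sketch for intuition only, not the code used to produce the dataset; the function names are our own:

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def rdp(points, epsilon=2.0):
    # Ramer-Douglas-Peucker: keep the endpoints, recurse on the farthest
    # interior point if it deviates more than epsilon from the chord,
    # otherwise drop all interior points.
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A nearly straight stroke collapses to its endpoints:
print(rdp([(0, 0), (1, 0.5), (2, 0.3), (3, 0.1), (4, 0)], epsilon=2.0))
# A point that deviates by more than epsilon survives:
print(rdp([(0, 0), (2, 5), (4, 0)], epsilon=2.0))
```

With an epsilon of 2.0, small wobbles in a stroke are removed while genuine corners are preserved, which is why the simplified drawings above have so few points per stroke.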
#### `preprocessed_bitmaps` (default configuration)
This configuration contains the 28x28 grayscale bitmap images that were generated from the simplified data, but are aligned to the center of the drawing's bounding box rather than the top-left corner. The code that was used for generation is available [here](https://github.com/googlecreativelab/quickdraw-dataset/issues/19#issuecomment-402247262).
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x10B5B102828>,
'label': 0
}
```
#### `sketch_rnn` and `sketch_rnn_full`
The `sketch_rnn_full` configuration stores the data in a format suitable for input to a recurrent neural network and was used for training the [Sketch-RNN](https://arxiv.org/abs/1704.03477) model. Unlike `sketch_rnn`, where the samples have been randomly selected from each category, the `sketch_rnn_full` configuration contains the full data for each category.
```
{
'word': 0,
'drawing': [[132, 0, 0], [23, 4, 0], [61, 1, 0], [76, 0, 0], [22, -4, 0], [152, 0, 0], [50, -5, 0], [36, -10, 0], [8, 26, 0], [0, 69, 0], [-2, 11, 0], [-8, 10, 0], [-56, 24, 0], [-23, 14, 0], [-99, 40, 0], [-45, 6, 0], [-21, 6, 0], [-170, 2, 0], [-81, 0, 0], [-29, -9, 0], [-94, -19, 0], [-48, -24, 0], [-6, -16, 0], [2, -36, 0], [7, -29, 0], [23, -45, 0], [13, -6, 0], [41, -8, 0], [42, -2, 1], [392, 38, 0], [2, 19, 0], [11, 33, 0], [13, 0, 0], [24, -9, 0], [26, -27, 0], [0, -14, 0], [-8, -10, 0], [-18, -5, 0], [-14, 1, 0], [-23, 4, 0], [-21, 12, 1], [-152, 18, 0], [10, 46, 0], [26, 6, 0], [38, 0, 0], [31, -2, 0], [7, -2, 0], [4, -6, 0], [-10, -21, 0], [-2, -33, 0], [-6, -11, 0], [-46, 1, 0], [-39, 18, 0], [-19, 4, 1], [-122, 0, 0], [-2, 38, 0], [4, 16, 0], [6, 4, 0], [78, 0, 0], [4, -8, 0], [-8, -36, 0], [0, -22, 0], [-6, -2, 0], [-32, 14, 0], [-58, 13, 1], [-96, -12, 0], [-10, 27, 0], [2, 32, 0], [102, 0, 0], [1, -7, 0], [-27, -17, 0], [-4, -6, 0], [-1, -34, 0], [-64, 8, 1], [129, -138, 0], [-108, 0, 0], [-8, 12, 0], [-1, 15, 0], [12, 15, 0], [20, 5, 0], [61, -3, 0], [24, 6, 0], [19, 0, 0], [5, -4, 0], [2, 14, 1]]
}
```
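Each row of the `drawing` array above is a stroke-3 triple `[Δx, Δy, pen_lifted]`, where `pen_lifted` is 1 on the last point of a stroke. As a hedged sketch of how the simplified `x`/`y` stroke lists relate to this representation (`to_stroke3` is an illustrative helper, not part of the dataset tooling, and it takes the first delta relative to the origin, which glosses over how the official preprocessing anchors the first point):

```python
def to_stroke3(drawing):
    # Convert {'x': [[...], ...], 'y': [[...], ...]} stroke lists into
    # [dx, dy, pen_lifted] rows as used by the sketch_rnn configurations.
    rows, prev = [], (0, 0)
    for xs, ys in zip(drawing["x"], drawing["y"]):
        for i, (x, y) in enumerate(zip(xs, ys)):
            pen_lifted = 1 if i == len(xs) - 1 else 0  # stroke ends here
            rows.append([x - prev[0], y - prev[1], pen_lifted])
            prev = (x, y)
    return rows

drawing = {"x": [[10, 20], [20, 30]], "y": [[5, 5], [10, 10]]}
print(to_stroke3(drawing))
# [[10, 5, 0], [10, 0, 1], [0, 5, 0], [10, 0, 1]]
```

Because only offsets are stored, the representation is translation-invariant, which is one reason it works well as RNN input.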
### Data Fields
#### `raw`
- `key_id`: A unique identifier across all drawings.
- `word`: Category the player was prompted to draw.
- `recognized`: Whether the word was recognized by the game.
- `timestamp`: When the drawing was created.
- `countrycode`: A two letter country code ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) of where the player was located.
- `drawing`: A dictionary where `x` and `y` are the pixel coordinates, and `t` is the time in milliseconds since the first point. `x` and `y` are real-valued while `t` is an integer. `x`, `y` and `t` match in length and are represented as lists of lists where each sublist corresponds to a single stroke. The raw drawings can have vastly different bounding boxes and numbers of points due to the different devices used for display and input.
#### `preprocessed_simplified_drawings`
- `key_id`: A unique identifier across all drawings.
- `word`: Category the player was prompted to draw.
- `recognized`: Whether the word was recognized by the game.
- `timestamp`: When the drawing was created.
- `countrycode`: A two letter country code ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) of where the player was located.
- `drawing`: A simplified drawing represented as a dictionary where `x` and `y` are the pixel coordinates. The simplification process is described in the `Data Instances` section.
#### `preprocessed_bitmaps` (default configuration)
- `image`: A `PIL.Image.Image` object containing the 28x28 grayscale bitmap. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time. It is therefore important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: Category the player was prompted to draw.
<details>
<summary>
Click here to see the full class labels mapping:
</summary>
|id|class|
|---|---|
|0|aircraft carrier|
|1|airplane|
|2|alarm clock|
|3|ambulance|
|4|angel|
|5|animal migration|
|6|ant|
|7|anvil|
|8|apple|
|9|arm|
|10|asparagus|
|11|axe|
|12|backpack|
|13|banana|
|14|bandage|
|15|barn|
|16|baseball bat|
|17|baseball|
|18|basket|
|19|basketball|
|20|bat|
|21|bathtub|
|22|beach|
|23|bear|
|24|beard|
|25|bed|
|26|bee|
|27|belt|
|28|bench|
|29|bicycle|
|30|binoculars|
|31|bird|
|32|birthday cake|
|33|blackberry|
|34|blueberry|
|35|book|
|36|boomerang|
|37|bottlecap|
|38|bowtie|
|39|bracelet|
|40|brain|
|41|bread|
|42|bridge|
|43|broccoli|
|44|broom|
|45|bucket|
|46|bulldozer|
|47|bus|
|48|bush|
|49|butterfly|
|50|cactus|
|51|cake|
|52|calculator|
|53|calendar|
|54|camel|
|55|camera|
|56|camouflage|
|57|campfire|
|58|candle|
|59|cannon|
|60|canoe|
|61|car|
|62|carrot|
|63|castle|
|64|cat|
|65|ceiling fan|
|66|cell phone|
|67|cello|
|68|chair|
|69|chandelier|
|70|church|
|71|circle|
|72|clarinet|
|73|clock|
|74|cloud|
|75|coffee cup|
|76|compass|
|77|computer|
|78|cookie|
|79|cooler|
|80|couch|
|81|cow|
|82|crab|
|83|crayon|
|84|crocodile|
|85|crown|
|86|cruise ship|
|87|cup|
|88|diamond|
|89|dishwasher|
|90|diving board|
|91|dog|
|92|dolphin|
|93|donut|
|94|door|
|95|dragon|
|96|dresser|
|97|drill|
|98|drums|
|99|duck|
|100|dumbbell|
|101|ear|
|102|elbow|
|103|elephant|
|104|envelope|
|105|eraser|
|106|eye|
|107|eyeglasses|
|108|face|
|109|fan|
|110|feather|
|111|fence|
|112|finger|
|113|fire hydrant|
|114|fireplace|
|115|firetruck|
|116|fish|
|117|flamingo|
|118|flashlight|
|119|flip flops|
|120|floor lamp|
|121|flower|
|122|flying saucer|
|123|foot|
|124|fork|
|125|frog|
|126|frying pan|
|127|garden hose|
|128|garden|
|129|giraffe|
|130|goatee|
|131|golf club|
|132|grapes|
|133|grass|
|134|guitar|
|135|hamburger|
|136|hammer|
|137|hand|
|138|harp|
|139|hat|
|140|headphones|
|141|hedgehog|
|142|helicopter|
|143|helmet|
|144|hexagon|
|145|hockey puck|
|146|hockey stick|
|147|horse|
|148|hospital|
|149|hot air balloon|
|150|hot dog|
|151|hot tub|
|152|hourglass|
|153|house plant|
|154|house|
|155|hurricane|
|156|ice cream|
|157|jacket|
|158|jail|
|159|kangaroo|
|160|key|
|161|keyboard|
|162|knee|
|163|knife|
|164|ladder|
|165|lantern|
|166|laptop|
|167|leaf|
|168|leg|
|169|light bulb|
|170|lighter|
|171|lighthouse|
|172|lightning|
|173|line|
|174|lion|
|175|lipstick|
|176|lobster|
|177|lollipop|
|178|mailbox|
|179|map|
|180|marker|
|181|matches|
|182|megaphone|
|183|mermaid|
|184|microphone|
|185|microwave|
|186|monkey|
|187|moon|
|188|mosquito|
|189|motorbike|
|190|mountain|
|191|mouse|
|192|moustache|
|193|mouth|
|194|mug|
|195|mushroom|
|196|nail|
|197|necklace|
|198|nose|
|199|ocean|
|200|octagon|
|201|octopus|
|202|onion|
|203|oven|
|204|owl|
|205|paint can|
|206|paintbrush|
|207|palm tree|
|208|panda|
|209|pants|
|210|paper clip|
|211|parachute|
|212|parrot|
|213|passport|
|214|peanut|
|215|pear|
|216|peas|
|217|pencil|
|218|penguin|
|219|piano|
|220|pickup truck|
|221|picture frame|
|222|pig|
|223|pillow|
|224|pineapple|
|225|pizza|
|226|pliers|
|227|police car|
|228|pond|
|229|pool|
|230|popsicle|
|231|postcard|
|232|potato|
|233|power outlet|
|234|purse|
|235|rabbit|
|236|raccoon|
|237|radio|
|238|rain|
|239|rainbow|
|240|rake|
|241|remote control|
|242|rhinoceros|
|243|rifle|
|244|river|
|245|roller coaster|
|246|rollerskates|
|247|sailboat|
|248|sandwich|
|249|saw|
|250|saxophone|
|251|school bus|
|252|scissors|
|253|scorpion|
|254|screwdriver|
|255|sea turtle|
|256|see saw|
|257|shark|
|258|sheep|
|259|shoe|
|260|shorts|
|261|shovel|
|262|sink|
|263|skateboard|
|264|skull|
|265|skyscraper|
|266|sleeping bag|
|267|smiley face|
|268|snail|
|269|snake|
|270|snorkel|
|271|snowflake|
|272|snowman|
|273|soccer ball|
|274|sock|
|275|speedboat|
|276|spider|
|277|spoon|
|278|spreadsheet|
|279|square|
|280|squiggle|
|281|squirrel|
|282|stairs|
|283|star|
|284|steak|
|285|stereo|
|286|stethoscope|
|287|stitches|
|288|stop sign|
|289|stove|
|290|strawberry|
|291|streetlight|
|292|string bean|
|293|submarine|
|294|suitcase|
|295|sun|
|296|swan|
|297|sweater|
|298|swing set|
|299|sword|
|300|syringe|
|301|t-shirt|
|302|table|
|303|teapot|
|304|teddy-bear|
|305|telephone|
|306|television|
|307|tennis racquet|
|308|tent|
|309|The Eiffel Tower|
|310|The Great Wall of China|
|311|The Mona Lisa|
|312|tiger|
|313|toaster|
|314|toe|
|315|toilet|
|316|tooth|
|317|toothbrush|
|318|toothpaste|
|319|tornado|
|320|tractor|
|321|traffic light|
|322|train|
|323|tree|
|324|triangle|
|325|trombone|
|326|truck|
|327|trumpet|
|328|umbrella|
|329|underwear|
|330|van|
|331|vase|
|332|violin|
|333|washing machine|
|334|watermelon|
|335|waterslide|
|336|whale|
|337|wheel|
|338|windmill|
|339|wine bottle|
|340|wine glass|
|341|wristwatch|
|342|yoga|
|343|zebra|
|344|zigzag|
</details>
#### `sketch_rnn` and `sketch_rnn_full`
- `word`: Category the player was prompted to draw.
- `drawing`: An array of strokes. Strokes are represented as 3-tuples consisting of x-offset, y-offset, and a binary variable which is 1 if the pen is lifted between this position and the next, and 0 otherwise.
<details>
<summary>
Click here to see the code for visualizing drawings in Jupyter Notebook or Google Colab:
</summary>
```python
import numpy as np
import svgwrite # pip install svgwrite
from IPython.display import SVG, display
def draw_strokes(drawing, factor=0.045):
    """Displays vector drawing as SVG.

    Args:
        drawing: a list of strokes represented as 3-tuples
        factor: scaling factor. The smaller the scaling factor, the bigger the SVG picture and vice versa.
    """
    def get_bounds(data, factor):
        """Return bounds of data."""
        min_x, max_x = 0, 0
        min_y, max_y = 0, 0
        abs_x, abs_y = 0, 0
        for i in range(len(data)):
            x = float(data[i, 0]) / factor
            y = float(data[i, 1]) / factor
            abs_x += x
            abs_y += y
            min_x = min(min_x, abs_x)
            min_y = min(min_y, abs_y)
            max_x = max(max_x, abs_x)
            max_y = max(max_y, abs_y)
        return (min_x, max_x, min_y, max_y)

    data = np.array(drawing)
    min_x, max_x, min_y, max_y = get_bounds(data, factor)
    dims = (50 + max_x - min_x, 50 + max_y - min_y)
    dwg = svgwrite.Drawing(size=dims)
    dwg.add(dwg.rect(insert=(0, 0), size=dims, fill="white"))
    lift_pen = 1
    abs_x = 25 - min_x
    abs_y = 25 - min_y
    p = "M%s,%s " % (abs_x, abs_y)
    command = "m"
    for i in range(len(data)):
        if lift_pen == 1:
            command = "m"
        elif command != "l":
            command = "l"
        else:
            command = ""
        x = float(data[i, 0]) / factor
        y = float(data[i, 1]) / factor
        lift_pen = data[i, 2]
        p += command + str(x) + "," + str(y) + " "
    the_color = "black"
    stroke_width = 1
    dwg.add(dwg.path(p).stroke(the_color, stroke_width).fill("none"))
    display(SVG(dwg.tostring()))
```
</details>
> **Note**: Sketch-RNN takes as input strokes represented as 5-tuples, with drawings padded to a common maximum length and prefixed by the special start token `[0, 0, 1, 0, 0]`. The 5-tuple representation consists of x-offset, y-offset, and p_1, p_2, p_3, a binary one-hot vector of 3 possible pen states: pen down, pen up, end of sketch. More precisely, the first two elements are the offset distance in the x and y directions of the pen from the previous point. The last 3 elements represent a binary one-hot vector of 3 possible states. The first pen state, p1, indicates that the pen is currently touching the paper, and that a line will be drawn connecting the next point with the current point. The second pen state, p2, indicates that the pen will be lifted from the paper after the current point, and that no line will be drawn next. The final pen state, p3, indicates that the drawing has ended, and subsequent points, including the current point, will not be rendered.
><details>
> <summary>
> Click here to see the code for converting drawings to Sketch-RNN input format:
> </summary>
>
> ```python
> def to_sketch_rnn_format(drawing, max_len):
>     """Converts a drawing to Sketch-RNN input format.
>
>     Args:
>         drawing: a list of strokes represented as 3-tuples
>         max_len: maximum common length of all drawings
>
>     Returns:
>         NumPy array
>     """
>     drawing = np.array(drawing)
>     result = np.zeros((max_len, 5), dtype=float)
>     l = len(drawing)
>     assert l <= max_len
>     result[0:l, 0:2] = drawing[:, 0:2]
>     result[0:l, 3] = drawing[:, 2]
>     result[0:l, 2] = 1 - result[0:l, 3]
>     result[l:, 4] = 1
>     # Prepend the special start token
>     result = np.vstack([[0, 0, 1, 0, 0], result])
>     return result
> ```
>
></details>
### Data Splits
In the configurations `raw`, `preprocessed_simplified_drawings` and `preprocessed_bitmaps` (default configuration), all the data is contained in the training set, which has 50426266 examples.
`sketch_rnn` and `sketch_rnn_full` have the data split into training, validation and test splits. In the `sketch_rnn` configuration, 75K samples (70K Training, 2.5K Validation, 2.5K Test) have been randomly selected from each category. Therefore, the training set contains 24150000 examples, the validation set 862500 examples and the test set 862500 examples. The `sketch_rnn_full` configuration has the full (training) data for each category, which leads to the training set having 43988874 examples, while the validation and test sets again have 862500 examples each.
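The split totals quoted above follow directly from the 345 categories; a quick sanity check of the `sketch_rnn` numbers (the per-category counts are taken from this card):

```python
# Sanity-checking the sketch_rnn split sizes quoted above:
# 345 categories x (70K train / 2.5K validation / 2.5K test) per category.
num_categories = 345
per_category = {"train": 70_000, "validation": 2_500, "test": 2_500}
totals = {split: num_categories * n for split, n in per_category.items()}
print(totals)  # {'train': 24150000, 'validation': 862500, 'test': 862500}
```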
## Dataset Creation
### Curation Rationale
From the GitHub repository:
> The Quick Draw Dataset is a collection of 50 million drawings across [345 categories](categories.txt), contributed by players of the game [Quick, Draw!](https://quickdraw.withgoogle.com). The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. You can browse the recognized drawings on [quickdraw.withgoogle.com/data](https://quickdraw.withgoogle.com/data).
>
> We're sharing them here for developers, researchers, and artists to explore, study, and learn from.
### Source Data
#### Initial Data Collection and Normalization
This dataset contains vector drawings obtained from [Quick, Draw!](https://quickdraw.withgoogle.com/), an online game where the players are asked to draw objects belonging to a particular object class in less than 20 seconds.
#### Who are the source language producers?
The participants in the [Quick, Draw!](https://quickdraw.withgoogle.com/) game.
### Annotations
#### Annotation process
The annotations are machine-generated and match the category the player was prompted to draw.
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
Some sketches are known to be problematic (see https://github.com/googlecreativelab/quickdraw-dataset/issues/74 and https://github.com/googlecreativelab/quickdraw-dataset/issues/18).
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim and Nick Fox-Gieg.
### Licensing Information
The data is made available by Google, Inc. under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```bibtex
@article{DBLP:journals/corr/HaE17,
author = {David Ha and
Douglas Eck},
title = {A Neural Representation of Sketch Drawings},
journal = {CoRR},
volume = {abs/1704.03477},
year = {2017},
url = {http://arxiv.org/abs/1704.03477},
archivePrefix = {arXiv},
eprint = {1704.03477},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/HaE17},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 67,305 | [
[
-0.03790283203125,
-0.031585693359375,
0.020751953125,
0.006519317626953125,
-0.02337646484375,
0.0124664306640625,
-0.0007839202880859375,
-0.0266876220703125,
0.0594482421875,
0.029998779296875,
-0.053070068359375,
-0.0623779296875,
-0.0445556640625,
0.011... |
ITESM/embedded_faqs_medicare | 2022-06-14T22:06:28.000Z | [
"region:us"
] | ITESM | null | null | 0 | 51 | 2022-06-14T22:00:33 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
tarteel-ai/quranqa | 2022-07-27T02:28:31.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:cc-by-nd-4.0",
"qu... | tarteel-ai | The absence of publicly available reusable test collections for Arabic question answering on the Holy Qur’an has impeded the possibility of fairly comparing the performance of systems in that domain. In this article, we introduce AyaTEC, a reusable test collection for verse-based question answering on the Holy Qur’an, which serves as a common experimental testbed for this task. AyaTEC includes 207 questions (with their corresponding 1,762 answers) covering 11 topic categories of the Holy Qur’an that target the information needs of both curious and skeptical users. To the best of our effort, the answers to the questions (each represented as a sequence of verses) in AyaTEC were exhaustive—that is, all qur’anic verses that directly answered the questions were exhaustively extracted and annotated. To facilitate the use of AyaTEC in evaluating the systems designed for that task, we propose several evaluation measures to support the different types of questions and the nature of verse-based answers while integrating the concept of partial matching of answers in the evaluation. | @article{malhas2020ayatec,
author = {Malhas, Rana and Elsayed, Tamer},
title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an},
year = {2020},
issue_date = {November 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {19},
number = {6},
issn = {2375-4699},
url = {https://doi.org/10.1145/3400396},
doi = {10.1145/3400396},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
month = {oct},
articleno = {78},
numpages = {21},
keywords = {evaluation, Classical Arabic}
} | 6 | 51 | 2022-07-26T20:05:10 | ---
annotations_creators:
- expert-generated
language:
- ar
language_creators:
- expert-generated
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
pretty_name: Qur'anic Reading Comprehension Dataset
size_categories:
- n<1K
- 1K<n<10K
source_datasets:
- original
tags:
- quran
- qa
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/quran-qa-2022/home
- **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/
- **Paper:** https://dl.acm.org/doi/10.1145/3400396
- **Leaderboard:**
- **Point of Contact:** @piraka9011
### Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are
coupled with their extracted answers to constitute 1,337 question-passage-answer triplets.
### Supported Tasks and Leaderboards
This task is evaluated as a ranking task.
To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully
match one of the gold answers but partially matches it, we use the partial Reciprocal Rank (pRR) measure.
It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.
pRR is the official evaluation measure of this shared task.
We will also report Exact Match (EM) and F1@1, two evaluation metrics applied only to the top predicted answer.
The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the
gold answers.
The F1@1 metric, by contrast, measures the token overlap between the top predicted answer and the best-matching gold answer.
To get an overall evaluation score, each of the above measures is averaged over all questions.
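As an illustration (not the official scorer), the metrics above can be sketched as follows; here token-level F1 stands in for the partial-matching function, which is an assumption — the exact matching used by the task organizers may differ:

```python
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_toks, gold_toks = pred.split(), gold.split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def exact_match(pred: str, golds: list) -> int:
    """1 if the top prediction exactly matches any gold answer, else 0."""
    return int(any(pred == g for g in golds))

def prr(ranked_preds: list, golds: list) -> float:
    """Partial Reciprocal Rank (one interpretation): the partial-match score
    of the first (highest-ranked) answer with a non-zero match, discounted
    by its rank."""
    for rank, pred in enumerate(ranked_preds, start=1):
        score = max(token_f1(pred, g) for g in golds)
        if score > 0:
            return score / rank
    return 0.0
```

Averaging each of these measures over all questions then yields the overall scores.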
### Languages
Qur'anic Arabic
## Dataset Structure
### Data Instances
To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain
one or more answers to that question, as shown below:
```json
{
"pq_id": "38:41-44_105",
"passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.",
"surah": 38,
"verses": "41-44",
"question": "من هو النبي المعروف بالصبر؟",
"answers": [
{
"text": "أيوب",
"start_char": 12
}
]
}
```
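The `start_char` field indexes into the passage, so the answer text can be recovered by slicing. A minimal sketch on a synthetic English sample (so the offset is easy to verify by eye — the hypothetical passage below is not from the dataset; the real data applies the same slicing to the Arabic passage):

```python
sample = {
    "passage": "The prophet known for patience is Ayyub, as the verses relate.",
    "question": "Which prophet is known for patience?",
    "answers": [{"text": "Ayyub", "start_char": 34}],
}

for answer in sample["answers"]:
    start = answer["start_char"]
    # Slice the passage from start_char for the length of the answer text.
    span = sample["passage"][start : start + len(answer["text"])]
    assert span == answer["text"]
```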
Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different
question.
Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a
different Qur’anic passage.
The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the
Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2.
### Data Fields
* `pq_id`: Sample ID
* `passage`: Context text
* `surah`: Surah number
* `verses`: Verse range
* `question`: Question text
* `answers`: List of answers and their start character
### Data Splits
| **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** |
|-------------|:-----:|:-----------------------------:|:---------------------------------------:|
| Training | 65% | 710 | 861 |
| Development | 10% | 109 | 128 |
| Test | 25% | 274 | 348 |
| All | 100% | 1,093 | 1,337 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode
For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/
### Citation Information
```
@article{malhas2020ayatec,
author = {Malhas, Rana and Elsayed, Tamer},
title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an},
year = {2020},
issue_date = {November 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {19},
number = {6},
issn = {2375-4699},
url = {https://doi.org/10.1145/3400396},
doi = {10.1145/3400396},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
month = {oct},
articleno = {78},
numpages = {21},
keywords = {evaluation, Classical Arabic}
}
```
### Contributions
Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
| 6,699 | [
[
-0.029327392578125,
-0.0478515625,
0.0090789794921875,
0.00902557373046875,
-0.0295867919921875,
0.0022907257080078125,
0.0019502639770507812,
-0.00887298583984375,
0.002864837646484375,
0.04205322265625,
-0.0479736328125,
-0.05535888671875,
-0.0227813720703125,... |
PlanTL-GOB-ES/WikiCAT_esv2 | 2023-07-27T09:13:16.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:automatically-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | PlanTL-GOB-ES | WikiCAT: Text Classification Spanish dataset from the Viquipedia | 0 | 51 | 2022-11-18T10:18:53 | ---
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_esv2
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_es: Spanish Text Classification dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
**Repository**
### Dataset Summary
WikiCAT_es is a Spanish corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 8401 articles from the Spanish Wikipedia classified under 12 different categories.
This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
ES- Spanish
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We used a simple model with the article text and associated labels, without further metadata.
#### Example:
<pre>
{'sentence': 'La economía de Reunión se ha basado tradicionalmente en la agricultura. La caña de azúcar ha sido el cultivo principal durante más de un siglo, y en algunos años representa el 85% de las exportaciones. El gobierno ha estado impulsando el desarrollo de una industria turística para aliviar el alto desempleo, que representa más del 40% de la fuerza laboral.(...) El PIB total de la isla fue de 18.800 millones de dólares EE.UU. en 2007., 'label': 'Economía'}
</pre>
#### Labels
'Religión', 'Entretenimiento', 'Música', 'Ciencia_y_Tecnología', 'Política', 'Economía', 'Matemáticas', 'Humanidades', 'Deporte', 'Derecho', 'Historia', 'Filosofía'
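For convenience, a hypothetical mapping between these 12 label names and integer class ids (the dataset ships string labels; the index order below is an assumption, not part of the dataset):

```python
# Hypothetical id scheme for the 12 WikiCAT_es categories listed above.
LABELS = [
    "Religión", "Entretenimiento", "Música", "Ciencia_y_Tecnología",
    "Política", "Economía", "Matemáticas", "Humanidades",
    "Deporte", "Derecho", "Historia", "Filosofía",
]
label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for name, i in label2id.items()}
```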
### Data Splits
* hfeval_esv5.json: 1681 label-document pairs
* hftrain_esv5.json: 6716 label-document pairs
## Dataset Creation
### Methodology
The "Category" pages represent the topics.
For each topic, we extract the pages associated with that first level of the hierarchy, and use the summary as the representative text.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are thematic categories in the different Wikipedias
#### Who are the source language producers?
### Annotations
#### Annotation process
Automatic annotation
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Spanish.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing Information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| 3,656 | [
[
-0.0204620361328125,
-0.0380859375,
0.01328277587890625,
0.0287017822265625,
-0.009979248046875,
0.0081634521484375,
-0.0268707275390625,
-0.0279693603515625,
0.039520263671875,
0.033782958984375,
-0.043701171875,
-0.07208251953125,
-0.050750732421875,
0.028... | |
gokuls/wiki_book_corpus_complete_processed_bert_dataset | 2023-02-25T19:22:14.000Z | [
"region:us"
] | gokuls | null | null | 0 | 51 | 2023-02-25T07:22:50 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 22201610400.0
num_examples: 6167114
download_size: 2763194793
dataset_size: 22201610400.0
---
# Dataset Card for "wiki_book_corpus_complete_processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 552 | [
[
-0.0401611328125,
-0.021575927734375,
0.01439666748046875,
0.01105499267578125,
-0.014373779296875,
-0.004039764404296875,
-0.01314544677734375,
-0.0173187255859375,
0.052947998046875,
0.0350341796875,
-0.055633544921875,
-0.0572509765625,
-0.033660888671875,
... |
Muennighoff/python-bugs | 2023-03-22T07:46:03.000Z | [
"region:us"
] | Muennighoff | null | null | 4 | 51 | 2023-03-22T07:45:19 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
Heerak/ko_en_parallel_dataset | 2023-04-20T08:51:52.000Z | [
"region:us"
] | Heerak | null | null | 0 | 51 | 2023-04-20T08:27:44 | ---
dataset_info:
features:
- name: ko
dtype: string
- name: en
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4112684317
num_examples: 11800415
- name: validation
num_bytes: 20767480
num_examples: 59299
- name: test
num_bytes: 419935
num_examples: 1982
download_size: 2691575595
dataset_size: 4133871732
---
# Dataset Card for "ko_en_parallel_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 580 | [
[
-0.0511474609375,
-0.0158843994140625,
0.029296875,
0.03350830078125,
-0.0203857421875,
0.005352020263671875,
0.0120391845703125,
-0.00598907470703125,
0.06103515625,
0.035736083984375,
-0.0546875,
-0.05950927734375,
-0.04693603515625,
-0.005645751953125,
... |
omniquad/BioNLP11ID-ggp-IOB | 2023-05-16T11:52:23.000Z | [
"region:us"
] | omniquad | The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ | @article{Krallinger2015TheCC,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia},
journal={Journal of Cheminformatics},
year={2015},
volume={7},
pages={S2 - S2}
} | 1 | 51 | 2023-05-16T10:55:32 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
mrm8488/databricks-dolly-15k-curated-es | 2023-06-13T10:42:43.000Z | [
"region:us"
] | mrm8488 | null | null | 0 | 51 | 2023-06-13T10:42:39 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: instruction_original_en
dtype: string
- name: context_original_en
dtype: string
- name: response_original_en
dtype: string
- name: id
dtype: int64
splits:
- name: es
num_bytes: 25902709
num_examples: 15015
download_size: 16490137
dataset_size: 25902709
---
# Dataset Card for "databricks-dolly-15k-curated-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 670 | [
[
-0.029052734375,
-0.0251007080078125,
0.00702667236328125,
0.032928466796875,
-0.0240325927734375,
0.0147705078125,
0.034942626953125,
-0.004180908203125,
0.0648193359375,
0.038970947265625,
-0.08056640625,
-0.055450439453125,
-0.037200927734375,
0.000419616... |
RIPS-Goog-23/IIT-CDIP | 2023-07-04T15:36:35.000Z | [
"region:us"
] | RIPS-Goog-23 | null | null | 2 | 51 | 2023-07-04T08:47:06 | ---
dataset_info:
features:
- name: tar_file_letters
dtype: string
- name: filename
dtype: string
- name: text
dtype: string
- name: bboxes
dtype: string
- name: img
dtype: string
- name: img_width
dtype: int64
- name: img_height
dtype: int64
splits:
- name: ra9
num_bytes: 91309162
num_examples: 2762
download_size: 81476979
dataset_size: 91309162
---
# Dataset Card for "IIT-CDIP-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 577 | [
[
-0.03411865234375,
-0.00493621826171875,
0.0045166015625,
0.039276123046875,
-0.0189056396484375,
0.01020050048828125,
0.0228271484375,
-0.014892578125,
0.034759521484375,
0.0242767333984375,
-0.043914794921875,
-0.034759521484375,
-0.056610107421875,
-0.029... |
sngsfydy/aptos_test | 2023-07-19T19:19:46.000Z | [
"region:us"
] | sngsfydy | null | null | 0 | 51 | 2023-07-19T19:18:30 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 1802932566.6624794
num_examples: 733
download_size: 1800938316
dataset_size: 1802932566.6624794
---
# Dataset Card for "aptos_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 536 | [
[
-0.0316162109375,
0.0015401840209960938,
0.007198333740234375,
0.01256561279296875,
-0.03448486328125,
-0.007415771484375,
0.0318603515625,
-0.0160675048828125,
0.05523681640625,
0.033416748046875,
-0.03863525390625,
-0.03765869140625,
-0.05303955078125,
-0.... |
theoldmandthesea/17k_business_book | 2023-08-20T08:14:02.000Z | [
"region:us"
] | theoldmandthesea | null | null | 0 | 51 | 2023-08-20T01:03:38 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,732 | [
[
-0.038238525390625,
-0.0298309326171875,
-0.0035877227783203125,
0.027099609375,
-0.0323486328125,
0.0037860870361328125,
-0.0172271728515625,
-0.0202178955078125,
0.049041748046875,
0.04046630859375,
-0.06353759765625,
-0.08062744140625,
-0.052947998046875,
... |
renumics/emodb-enriched | 2023-09-23T08:54:14.000Z | [
"size_categories:n<1K",
"region:us"
] | renumics | null | null | 0 | 51 | 2023-08-25T12:59:02 | ---
size_categories:
- n<1K
dataset_info:
features:
- name: age
dtype: float32
- name: gender
dtype:
class_label:
names:
'0': female
'1': male
- name: emotion
dtype:
class_label:
names:
'0': anger
'1': boredom
'2': disgust
'3': fear
'4': happiness
'5': neutral
'6': sadness
- name: audio
dtype: audio
- name: m1_gender_prediction
dtype:
class_label:
names:
'0': female
'1': male
- name: m2_gender_prediction
dtype:
class_label:
names:
'0': female
'1': male
- name: m1_embedding
sequence: float32
length: 1028
- name: m2_embedding
sequence: float32
length: 1028
- name: emotion_embedding
sequence: float32
length: 1024
- name: m1_correct
dtype:
class_label:
names:
'0': wrong
'1': correct
- name: m2_correct
dtype:
class_label:
names:
'0': wrong
'1': correct
splits:
- name: train
num_bytes: 54231717.0
num_examples: 535
download_size: 56965550
dataset_size: 54231717.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
## Dataset Description
About Dataset
Emo-DB Database
The EMODB database is the freely available German emotional speech database. It was created by the Institute of Communication Science, Technical University of Berlin, Germany. Ten professional speakers (five males and five females) participated in data recording. The database contains a total of 535 utterances. The EMODB database comprises seven emotions: 1) anger; 2) boredom; 3) anxiety; 4) happiness; 5) sadness; 6) disgust; and 7) neutral. The data was recorded at a 48-kHz sampling rate and then down-sampled to 16 kHz.
Additional Information
Original URL: https://www.tu.berlin/en/kw/research/projects/emotional-speech
Every utterance is named according to the same scheme:
Positions 1-2: number of speaker
Positions 3-5: code for text
Position 6: emotion (sorry, letter stands for german emotion word)
Position 7: if there are more than two versions these are numbered a, b, c ....
Example: 03a01Fa.wav is the audio file from Speaker 03 speaking text a01 with the emotion "Freude" (Happiness).
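The position-based naming scheme above can be decoded with a short sketch. Note that the German emotion-letter mapping below (W/L/E/A/F/T/N) is the standard Emo-DB convention, but it is an assumption here, since the card itself only spells out the "F" (Freude) example:

```python
# Sketch: parse an Emo-DB filename like "03a01Fa.wav" into its fields.
# The emotion-letter mapping is the usual Emo-DB convention (an assumption,
# not stated explicitly in this card).
EMOTION_CODES = {
    "W": "anger",      # Wut
    "L": "boredom",    # Langeweile
    "E": "disgust",    # Ekel
    "A": "fear",       # Angst
    "F": "happiness",  # Freude
    "T": "sadness",    # Trauer
    "N": "neutral",    # Neutral
}

def parse_emodb_filename(name: str) -> dict:
    stem = name.removesuffix(".wav")
    return {
        "speaker": stem[0:2],               # positions 1-2: speaker number
        "text": stem[2:5],                  # positions 3-5: text code
        "emotion": EMOTION_CODES[stem[5]],  # position 6: emotion letter
        "version": stem[6:] or None,        # position 7+: version (a, b, ...)
    }

parse_emodb_filename("03a01Fa.wav")
# -> {'speaker': '03', 'text': 'a01', 'emotion': 'happiness', 'version': 'a'}
```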
Information about the speakers
03 - male, 31 years old
08 - female, 34 years
09 - female, 21 years
10 - male, 32 years
11 - male, 26 years
12 - male, 30 years
13 - female, 32 years
14 - female, 35 years
15 - male, 25 years
16 - female, 31 years
| 2,712 | [
[
-0.051055908203125,
-0.06304931640625,
0.03271484375,
0.0286407470703125,
-0.02154541015625,
-0.0135955810546875,
-0.01097869873046875,
-0.02978515625,
0.036956787109375,
0.0163116455078125,
-0.06353759765625,
-0.07073974609375,
-0.0296630859375,
0.034210205... |
euclaise/writingprompts | 2023-09-21T19:12:16.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:1805.04833",
"region:us"
] | euclaise | null | null | 0 | 51 | 2023-09-21T18:53:34 | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: prompt
dtype: string
- name: story
dtype: string
splits:
- name: train
num_bytes: 858816216
num_examples: 272600
- name: test
num_bytes: 47681276
num_examples: 15138
- name: validation
num_bytes: 48904993
num_examples: 15620
download_size: 605049830
dataset_size: 955402485
---
# Dataset Card for "writingprompts"
WritingPrompts dataset, as used in [Hierarchical Neural Story Generation](https://arxiv.org/pdf/1805.04833.pdf). Parsed from [the archive](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | 837 | [
[
-0.016143798828125,
-0.023284912109375,
0.00800323486328125,
0.0228271484375,
-0.0144500732421875,
-0.0166778564453125,
0.0103912353515625,
-0.00360870361328125,
0.0213165283203125,
0.036895751953125,
-0.0816650390625,
-0.062255859375,
-0.028228759765625,
0.... |
juraj-juraj/doc_gen | 2023-09-29T09:10:24.000Z | [
"task_categories:text-generation",
"language:en",
"license:mit",
"region:us"
] | juraj-juraj | null | null | 0 | 51 | 2023-09-28T19:51:32 | ---
language:
- en
license: mit
task_categories:
- text-generation
pretty_name: py_code_doc
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: docstring
dtype: string
- name: function
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 525428666
num_examples: 502378
- name: validation
num_bytes: 624971
num_examples: 459
- name: test
num_bytes: 673898
num_examples: 666
download_size: 198280913
dataset_size: 526727535
---
# Code documentation dataset
This dataset aims to leverage LMs to automatically generate documentation for undocumented Python code. The dataset consists of pairs of code and its documentation.
The content of the dataset is created from the CodeSearchNet dataset. | 916 | [
[
-0.01552581787109375,
-0.01560211181640625,
-0.0018682479858398438,
0.00426483154296875,
-0.0001589059829711914,
-0.005443572998046875,
-0.0140533447265625,
0.002719879150390625,
0.0135498046875,
0.061309814453125,
-0.04058837890625,
-0.03948974609375,
0.0078353... |
dmrau/trec_dl19-qrels | 2023-10-09T13:07:40.000Z | [
"region:us"
] | dmrau | null | null | 0 | 51 | 2023-10-06T12:41:51 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: string
splits:
- name: test
num_bytes: 242652
num_examples: 9260
download_size: 0
dataset_size: 242652
---
# Dataset Card for "trec_dl19-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 508 | [
[
-0.041534423828125,
-0.0011577606201171875,
0.015167236328125,
0.0130157470703125,
-0.019927978515625,
0.030609130859375,
0.032257080078125,
-0.00885772705078125,
0.04522705078125,
0.031494140625,
-0.0728759765625,
-0.06048583984375,
-0.0308074951171875,
-0.... |
kaist-ai/Feedback-Collection | 2023-10-14T14:53:22.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"arxiv:2310.08491",
"region:us"
] | kaist-ai | null | null | 16 | 51 | 2023-10-13T01:17:17 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: "new_feedback_collection.json"
---
## Dataset Description
- **Homepage:https://github.com/kaistAI/Prometheus**
- **Repository:https://github.com/kaistAI/Prometheus**
- **Paper:https://arxiv.org/abs/2310.08491**
- **Point of Contact:seungone@kaist.ac.kr**
# Dataset Card
### Dataset Summary
The Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.

Recently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.
In our paper, we found that by (1) fine-tuning on feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation capabilities into open-source LMs.
The Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, and 100K responses & feedback (20K for each score in the range 1-5).
Experimental results show that Prometheus (an LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting.
### Languages
English
## Dataset Structure
* instruction: The input that is given to the evaluator LM. It includes the instruction & response to evaluate, the reference answer, the score rubric.
* output: The output that the evaluator LM should generate. It includes the feedback and score decision divided by a phrase ```[RESULT]```.
* orig```_```instruction: The instruction to be evaluated. Note that this differs from the instruction field above, which includes all the components.
* orig```_```response: The response to be evaluated.
* orig```_```reference```_```answer: A reference answer to the orig```_```instruction.
* orig```_```criteria: The score criteria used to evaluate the orig```_```response.
* orig```_```score1```_```description: A description of when to give a score of 1 to the orig```_```response.
* orig```_```score2```_```description: A description of when to give a score of 2 to the orig```_```response.
* orig```_```score3```_```description: A description of when to give a score of 3 to the orig```_```response.
* orig```_```score4```_```description: A description of when to give a score of 4 to the orig```_```response.
* orig```_```score5```_```description: A description of when to give a score of 5 to the orig```_```response.
* orig```_```feedback: A feedback that critiques the orig```_```response.
* orig```_```score: An integer between 1 and 5 given to the orig```_```response.
In our paper, we trained the input using the following prompt format (already processed in the 'instruction'):
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
The following prompt format (already processed in the 'output') was used to train the evaluator LM:
```
{orig_feedback}
[RESULT] {orig_score}
```
Then during evaluation, we parsed the prediction after the phrase ```[RESULT]```.
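That parsing step can be sketched as follows (the helper name and the fallback behavior for missing markers are assumptions, not from the paper):

```python
def parse_feedback_output(output: str):
    """Split an evaluator completion into (feedback, score) at '[RESULT]'."""
    feedback, sep, score_part = output.partition("[RESULT]")
    if not sep:
        # No [RESULT] marker found: return the raw text with no score.
        return output.strip(), None
    try:
        score = int(score_part.strip())
    except ValueError:
        score = None  # marker present but score is not a clean integer
    return feedback.strip(), score

parse_feedback_output("The response is clear and complete. [RESULT] 5")
# -> ('The response is clear and complete.', 5)
```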
### Data Splits
| name | train |
|-------------------|------:|
|Feedback-Collection|99,952|
### Citation Information
If you find the following model helpful, please consider citing our paper!
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 4,927 | [
[
-0.0307769775390625,
-0.063720703125,
0.03643798828125,
0.0033283233642578125,
-0.00801849365234375,
-0.02935791015625,
-0.037994384765625,
-0.0361328125,
0.006626129150390625,
0.030487060546875,
-0.031341552734375,
-0.047393798828125,
-0.032806396484375,
0.... |
Lajavaness/STS14-fr | 2023-10-19T23:13:23.000Z | [
"region:us"
] | Lajavaness | null | null | 1 | 51 | 2023-10-19T23:13:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mugithi/ubuntu_question_answer_jsonl | 2023-10-21T19:29:54.000Z | [
"region:us"
] | mugithi | null | null | 1 | 51 | 2023-10-21T19:23:03 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2073677
num_examples: 12100
- name: test
num_bytes: 882250
num_examples: 5186
download_size: 0
dataset_size: 2955927
---
# Dataset Card for "ubuntu_question_answer_jsonl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 464 | [
[
-0.04705810546875,
-0.042816162109375,
0.0238494873046875,
0.00640106201171875,
-0.019256591796875,
0.0301055908203125,
0.010986328125,
0.00485992431640625,
0.04644775390625,
0.03759765625,
-0.045013427734375,
-0.046539306640625,
-0.0240020751953125,
-0.0032... |
finiteautomata/prueba-arg | 2023-10-27T04:34:43.000Z | [
"region:us"
] | finiteautomata | null | null | 0 | 51 | 2023-10-27T02:35:34 | ---
dataset_info:
features:
- name: tweet_id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: user
dtype: string
- name: body
dtype: string
- name: created_at
dtype: string
- name: comments
list:
- name: created_at
dtype: string
- name: text
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 909617906
num_examples: 73423
download_size: 0
dataset_size: 909617906
---
# Dataset Card for "prueba-arg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 708 | [
[
-0.04345703125,
-0.036895751953125,
-0.004138946533203125,
0.0203857421875,
-0.033660888671875,
-0.00730133056640625,
0.004878997802734375,
-0.00826263427734375,
0.07061767578125,
0.0411376953125,
-0.049774169921875,
-0.05902099609375,
-0.04534912109375,
-0.... |
hippocrates/CitationGPT_train | 2023-10-30T21:01:22.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 51 | 2023-10-30T20:50:52 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 443729530
num_examples: 119360
- name: valid
num_bytes: 57232474
num_examples: 15480
- name: test
num_bytes: 51863078
num_examples: 14000
download_size: 208907031
dataset_size: 552825082
---
# Dataset Card for "CitationGPT_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 634 | [
[
-0.035308837890625,
-0.004302978515625,
0.0247650146484375,
0.0302276611328125,
-0.01229095458984375,
-0.0096588134765625,
0.017608642578125,
-0.0029201507568359375,
0.0411376953125,
0.00719451904296875,
-0.04638671875,
-0.0302581787109375,
-0.0439453125,
-0... |
merve/poetry | 2022-10-25T09:50:55.000Z | [
"region:us"
] | merve | null | null | 14 | 50 | 2022-03-02T23:29:22 | # Dataset Card for poetry
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** poetryfoundation.com
- **Repository:** https://www.kaggle.com/ishnoor/poetry-analysis-with-machine-learning
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
It contains poems on the subjects Love, Nature, and Mythology & Folklore, belonging to two periods, namely Renaissance and Modern.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
The dataset has 5 columns:
- Content
- Author
- Poem name
- Age
- Type
### Data Splits
Only training set
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: poetry
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
--- | 2,810 | [
[
-0.0179443359375,
-0.0258941650390625,
0.00799560546875,
0.0173187255859375,
-0.0292510986328125,
-0.004955291748046875,
-0.0304718017578125,
-0.03814697265625,
0.037261962890625,
0.04608154296875,
-0.0623779296875,
-0.07586669921875,
-0.0570068359375,
0.007... |
philschmid/germeval18 | 2022-02-28T17:14:55.000Z | [
"region:us"
] | philschmid | null | null | 3 | 50 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.014984130859375,
0.05718994140625,
0.0287933349609375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.005077362060546875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.0149993896484375,
-0.0604248046875,
0.037902... |
SetFit/amazon_reviews_multi_fr | 2022-03-23T15:45:44.000Z | [
"region:us"
] | SetFit | null | null | 0 | 50 | 2022-03-13T02:48:20 | # amazon reviews multi french
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the French-language version, reduced to the 3 columns relevant to the SetFit task (plus a 4th column, "label_text"). | 308 | [
[
-0.057281494140625,
-0.031494140625,
0.0006017684936523438,
0.051971435546875,
-0.0110015869140625,
0.00572967529296875,
0.0008726119995117188,
-0.032257080078125,
0.046630859375,
0.0782470703125,
-0.06744384765625,
-0.0233612060546875,
-0.01776123046875,
0.... |
bigscience-data/roots_zh-tw_wikipedia | 2022-12-12T12:09:12.000Z | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | 3 | 50 | 2022-05-18T09:20:00 | ---
language: zh
language_bcp47:
- zh-TW
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh-tw_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
| 3,662 | [
[
-0.044708251953125,
-0.0384521484375,
0.0236358642578125,
0.01169586181640625,
-0.01360321044921875,
-0.0050048828125,
-0.01490020751953125,
-0.01055908203125,
0.045013427734375,
0.0218048095703125,
-0.053558349609375,
-0.060028076171875,
-0.044921875,
0.030... |
kqsong/OASum | 2023-07-03T21:02:23.000Z | [
"task_categories:summarization",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-sa-3.0",
"summarization",
"Wikipedia",
"arxiv:2212.09233",
"region:us"
] | kqsong | null | null | 1 | 50 | 2022-12-27T22:27:17 | ---
license: cc-by-sa-3.0
language:
- en
tags:
- summarization
- Wikipedia
size_categories:
- 1M<n<10M
task_categories:
- summarization
---
# Dataset Card for OASum Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Usage](#dataset-usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [OASum Dataset repository](https://github.com/tencent-ailab/OASum)
- **Paper:** [OASum: Large-Scale Open Domain Aspect-based Summarization](https://arxiv.org/pdf/2212.09233.pdf)
The OASum Dataset is an English-language dataset containing over 3.6M document, aspect, and summary triplets.
## Dataset Usage
You can load it directly with the Hugging Face `datasets` library.
``` python
from datasets import load_dataset
dataset = load_dataset("kqsong/OASum")
```
## Dataset Structure
### Data Instances
For each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section.
```json
{
"title": "Ker's WingHouse Bar & Grill",
"document":[
"After Clearwater, Florida chicken wing pioneering restaurant chain Hooters began rapidly expanding, Florida based, Canadian-born restaurant entrepreneur Ed Burnett saw the opportunity.",
"Burnett secured the rights to a closed restaurant (\"Knockers\") and opened \"The WingHouse\" restaurant at 7369 Ulmerton Road, Largo, Florida, a high traffic corridor.",
"He strategically selected the restaurant in between where people work (commercial real estate) and live (residential real estate), to appeal to the local lunch crowd and family dining crowd.",
"This flagship location proved to be a success soon after launching and is the model that the chain expanded on.",
"Burnett, looking to expand to additional locations, accepted a financing partner (Crawford Ker) during this time frame, to open additional locations and beyond.",
"Burnett's goal was to open 20 to 50 locations, and then sell the chain to a larger restaurant chain or investors.",
"Burnett would ultimately regret his choice of investor.","In 1992, Ker retired from the NFL and took a job selling cars at a local dealer.",
"In 1994, he invested half interest in a Largo, Florida wing restaurant called, \"Wing House\" that imitated Hooters.",
"The restaurant was always The Wing House, and the atmosphere was always toned down to make it more family friendly.",
"The restaurant did well and two additional locations were opened in the Tampa Bay area in the following three years.",
"Ker won a $1.2-million jury award from Hooters in late 2004, which had sued him for trademark violations for allegedly using their uniforms and decor.",
"After a three-week trial in which lawyers discussed hula hoops, surfboards, scrunchy socks, pantyhose, and something called \"vicarious sexual recreation\", the jury ruled that no trademark infringement existed and Hooters was penalized for their frivolous lawsuit.",
"Hooters appealed the decision, but in June, 2006, the 11th U.S. Circuit Court of Appeals in Atlanta upheld the verdict.",
"As of 2007, the company had 1,700 employees at 22 locations with revenue of nearly $60 million.",
"Ker attended, and the company participated in, the 2007 National Buffalo Wing Festival and placed first in the \"traditional x-hot sauce\" category and gained some national recognition.",
"On June 4, 2008 the company announced the launch of its national franchise program.",
"In mid-2008 the chain operated 19 locations in Florida and Texas and expected to add six franchises by the end of 2008, and 48 by 2011.",
"The initial focus was for franchises in the Southeastern US.",
"WingHouses feature several amenities that differ from other wing restaurants, including Hooters.",
"There is a full liquor bar in every store, sports memorabilia line the walls instead of NASCAR and most locations include a game room.",
"Super Bowl XLIII in Tampa, Florida attracted the rich and famous; WingHouse hosted three events to raise money for charity."
],
"aspect": "Opening",
"aspect_sents": [0,1,2,3,4,5,6,7,8,9,10],
"summary":[
"WingHouse Bar & Grill (formerly Ker\u2019s WingHouse Bar & Grill) is a restaurant chain based in Florida, created and founded by Ed Burnett, a Canadian restaurant entrepreneur.",
"After opening his first WingHouse location, Burnett sought out investors to open additional WingHouse locations.",
"Burnett accepted investor Crawford Ker (a former National Football League player) to assist financing the expansion."
]
}
```
The average token counts for the documents and summaries are provided below:
| Feature | Mean Token Count |
| ---------- | ---------------- |
| Document | 1,612 |
| Summary | 40 |
### Data Fields
- `title`: a string, containing the original Wikipedia title.
- `document`: a list of sentences, containing the original content in the Wikipedia sections except the first abstract section.
- `aspect`: a string, containing the section name and its parent section names.
- `aspect_sents`: a list of indices, representing the sentences in the `aspect` section.
- `summary`: a list of sentences, the corresponding aspect-based summary for the document.
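Given those fields, the source sentences belonging to an aspect can be recovered by indexing `document` with `aspect_sents` — a minimal sketch on a toy record (the helper name is hypothetical):

```python
def aspect_source_text(example: dict) -> str:
    """Join the sentences of `document` that belong to this example's aspect."""
    return " ".join(example["document"][i] for i in example["aspect_sents"])

example = {
    "document": ["First sentence.", "Second sentence.", "Third sentence."],
    "aspect": "Opening",
    "aspect_sents": [0, 2],
}
aspect_source_text(example)  # -> 'First sentence. Third sentence.'
```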
### Data Splits
The OASum dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the Version 1.0.0 of the dataset.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 3,523,986 |
| Validation | 111,578 |
| Test | 112,005 |
## Additional Information
### Licensing Information
The OASum Dataset version 1.0.0 is released under the [CC-BY-SA-3.0 License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
### Citation Information
```
@article{yang2022oasum,
title={Oasum: Large-scale open domain aspect-based summarization},
author={Yang, Xianjun and Song, Kaiqiang and Cho, Sangwoo and Wang, Xiaoyang and Pan, Xiaoman and Petzold, Linda and Yu, Dong},
journal={arXiv preprint arXiv:2212.09233},
year={2022}
}
``` | 6,603 | [
[
-0.037445068359375,
-0.060516357421875,
0.01904296875,
-0.0005364418029785156,
-0.00815582275390625,
-0.0111541748046875,
0.0102081298828125,
-0.037353515625,
0.041229248046875,
0.06072998046875,
-0.0261993408203125,
-0.04339599609375,
-0.01267242431640625,
... |
Norod78/microsoft-fluentui-emoji-512-whitebg | 2023-07-16T12:12:01.000Z | [
"task_categories:unconditional-image-generation",
"task_categories:text-to-image",
"size_categories:n<10K",
"language:en",
"license:mit",
"emoji",
"fluentui",
"region:us"
] | Norod78 | null | null | 3 | 50 | 2023-01-01T09:03:35 | ---
language: en
license: mit
size_categories:
- n<10K
task_categories:
- unconditional-image-generation
- text-to-image
pretty_name: Microsoft FluentUI Emoji 512x512 White Background
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 329173985.708
num_examples: 7564
download_size: 338676474
dataset_size: 329173985.708
tags:
- emoji
- fluentui
---
# Dataset Card for "microsoft-fluentui-emoji-512-whitebg"
[SVGs and their file names were converted to images and text from Microsoft's fluentui-emoji repo](https://github.com/microsoft/fluentui-emoji) | 641 | [
[
-0.0213775634765625,
-0.016510009765625,
-0.00005459785461425781,
0.0080718994140625,
-0.04449462890625,
0.016632080078125,
-0.0168914794921875,
-0.0228271484375,
0.03497314453125,
0.044189453125,
-0.06201171875,
-0.056060791015625,
-0.035064697265625,
0.017... |
Dahoas/code-review-instruct-critique-revision-python | 2023-01-08T15:22:19.000Z | [
"region:us"
] | Dahoas | null | null | 5 | 50 | 2023-01-08T15:22:14 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.014984130859375,
0.05718994140625,
0.0287933349609375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.005077362060546875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.0149993896484375,
-0.0604248046875,
0.037902... |
heegyu/news-category-dataset | 2023-02-09T08:10:48.000Z | [
"license:cc-by-4.0",
"region:us"
] | heegyu | null | null | 0 | 50 | 2023-02-09T08:08:22 | ---
license: cc-by-4.0
---
Dataset from https://www.kaggle.com/datasets/rmisra/news-category-dataset | 101 | [
[
-0.0147247314453125,
-0.028594970703125,
0.0151824951171875,
-0.0011262893676757812,
-0.0190887451171875,
0.00366973876953125,
0.0239715576171875,
0.0006618499755859375,
0.047943115234375,
0.06549072265625,
-0.07257080078125,
-0.046783447265625,
-0.0310363769531... |
intfloat/wikipedia | 2023-04-23T08:36:49.000Z | [
"size_categories:100M<n<1B",
"region:us"
] | intfloat | \
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | \
@ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | 1 | 50 | 2023-03-23T09:12:08 | ---
size_categories:
- 100M<n<1B
---
### Dataset Summary
This dataset is based on [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia).
The main difference is that we add a `Section::::` prefix to each section title to keep the section structure information.
We also use `:` to join the hierarchical section titles.
Following is an example.
```text
Alison Jane Horner (born June 1966) is a British businesswoman, and, until it was sold in 2020, was the CEO of the Asian arm of the Tesco supermarket chain.
Section::::Early life
Alison Jane Horner was born in June 1966. She earned a bachelor's degree in chemistry from the University of Manchester, and an MBA from Manchester Business School.
Section::::Career
Section::::Career:Tesco
Horner joined Tesco as a personnel manager in 1999 and was on Tesco's executive committee from 2011.
In October 2013, Horner became a founding member of The Guardian's Women in Leadership network. in 2015, she became a member of Alliance Manchester Business School's advisory board.
Horner was Tesco' chief people officer (chief human resources officer) of Tesco until May 2018, when she was promoted to be chief executive of Tesco's Asia business in Malaysia and Thailand, until it was sold in late 2020. She was set to step down in February 2021 after 22 years with Tesco.
Section::::Career:Carillion non-executive role
Horner was a non-executive director of Carillion from December 2013, chairing the remuneration committee from June 2014. As of 30 December 2016 her basic compensation was £61,000. After the company went into liquidation in January 2018, Horner was one of the non-executive directors who gave evidence to the House of Commons Business and Work and Pensions select committees on 6 February 2018. In the final report of the Parliamentary Inquiry, published on 16 May 2018, Horner was criticised by MPs; the report concluded:
"... Alison Horner presided over growing salaries and bonuses at the top of the company as its performance faltered. In her evidence to us, she sought to justify her approach by pointing to industry standards, the guidance of advisors, and conversations with shareholders. She failed to demonstrate to us any sense of challenge to the advice she was given, any concern about the views of stakeholders, or any regret at the largesse at the top of Carillion. Ms Horner continues to hold the role of Chief People Officer of Tesco, where she has responsibilities to more than half a million employees. We hope that, in that post, she will reflect on the lessons learned from Carillion and her role in its collapse."
In January 2021, the Insolvency Service said it would seek to ban eight former Carillion directors, including Horner, from holding senior boardroom positions.
Section::::References
Living people
1966 births
British businesspeople in retailing
Tesco people
Alumni of the University of Manchester
Alumni of the Manchester Business School
Carillion people
```
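The `Section::::` convention shown above can be parsed back into a section hierarchy. Below is a minimal sketch (the helper name is ours, and it assumes section titles themselves contain no literal `:`):

```python
def parse_section_heading(line):
    """Split a `Section::::A:B` heading into its hierarchical titles.

    Returns None for ordinary text lines, so callers can tell
    headings apart from article body text.
    """
    prefix = "Section::::"
    if not line.startswith(prefix):
        return None
    return line[len(prefix):].split(":")

print(parse_section_heading("Section::::Career:Tesco"))  # ['Career', 'Tesco']
print(parse_section_heading("Horner joined Tesco in 1999."))  # None
```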
### Data Fields
- `title`: a `string` feature.
- `text`: a `string` feature.
### How to use this dataset
To load this dataset you need to install these first:
```shell
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
dataset = load_dataset("intfloat/wikipedia", language="en", date="20230401")
```
For more information,
please check out [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia).
## Supported Languages
```
aa
ab
ace
ady
af
ak
als
am
an
ang
ar
arc
arz
as
ast
atj
av
ay
az
azb
ba
bar
bat-smg
bcl
be
be-x-old
bg
bh
bi
bjn
bm
bn
bo
bpy
br
bs
bug
bxr
ca
cbk-zam
cdo
ce
ceb
ch
cho
chr
chy
ckb
co
cr
crh
cs
csb
cu
cv
cy
da
de
din
diq
dsb
dty
dv
dz
ee
el
eml
en
eo
es
et
eu
ext
fa
ff
fi
fiu-vro
fj
fo
fr
frp
frr
fur
fy
ga
gag
gan
gd
gl
glk
gn
gom
gor
got
gu
gv
ha
hak
haw
he
hi
hif
ho
hr
hsb
ht
hu
hy
ia
id
ie
ig
ii
ik
ilo
inh
io
is
it
iu
ja
jam
jbo
jv
ka
kaa
kab
kbd
kbp
kg
ki
kj
kk
kl
km
kn
ko
koi
krc
ks
ksh
ku
kv
kw
ky
la
lad
lb
lbe
lez
lfn
lg
li
lij
lmo
ln
lo
lrc
lt
ltg
lv
mai
map-bms
mdf
mg
mh
mhr
mi
min
mk
ml
mn
mr
mrj
ms
mt
mus
mwl
my
myv
mzn
na
nah
nap
nds
nds-nl
ne
new
ng
nl
nn
no
nov
nrm
nso
nv
ny
oc
olo
om
or
os
pa
pag
pam
pap
pcd
pdc
pfl
pi
pih
pl
pms
pnb
pnt
ps
pt
qu
rm
rmy
rn
ro
roa-rup
roa-tara
ru
rue
rw
sa
sah
sat
sc
scn
sco
sd
se
sg
sh
si
simple
sk
sl
sm
sn
so
sq
sr
srn
ss
st
stq
su
sv
sw
szl
ta
tcy
te
tet
tg
th
ti
tk
tl
tn
to
tpi
tr
ts
tt
tum
tw
ty
tyv
udm
ug
uk
ur
uz
ve
vec
vep
vi
vls
vo
wa
war
wo
wuu
xal
xh
xmf
yi
yo
za
zea
zh
zh-classical
zh-min-nan
zh-yue
zu
``` | 4,624 | [
[
-0.0021266937255859375,
-0.0033588409423828125,
0.01425933837890625,
-0.01117706298828125,
-0.01763916015625,
-0.006168365478515625,
0.0111236572265625,
-0.0244598388671875,
0.036956787109375,
0.037017822265625,
-0.03363037109375,
-0.045013427734375,
-0.05090332... |
roneneldan/TinyStoriesInstruct | 2023-05-18T21:20:35.000Z | [
"region:us"
] | roneneldan | null | null | 18 | 50 | 2023-05-12T23:44:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
llm-book/jawiki-paragraphs | 2023-06-03T03:04:43.000Z | [
"size_categories:1M<n<10M",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | llm-book | null | null | 0 | 50 | 2023-06-03T03:04:05 | ---
language:
- ja
size_categories:
- 1M<n<10M
license:
- cc-by-sa-3.0
- gfdl
dataset_info:
features:
- name: id
dtype: string
- name: pageid
dtype: int64
- name: revid
dtype: int64
- name: paragraph_index
dtype: int64
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
- name: html_tag
dtype: string
splits:
- name: train
num_bytes: 4417130987
num_examples: 9668476
download_size: 1489512230
dataset_size: 4417130987
---
# Dataset Card for llm-book/jawiki-paragraphs
This is a dataset of Wikipedia paragraphs used in the book 『大規模言語モデル入門』 (*Introduction to Large Language Models*).
It is built from the dataset published in the GitHub repository [singletongue/wikipedia-utils](https://github.com/singletongue/wikipedia-utils).
## Licence
The Wikipedia content used in this dataset is distributed under the [Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html).
| 958 | [
[
-0.04180908203125,
-0.054534912109375,
0.0173492431640625,
0.00815582275390625,
-0.0653076171875,
-0.023406982421875,
-0.006679534912109375,
-0.0038776397705078125,
0.01415252685546875,
0.036468505859375,
-0.060760498046875,
-0.0760498046875,
-0.0148391723632812... |
HydraLM/physics_dataset_standardized | 2023-07-27T17:17:05.000Z | [
"region:us"
] | HydraLM | null | null | 2 | 50 | 2023-07-27T17:16:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
PetraAI/PetraAI | 2023-09-14T21:04:52.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | PetraAI | null | null | 2 | 50 | 2023-08-01T01:34:38 | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- fill-mask
- sentence-similarity
- text-to-speech
- automatic-speech-recognition
- audio-to-audio
- audio-classification
- voice-activity-detection
- depth-estimation
- image-classification
- object-detection
- image-segmentation
- text-to-image
- image-to-text
- image-to-image
- unconditional-image-generation
- video-classification
- reinforcement-learning
- robotics
- tabular-classification
- tabular-regression
- tabular-to-text
- table-to-text
- multiple-choice
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
language:
- ar
- en
tags:
- chemistry
- biology
- finance
- legal
- music
- art
- code
- climate
- medical
pretty_name: PETRA
size_categories:
- 1M<n<10M
---
# PETRA
## Overview
PETRA is a multilingual dataset for training and evaluating AI systems on a diverse range of tasks across multiple modalities. It contains data in Arabic and English for tasks including translation, summarization, question answering, and more.
## Dataset Structure
- Data is separated by language into `/ar` and `/en` directories
- Within each language directory, data is separated by task into subdirectories
- Tasks include:
- Translation
- Summarization
- Conversational
- Feature extraction
- Zero-shot classification
- Text generation
- Fill mask
- Sentence similarity
- Text-to-speech
- Automatic speech recognition
- Text classification
- Token classification
- Table question answering
- Question answering
- Text2text generation
- Audio-to-audio
- Audio classification
- Voice activity detection
- Depth estimation
- Image classification
- Object detection
- Image segmentation
- Text-to-image
- Image-to-text
- Image-to-image
- Unconditional image generation
- Reinforcement learning
- Video classification
- Robotics
- Tabular classification
- Tabular regression
- Table-to-text
- Multiple choice
- Text retrieval
- Tabular-to-text
- Text-to-video
- Time series forecasting
- Visual question answering
- Zero-shot image classification
- Graph ML
## Dataset Tags
- code
- art
- chemistry
- biology
- finance
- legal
- music
- climate
- medical
## Dataset Size
1M < n < 10M samples
## Licenses
Apache 2.0
## Citation
If you use this dataset, please cite it as:
[cite paper, arXiv, etc]
@article{PetraAI2022PetraAI,
title={PetraAI: A Massive Multilingual Dataset for Machine Learning},
author={First Last and First Last},
journal={arXiv},
year={2022},
url={https://huggingface.co/datasets/PetraAI/PetraAI}
}
## Contact
For any questions, please reach out to [shadilytn@gmail.com]
# Dataset Cards
## What are Dataset Cards?
Each dataset may be documented by the `README.md` file in the repository. This file is called a **dataset card**, and the Hugging Face Hub will render its contents on the dataset’s main page. To inform users about how to responsibly use the data, it’s a good idea to include information about any potential biases within the dataset. Generally, dataset cards help users understand the contents of the dataset and give context for how the dataset should be used.
You can also add dataset metadata to your card. The metadata describes important information about a dataset such as its license, language, and size. It also contains tags to help users discover a dataset on the Hub. Tags are defined in a YAML metadata section at the top of the `README.md` file.
## Dataset card metadata
A dataset repo will render its README.md as a dataset card. To control how the Hub displays the card, you should create a YAML section in the README file to define some metadata. Start by adding three --- at the top, then include all of the relevant metadata, and close the section with another group of --- like the example below:
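As a sketch of that delimiter structure, the front-matter block is simply the text between the opening pair of `---` lines. A small Python helper (ours, purely illustrative, not part of any Hub API) can pull it out of a card:

```python
import re

def extract_front_matter(readme_text):
    """Return the YAML front-matter block between the opening pair of
    `---` delimiters, or None when the card has no metadata section."""
    match = re.match(r"^---\n(.*?)\n---\n", readme_text, flags=re.DOTALL)
    return match.group(1) if match else None

# Toy card text, echoing the metadata fields used in this card.
card = "---\nlicense: apache-2.0\nlanguage:\n- ar\n- en\n---\n# PETRA\n"
print(extract_front_matter(card))
```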
The metadata that you add to the dataset card enables certain interactions on the Hub. For example:
- Allow users to filter and discover datasets at https://huggingface.co/datasets.
- If you choose a license using the keywords listed in the right column of this table, the license will be displayed on the dataset page.
When creating a README.md file in a dataset repository on the Hub, use the Metadata UI to fill in the main metadata.
To see metadata fields, see the detailed dataset card metadata specification here.
### Dataset card creation guide
For a step-by-step guide on creating a dataset card, check out the Create a dataset card guide.
Reading through existing dataset cards, such as the ELI5 dataset card, is a great way to familiarize yourself with the common conventions.
### Linking a Paper
If the dataset card includes a link to a paper on arXiv, the Hub will extract the arXiv ID and include it in the dataset tags with the format `arxiv:<PAPER ID>`. Clicking on the tag will let you:
- Visit the Paper page
- Filter for other models on the Hub that cite the same paper.
Read more about paper pages here.
https://huggingface.co/docs/hub/paper-pages | 5,299 | [
[
-0.0509033203125,
-0.044677734375,
0.0032100677490234375,
0.0265655517578125,
-0.01068115234375,
-0.00820159912109375,
-0.004901885986328125,
-0.0260467529296875,
0.0233306884765625,
0.041290283203125,
-0.04815673828125,
-0.07257080078125,
-0.044677734375,
-... |
hugcyp/LCSTS | 2023-08-16T02:48:38.000Z | [
"region:us"
] | hugcyp | null | null | 1 | 50 | 2023-08-16T01:59:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jingwora/unstructured-data-multilingual | 2023-08-19T03:46:36.000Z | [
"region:us"
] | jingwora | null | null | 0 | 50 | 2023-08-18T08:23:41 | ---
dataset_info:
features:
- name: language
dtype: string
- name: id
dtype: string
- name: product_id
dtype: string
- name: category
dtype: string
- name: sub_category
dtype: string
- name: product_name
dtype: string
- name: product_detail
dtype: string
- name: image_files
dtype: string
- name: review
dtype: string
- name: star
dtype: string
- name: sentiment
dtype: string
splits:
- name: en
num_bytes: 11790
num_examples: 24
- name: ja
num_bytes: 10499
num_examples: 24
- name: th
num_bytes: 12716
num_examples: 24
download_size: 34282
dataset_size: 35005
configs:
- config_name: default
data_files:
- split: en
path: data/en-*
- split: ja
path: data/ja-*
- split: th
path: data/th-*
---
# Dataset Card for "unstructured-data-multilingual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,001 | [
[
-0.02789306640625,
-0.026214599609375,
0.01065826416015625,
0.02960205078125,
-0.019622802734375,
0.0225067138671875,
-0.01654052734375,
-0.01531219482421875,
0.056243896484375,
0.032012939453125,
-0.04864501953125,
-0.064453125,
-0.041595458984375,
-0.00907... |
allenai/ValuePrism | 2023-09-08T23:05:50.000Z | [
"size_categories:100K<n<1M",
"language:en",
"not-for-all-audiences",
"arxiv:2309.00779",
"arxiv:2304.03738",
"region:us"
] | allenai | null | null | 2 | 50 | 2023-08-22T20:08:41 | ---
configs:
- config_name: full
data_files: full/*csv
default: true
- config_name: mixture
data_files:
- split: train
path: mixture/*train.csv
- split: val
path: mixture/*val.csv
- split: test
path: mixture/*test.csv
- config_name: generative
data_files:
- split: train
path: generative/*train.csv
- split: val
path: generative/*val.csv
- split: test
path: generative/*test.csv
- config_name: relevance
data_files:
- split: train
path: relevance/*train.csv
- split: val
path: relevance/*val.csv
- split: test
path: relevance/*test.csv
- config_name: explanation
data_files:
- split: train
path: explanation/*train.csv
- split: val
path: explanation/*val.csv
- split: test
path: explanation/*test.csv
- config_name: valence
data_files:
- split: train
path: valence/*train.csv
- split: val
path: valence/*val.csv
- split: test
path: valence/*test.csv
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
pretty_name: ValuePrism
extra_gated_prompt: >-
Access to this dataset is automatically granted upon accepting the [**AI2
ImpACT License - Medium Risk Artifacts (“MR
Agreement”)**](https://allenai.org/licenses/impact-mr) and completing all
fields below.
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the medium risk artifact(s): text
I UNDERSTAND that the dataset is intended for research purposes and not for real-world use-cases: checkbox
I AGREE to the terms and conditions of the MR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
tags:
- not-for-all-audiences
size_categories:
- 100K<n<1M
---
# Dataset Card for ValuePrism
## Dataset Description
- **Paper:** https://arxiv.org/abs/2309.00779
- **Demo:** https://kaleido.allen.ai
- **Repository:** https://github.com/tsor13/kaleido
- **Datasheet for Datasets:** https://drive.google.com/file/d/1zDWvO0NljqxBMfDAGW7Jx60Iw54bjsEE/view?usp=sharing
- **License:** https://allenai.org/licenses/impact-mr
- **Point of Contact:** [Taylor Sorensen](mailto:tsor13@cs.washington.edu)
### Dataset Summary
ValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value pluralistic modeling (e.g., [Kaleido](https://huggingface.co/tsor13/kaleido-xl)). It contains human-written situations and machine-generated candidate values, rights, and duties, along with their valences and post-hoc explanations relating them to the situations.
For additional documentation, see ValuePrism's [Datasheet](https://drive.google.com/file/d/1zDWvO0NljqxBMfDAGW7Jx60Iw54bjsEE/view?usp=sharing).
The dataset was created and intended for research purposes. It is openly released under AI2’s ImpACT license as a medium risk artifact.
### Supported Tasks
The dataset supports 4 tasks:
- **Generation (open-text)**
*What values, rights, and duties are relevant for a situation?*
Generate a value, right, or duty
that could be considered when reasoning about the action. Values are generated one at a time, as opposed to a batch.
- **Relevance (2-way classification)**
*Is a value relevant for a situation?* Some values are more relevant than others.
- **Valence (3-way classification)**
*Does the value support or oppose the action, or might it depend on context?*
Disentangling the valence is critical for understanding how plural considerations may interact with a decision.
- **Explanation (open-text)**
*How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation.
### Languages
All data is in English.
## Dataset Structure
### Dataset Splits
There are 6 data configurations:
- `full`: The full structured dataset of situations paired with values, rights, and duties paired with GPT-4. Only one split with all of the data.
- `generative`: Generative task train, val, and test splits.
- `relevance`: Relevance task train, val, and test splits.
- `valence`: Valence task train, val, and test splits.
- `explanation`: Explanation task train, val, and test splits.
- `mixture`: Generative, relevance, valence, and explanation tasks combined with train, val, and test splits.
### Data Fields
While different configurations have different fields, these are all the corresponding fields in the dataset:
- `situation` (string): A one-sentence description of a particular scenario or situation. For example, "buying some chocolate for my grandparents".
- `vrd` (string): Type of instance, either "Value", "Right", or "Duty".
- `text` (string): The text of the value, right, or duty. For example, "Honesty", "Right to property", "Duty to protect".
- `explanation` (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, "Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them."
- `valence` (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. Either "Supports", "Opposes", or "Either".
- `input` (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model.
- `output` (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model.
### Data Splits
All configurations (except for the raw outputs in `full`) have 80%/10%/10% train/validation/test splits.
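The 80%/10%/10% arithmetic above can be sketched in a few lines of plain Python (illustrative only; the released splits are fixed files, not something users re-derive):

```python
def split_80_10_10(items):
    """Partition a sequence into 80/10/10 train/validation/test slices."""
    n = len(items)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# Indices stand in for dataset examples here.
train, val, test = split_80_10_10(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```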
## Dataset Creation
### Source Data
#### Data Collection
Situations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine generated by GPT-4.
#### Who are the source language producers?
The situations are sourced from users of the Delphi user demo, for whom we do not have demographic information.
### Personal and Sensitive Information
There is no personal or sensitive information in ValuePrism.
## Considerations for Using the Data
### Social Impact of Dataset
We intend the dataset to be used to enable research and not to be used for real-world use or decision-making.
### Discussion of Biases
The value, right, and duty data was generated by GPT-4, which is known to exhibit [biases](https://arxiv.org/pdf/2304.03738.pdf). Thus, we expect ValuePrism to inherit biases from GPT-4.
That being said, we have tried to prompt the model to output a diversity of values in an attempt to mitigate bias with breadth.
## Additional Information
91% of values, rights, and duties were marked as high-quality by 3/3 annotators, and 87% of valence scores were marked as correct by 3/3 annotators.
Additionally, we perform a human study on the data and do not find large disparities in agreement between demographic groups tested, although future work in this area is a promising direction.
See [our paper](https://arxiv.org/abs/2309.00779) for more details and analysis.
### Licensing Information
ValuePrism is made available under the [**AI2
ImpACT License - Medium Risk Artifacts (“MR
Agreement”)**](https://allenai.org/licenses/impact-mr)
### Citation Information
Please cite [our paper](https://arxiv.org/abs/2309.00779) when using this dataset:
```
@misc{sorensen2023value,
title={Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties},
author={Taylor Sorensen and Liwei Jiang and Jena Hwang and Sydney Levine and Valentina Pyatkin and Peter West and Nouha Dziri and Ximing Lu and Kavel Rao and Chandra Bhagavatula and Maarten Sap and John Tasioulas and Yejin Choi},
year={2023},
eprint={2309.00779},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### Raw Dataset Statistics
The total, number of unique, and average number of generated values, rights, and duties per situation are shown.
| **Type** | **Total** | **Unique** | **Per Situation** |
|--------------|-----------|------------|--------------------|
| **Situations** | 31.0k | 31.0k | 1 |
| **Values** | 97.7k | 4.2k | 3.15 |
| **Rights** | 49.0k | 4.6k | 1.58 |
| **Duties** | 71.6k | 12.8k | 2.31 |
#### Task Dataset Statistics
| | **Relevance** | **Valence** | **Generation** | **Explanation** | **Mixture** |
|---------------|------------|-------------|----------|-----------|-------------|
| **Train** | 349k | 175k | 175k | 175k | 874k |
| **Val** | 44k | 22k | 22k | 22k | 109k |
| **Test** | 44k | 22k | 22k | 22k | 109k |
| **Total** | 437k | 219k | 219k | 219k | 1.1M | | 9,129 | [
[
-0.0188140869140625,
-0.02276611328125,
0.01702880859375,
0.01245880126953125,
-0.0189971923828125,
-0.0213470458984375,
0.0035877227783203125,
-0.02313232421875,
0.004070281982421875,
0.032501220703125,
-0.0440673828125,
-0.037872314453125,
-0.048583984375,
... |
vladisha3000/Icons | 2023-08-31T14:15:43.000Z | [
"region:us"
] | vladisha3000 | null | null | 0 | 50 | 2023-08-31T14:05:13 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2195425.0
num_examples: 999
download_size: 2268449
dataset_size: 2195425.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Icons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 469 | [
[
-0.047088623046875,
-0.0083160400390625,
0.01116180419921875,
0.0094451904296875,
-0.00873565673828125,
0.0082244873046875,
0.0282745361328125,
-0.024627685546875,
0.06585693359375,
0.0299224853515625,
-0.058441162109375,
-0.0513916015625,
-0.04461669921875,
... |
open-llm-leaderboard/details_tiiuae__falcon-180B | 2023-10-24T10:18:04.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 1 | 50 | 2023-09-05T08:24:35 | ---
pretty_name: Evaluation run of tiiuae/falcon-180B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tiiuae/falcon-180B](https://huggingface.co/tiiuae/falcon-180B) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 66 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 32 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-180B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T10:17:51.759984](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-180B/blob/main/results_2023-10-24T10-17-51.759984.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0028313758389261743,\n\
\ \"em_stderr\": 0.0005441551135493806,\n \"f1\": 0.06573301174496615,\n\
\ \"f1_stderr\": 0.0013666874377791776,\n \"acc\": 0.6642104078991223,\n\
\ \"acc_stderr\": 0.011605139145295384\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493806,\n\
\ \"f1\": 0.06573301174496615,\n \"f1_stderr\": 0.0013666874377791776\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45943896891584535,\n \
\ \"acc_stderr\": 0.01372709301042978\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8689818468823993,\n \"acc_stderr\": 0.009483185280160986\n\
\ }\n}\n```"
repo_url: https://huggingface.co/tiiuae/falcon-180B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|arc:challenge|25_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|arc:challenge|25_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|arc:challenge|25_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|arc:challenge|25_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|arc:challenge|25_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T09_30_46.601936
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-30-46.601936.parquet'
- split: 2023_09_25T09_42_43.006060
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-42-43.006060.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-42-43.006060.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|drop|3_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|drop|3_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T10-17-51.759984.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|gsm8k|5_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|gsm8k|5_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T10-17-51.759984.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hellaswag|10_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hellaswag|10_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hellaswag|10_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hellaswag|10_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hellaswag|10_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T11_16_10.146827
path:
- '**/details_harness|hellaswag|10_2023-09-25T11-16-10.146827.parquet'
- split: 2023_09_25T11_28_53.879118
path:
- '**/details_harness|hellaswag|10_2023-09-25T11-28-53.879118.parquet'
- split: 2023_09_25T13_20_00.898508
path:
- '**/details_harness|hellaswag|10_2023-09-25T13-20-00.898508.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-25T13-20-00.898508.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T09_49_01.514206
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T09-49-01.514206.parquet'
- split: 2023_09_25T09_57_43.547983
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T09-57-43.547983.parquet'
- split: 2023_09_25T10_06_12.822356
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T10-06-12.822356.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T10-06-12.822356.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|winogrande|5_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|winogrande|5_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T10-17-51.759984.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T14-54-28.631498.parquet'
- split: 2023_09_21T15_14_19.361952
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T15-14-19.361952.parquet'
- split: 2023_09_22T15_08_20.868776
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-08-20.868776.parquet'
- split: 2023_09_22T15_09_58.434868
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-09-58.434868.parquet'
- split: 2023_09_22T15_40_03.532661
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-40-03.532661.parquet'
- split: 2023_09_22T19_13_36.680152
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-13-36.680152.parquet'
- split: 2023_09_22T19_25_51.687929
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-25-51.687929.parquet'
- split: 2023_09_22T19_38_30.055713
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-38-30.055713.parquet'
- split: 2023_09_22T19_56_14.188877
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-56-14.188877.parquet'
- split: 2023_09_22T20_44_00.745184
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T20-44-00.745184.parquet'
- split: 2023_09_22T21_16_36.510313
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-16-36.510313.parquet'
- split: 2023_09_22T21_30_38.663736
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-30-38.663736.parquet'
- split: 2023_09_22T21_39_07.387549
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-39-07.387549.parquet'
- split: 2023_09_22T21_46_48.392874
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-46-48.392874.parquet'
- split: 2023_09_22T22_06_13.624503
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-06-13.624503.parquet'
- split: 2023_09_22T22_21_06.865348
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-21-06.865348.parquet'
- split: 2023_09_23T09_44_24.946036
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T14-54-28.631498.parquet'
- split: 2023_09_21T15_14_19.361952
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T15-14-19.361952.parquet'
- split: 2023_09_22T15_08_20.868776
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-08-20.868776.parquet'
- split: 2023_09_22T15_09_58.434868
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-09-58.434868.parquet'
- split: 2023_09_22T15_40_03.532661
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-40-03.532661.parquet'
- split: 2023_09_22T19_13_36.680152
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-13-36.680152.parquet'
- split: 2023_09_22T19_25_51.687929
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-25-51.687929.parquet'
- split: 2023_09_22T19_38_30.055713
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-38-30.055713.parquet'
- split: 2023_09_22T19_56_14.188877
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-56-14.188877.parquet'
- split: 2023_09_22T20_44_00.745184
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T20-44-00.745184.parquet'
- split: 2023_09_22T21_16_36.510313
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-16-36.510313.parquet'
- split: 2023_09_22T21_30_38.663736
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-30-38.663736.parquet'
- split: 2023_09_22T21_39_07.387549
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-39-07.387549.parquet'
- split: 2023_09_22T21_46_48.392874
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-46-48.392874.parquet'
- split: 2023_09_22T22_06_13.624503
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-06-13.624503.parquet'
- split: 2023_09_22T22_21_06.865348
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-21-06.865348.parquet'
- split: 2023_09_23T09_44_24.946036
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- config_name: results
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- results_2023-09-21T14-54-28.631498.parquet
- split: 2023_09_21T15_14_19.361952
path:
- results_2023-09-21T15-14-19.361952.parquet
- split: 2023_09_22T15_08_20.868776
path:
- results_2023-09-22T15-08-20.868776.parquet
- split: 2023_09_22T15_09_58.434868
path:
- results_2023-09-22T15-09-58.434868.parquet
- split: 2023_09_22T15_40_03.532661
path:
- results_2023-09-22T15-40-03.532661.parquet
- split: 2023_09_22T19_13_36.680152
path:
- results_2023-09-22T19-13-36.680152.parquet
- split: 2023_09_22T19_25_51.687929
path:
- results_2023-09-22T19-25-51.687929.parquet
- split: 2023_09_22T19_38_30.055713
path:
- results_2023-09-22T19-38-30.055713.parquet
- split: 2023_09_22T19_56_14.188877
path:
- results_2023-09-22T19-56-14.188877.parquet
- split: 2023_09_22T20_44_00.745184
path:
- results_2023-09-22T20-44-00.745184.parquet
- split: 2023_09_22T21_16_36.510313
path:
- results_2023-09-22T21-16-36.510313.parquet
- split: 2023_09_22T21_30_38.663736
path:
- results_2023-09-22T21-30-38.663736.parquet
- split: 2023_09_22T21_39_07.387549
path:
- results_2023-09-22T21-39-07.387549.parquet
- split: 2023_09_22T21_46_48.392874
path:
- results_2023-09-22T21-46-48.392874.parquet
- split: 2023_09_22T22_06_13.624503
path:
- results_2023-09-22T22-06-13.624503.parquet
- split: 2023_09_22T22_21_06.865348
path:
- results_2023-09-22T22-21-06.865348.parquet
- split: 2023_09_23T09_44_24.946036
path:
- results_2023-09-23T09-44-24.946036.parquet
- split: 2023_09_25T09_30_46.601936
path:
- results_2023-09-25T09-30-46.601936.parquet
- split: 2023_09_25T09_42_43.006060
path:
- results_2023-09-25T09-42-43.006060.parquet
- split: 2023_09_25T09_49_01.514206
path:
- results_2023-09-25T09-49-01.514206.parquet
- split: 2023_09_25T09_57_43.547983
path:
- results_2023-09-25T09-57-43.547983.parquet
- split: 2023_09_25T10_06_12.822356
path:
- results_2023-09-25T10-06-12.822356.parquet
- split: 2023_09_25T11_16_10.146827
path:
- results_2023-09-25T11-16-10.146827.parquet
- split: 2023_09_25T11_28_53.879118
path:
- results_2023-09-25T11-28-53.879118.parquet
- split: 2023_09_25T13_20_00.898508
path:
- results_2023-09-25T13-20-00.898508.parquet
- split: 2023_10_23T17_29_05.444286
path:
- results_2023-10-23T17-29-05.444286.parquet
- split: 2023_10_24T10_17_51.759984
path:
- results_2023-10-24T10-17-51.759984.parquet
- split: latest
path:
- results_2023-10-24T10-17-51.759984.parquet
---
# Dataset Card for Evaluation run of tiiuae/falcon-180B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tiiuae/falcon-180B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [tiiuae/falcon-180B](https://huggingface.co/tiiuae/falcon-180B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 66 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 32 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-180B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T10:17:51.759984](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-180B/blob/main/results_2023-10-24T10-17-51.759984.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493806,
"f1": 0.06573301174496615,
"f1_stderr": 0.0013666874377791776,
"acc": 0.6642104078991223,
"acc_stderr": 0.011605139145295384
},
"harness|drop|3": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493806,
"f1": 0.06573301174496615,
"f1_stderr": 0.0013666874377791776
},
"harness|gsm8k|5": {
"acc": 0.45943896891584535,
"acc_stderr": 0.01372709301042978
},
"harness|winogrande|5": {
"acc": 0.8689818468823993,
"acc_stderr": 0.009483185280160986
}
}
```
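For quick inspection without downloading the dataset, the aggregated metrics above can also be handled as a plain Python dictionary. A minimal sketch (values copied from the JSON above; the dictionary literal here is illustrative, not an API of the `datasets` library):

```python
# Aggregated metrics from the latest run, copied from the JSON above.
latest_results = {
    "all": {
        "em": 0.0028313758389261743,
        "f1": 0.06573301174496615,
        "acc": 0.6642104078991223,
    },
    "harness|drop|3": {"em": 0.0028313758389261743, "f1": 0.06573301174496615},
    "harness|gsm8k|5": {"acc": 0.45943896891584535},
    "harness|winogrande|5": {"acc": 0.8689818468823993},
}

# Collect per-task accuracies, skipping the "all" aggregate and tasks
# (like drop) that report em/f1 instead of accuracy.
task_acc = {
    task: metrics["acc"]
    for task, metrics in latest_results.items()
    if task != "all" and "acc" in metrics
}

# Task with the highest accuracy in this run.
best_task = max(task_acc, key=task_acc.get)
```

The same per-task dictionaries are what you get back when loading the corresponding config of this dataset with `load_dataset`, as shown earlier in this card.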
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 104,478 | [
[
-0.033782958984375,
-0.050079345703125,
0.015380859375,
0.0169219970703125,
-0.0037841796875,
0.0118560791015625,
-0.0236968994140625,
-0.010528564453125,
0.03619384765625,
0.03851318359375,
-0.053955078125,
-0.06488037109375,
-0.048065185546875,
0.013679504... |
jpawan33/fkr30k-image-captioning-dataset | 2023-09-09T04:17:11.000Z | [
"region:us"
] | jpawan33 | null | null | 1 | 50 | 2023-09-06T19:00:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1625135945.608
num_examples: 31782
download_size: 1621386563
dataset_size: 1625135945.608
---
# Dataset Card for "fkr30k-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 510 | [
[
-0.04351806640625,
0.01383209228515625,
0.0055084228515625,
0.033660888671875,
-0.03741455078125,
0.0080413818359375,
0.0186614990234375,
-0.008514404296875,
0.0338134765625,
0.03448486328125,
-0.0675048828125,
-0.05096435546875,
-0.0374755859375,
0.00076675... |
yzhuang/autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T03:05:45.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 50 | 2023-09-08T03:05:03 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 873878506
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 855 | [
[
-0.0169830322265625,
-0.0183563232421875,
0.0272674560546875,
0.01285552978515625,
-0.01224517822265625,
0.00244140625,
0.0450439453125,
-0.0189208984375,
0.060089111328125,
0.03192138671875,
-0.062469482421875,
-0.045867919921875,
-0.049346923828125,
0.0009... |
yzhuang/autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T04:55:50.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 50 | 2023-09-08T04:55:18 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 667958200
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 872 | [
[
-0.029205322265625,
-0.01258087158203125,
0.01436614990234375,
0.0128936767578125,
-0.01837158203125,
0.0095367431640625,
0.03973388671875,
-0.004123687744140625,
0.0447998046875,
0.03277587890625,
-0.050872802734375,
-0.047882080078125,
-0.0380859375,
0.002... |
MU-NLPC/Calc-mawps | 2023-10-30T15:55:30.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"arxiv:2305.15017",
"region:us"
] | MU-NLPC | null | null | 0 | 50 | 2023-09-08T21:19:20 | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
tags:
- math world problems
- math
- arithmetics
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: expression
dtype: string
splits:
- name: train
num_bytes: 298347
num_examples: 1089
- name: validation
num_bytes: 285321
num_examples: 1040
- name: test
num_bytes: 142648
num_examples: 520
download_size: 0
dataset_size: 726316
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: expression
dtype: string
splits:
- name: train
num_bytes: 1000546
num_examples: 3636
- name: test
num_bytes: 142648
num_examples: 520
- name: validation
num_bytes: 285321
num_examples: 1040
download_size: 128730
dataset_size: 1428515
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: train
path: original-splits/train-*
- split: test
path: original-splits/test-*
- split: validation
path: original-splits/validation-*
---
# Dataset Card for Calc-MAWPS
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from <https://huggingface.co/datasets/omarxadel/MaWPS-ar>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
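The tag format above can be parsed with BeautifulSoup as noted; here is a dependency-free sketch using the standard library's `HTMLParser` instead (the chain string below is a hypothetical illustration, not taken from the dataset):

```python
from html.parser import HTMLParser

class ChainParser(HTMLParser):
    """Collects the contents of gadget/output/result tags from a chain string."""
    def __init__(self):
        super().__init__()
        self.steps = []   # list of (tag, content) pairs in document order
        self._tag = None
    def handle_starttag(self, tag, attrs):
        if tag in ("gadget", "output", "result"):
            self._tag = tag
    def handle_data(self, data):
        if self._tag and data.strip():
            self.steps.append((self._tag, data.strip()))
    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

# Hypothetical chain string in the format described above:
chain = "<gadget>2+2</gadget><output>4</output><result>4</result>"
parser = ChainParser()
parser.feed(chain)
print(parser.steps)  # [('gadget', '2+2'), ('output', '4'), ('result', '4')]
```

The `gadget` contents are what a tool-using model would send to the external calculator, while `result` holds the final answer.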
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
We provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:
```python
datasets.load_dataset("MU-NLPC/calc-mawps", "original-splits")
```
The second one is filtered to prevent data leaks (overly similar examples in train and test/val splits) in between and across datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:
```python
datasets.load_dataset("MU-NLPC/calc-mawps")
```
## Attributes:
- **id**: id of the example
- **question**: problem description in English
- **question_arabic**: problem description in Arabic
- **chain**: series of simple operations (derived from **expression**) that lead to the solution
- **result**: the solution for x as a number or fraction (string)
- **result_float**: same as `result` but converted to a float
- **equation**: an equation that needs to be solved for `x` to obtain the result. Usually in the form of "x = ..." but not always.
- **expression**: arithmetic expression derived from `equation` that solves it for `x`
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original MAWPS dataset**](http://lang.ee.washington.edu/MAWPS)
- [**MAWPS dataset variant in Arabic**](https://huggingface.co/datasets/omarxadel/MaWPS-ar)
- [**original MAWPS paper**](https://aclanthology.org/N16-1136/)
- [**original MAWPS repo**](https://github.com/sroy9/mawps)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of the dataset in research, please cite the original [MAWPS paper](https://aclanthology.org/N16-1136/), and [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
| 5,878 | [
[
-0.038787841796875,
-0.0316162109375,
0.0167999267578125,
0.005466461181640625,
0.00400543212890625,
-0.007648468017578125,
0.009521484375,
-0.0258026123046875,
0.0255889892578125,
0.0306396484375,
-0.052825927734375,
-0.019622802734375,
-0.05438232421875,
0... |
HydraLM/corpus_1_clustered_formatted | 2023-09-17T22:31:14.000Z | [
"region:us"
] | HydraLM | null | null | 0 | 50 | 2023-09-17T22:24:57 | ---
configs:
- config_name: default
data_files:
- split: '0'
path: data/0-*
- split: '1'
path: data/1-*
- split: '2'
path: data/2-*
- split: '3'
path: data/3-*
- split: '4'
path: data/4-*
- split: '5'
path: data/5-*
- split: '6'
path: data/6-*
- split: '7'
path: data/7-*
- split: '8'
path: data/8-*
- split: '9'
path: data/9-*
- split: '10'
path: data/10-*
- split: '11'
path: data/11-*
- split: '12'
path: data/12-*
- split: '13'
path: data/13-*
- split: '14'
path: data/14-*
- split: '15'
path: data/15-*
- split: '16'
path: data/16-*
- split: '17'
path: data/17-*
- split: '18'
path: data/18-*
- split: '19'
path: data/19-*
- split: '20'
path: data/20-*
- split: '21'
path: data/21-*
- split: '22'
path: data/22-*
- split: '23'
path: data/23-*
- split: '24'
path: data/24-*
- split: '25'
path: data/25-*
- split: '26'
path: data/26-*
- split: '27'
path: data/27-*
- split: '28'
path: data/28-*
- split: '29'
path: data/29-*
- split: '30'
path: data/30-*
- split: '31'
path: data/31-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: '0'
num_bytes: 57988271
num_examples: 45617
- name: '1'
num_bytes: 80924315
num_examples: 57017
- name: '2'
num_bytes: 146972588
num_examples: 59271
- name: '3'
num_bytes: 55446301
num_examples: 41544
- name: '4'
num_bytes: 126072016
num_examples: 72587
- name: '5'
num_bytes: 60462897
num_examples: 34080
- name: '6'
num_bytes: 42695954
num_examples: 30203
- name: '7'
num_bytes: 86334809
num_examples: 36365
- name: '8'
num_bytes: 205182212
num_examples: 82654
- name: '9'
num_bytes: 65097365
num_examples: 34266
- name: '10'
num_bytes: 18143136
num_examples: 22221
- name: '11'
num_bytes: 85400025
num_examples: 43502
- name: '12'
num_bytes: 145547717
num_examples: 90729
- name: '13'
num_bytes: 68582287
num_examples: 77149
- name: '14'
num_bytes: 56976092
num_examples: 53042
- name: '15'
num_bytes: 86545425
num_examples: 49714
- name: '16'
num_bytes: 94867422
num_examples: 51517
- name: '17'
num_bytes: 59847974
num_examples: 39622
- name: '18'
num_bytes: 132858143
num_examples: 54708
- name: '19'
num_bytes: 32550229
num_examples: 21282
- name: '20'
num_bytes: 94382189
num_examples: 42830
- name: '21'
num_bytes: 112712389
num_examples: 41104
- name: '22'
num_bytes: 59089685
num_examples: 42586
- name: '23'
num_bytes: 90127682
num_examples: 35260
- name: '24'
num_bytes: 71313692
num_examples: 45451
- name: '25'
num_bytes: 131908904
num_examples: 55974
- name: '26'
num_bytes: 61742004
num_examples: 60773
- name: '27'
num_bytes: 22254025
num_examples: 29582
- name: '28'
num_bytes: 63023032
num_examples: 47177
- name: '29'
num_bytes: 36460715
num_examples: 32707
- name: '30'
num_bytes: 12331184
num_examples: 15399
- name: '31'
num_bytes: 26522434
num_examples: 26952
download_size: 1331217922
dataset_size: 2490363113
---
# Dataset Card for "corpus_1_clustered_formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,541 | [
[
-0.033447265625,
-0.02923583984375,
0.0155029296875,
0.041259765625,
-0.021881103515625,
-0.0007033348083496094,
-0.010498046875,
0.01392364501953125,
0.07177734375,
0.03582763671875,
-0.03887939453125,
-0.08447265625,
-0.045074462890625,
-0.02008056640625,
... |
mmnga/wikipedia-ja-20230720-2k | 2023-09-25T08:20:29.000Z | [
"region:us"
] | mmnga | null | null | 0 | 50 | 2023-09-25T07:51:08 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5492016.948562663
num_examples: 2048
download_size: 3161030
dataset_size: 5492016.948562663
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-ja-20230720-2k"
This is data extracted randomly from [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720), consisting of 2,048 records.
(Translated from Japanese: this is 2k records extracted at random from [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720).)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 836 | [
[
-0.030517578125,
-0.037261962890625,
0.019683837890625,
0.01018524169921875,
-0.03778076171875,
-0.020111083984375,
0.00400543212890625,
-0.0379638671875,
0.0474853515625,
0.0264129638671875,
-0.06353759765625,
-0.0396728515625,
-0.041900634765625,
-0.000334... |
bobbybelajar/Llama2SummaryPlusSentiment | 2023-09-30T06:06:11.000Z | [
"region:us"
] | bobbybelajar | null | null | 0 | 50 | 2023-09-30T06:05:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
dmrau/trec_dl19 | 2023-10-09T13:07:39.000Z | [
"region:us"
] | dmrau | null | null | 0 | 50 | 2023-10-06T12:41:13 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 2194
num_examples: 43
- name: corpus
num_bytes: 2181810
num_examples: 5482
download_size: 1207481
dataset_size: 2184004
---
# Dataset Card for "trec_dl19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 609 | [
[
-0.0465087890625,
-0.0247802734375,
0.0157470703125,
0.017608642578125,
-0.0132598876953125,
0.0231781005859375,
0.0282440185546875,
-0.0180511474609375,
0.056396484375,
0.034698486328125,
-0.0771484375,
-0.06390380859375,
-0.034912109375,
-0.009025573730468... |
augustoperes/mtg_text | 2023-10-18T14:34:55.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | augustoperes | null | null | 0 | 50 | 2023-10-09T16:02:55 | ---
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Magic: The Gathering dataset
This dataset contains the text of all Magic: The Gathering cards.
Example usage:
```python
from datasets import load_dataset
dataset = load_dataset('augustoperes/mtg_text')
dataset
# outputs:
# DatasetDict({
# train: Dataset({
# features: ['card_name', 'type_line', 'oracle_text'],
# num_rows: 20063
# })
# validation: Dataset({
# features: ['card_name', 'type_line', 'oracle_text'],
# num_rows: 5016
# })
# })
```
Elements of the dataset are, for example:
```python
train_dataset = dataset['train']
train_dataset[0]
# Outputs
# {'card_name': 'Recurring Insight',
# 'type_line': 'Sorcery',
# 'oracle_text': "Draw cards equal to the number of cards in target opponent's hand.\nRebound (If you cast this spell from your hand, exile it as it resolves. At the beginning of your next upkeep, you may cast this card from exile without paying its mana cost.)"}
```
# Example usage with PyTorch
You can easily tokenize, convert, and pad this dataset for use in PyTorch with:
```python
from transformers import AutoTokenizer
import torch
from torch.nn.utils.rnn import pad_sequence
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize(sample):
sample["card_name"] = tokenizer(sample["card_name"])["input_ids"]
sample["type_line"] = tokenizer(sample["type_line"])["input_ids"]
sample["oracle_text"] = tokenizer(sample["oracle_text"])["input_ids"]
return sample
tokenized_dataset = train_dataset.map(tokenize)
def collate_fn(sequences):
# Pad the sequences to the maximum length in the batch
card_names = [torch.tensor(sequence['card_name']) for sequence in sequences]
type_line = [torch.tensor(sequence['type_line']) for sequence in sequences]
oracle_text = [torch.tensor(sequence['oracle_text']) for sequence in sequences]
padded_card_name = pad_sequence(card_names, batch_first=True, padding_value=0)
padded_type_line = pad_sequence(type_line, batch_first=True, padding_value=0)
padded_oracle_text = pad_sequence(oracle_text, batch_first=True, padding_value=0)
return {'card_name': padded_card_name, 'type_line': padded_type_line, 'padded_oracle_text': padded_oracle_text}
loader = torch.utils.data.DataLoader(tokenized_dataset, collate_fn=collate_fn, batch_size=4)
for e in loader:
print(e)
break
# Will output:
# {'card_name': tensor([[ 101, 10694, 12369, 102, 0],
# [ 101, 3704, 9881, 102, 0],
# [ 101, 22639, 20066, 7347, 102],
# [ 101, 25697, 1997, 6019, 102]]),
# 'type_line': tensor([[ 101, 2061, 19170, 2854, 102, 0, 0],
# [ 101, 6492, 1517, 4743, 102, 0, 0],
# [ 101, 6492, 1517, 22639, 102, 0, 0],
# [ 101, 4372, 14856, 21181, 1517, 15240, 102]]),
# 'padded_oracle_text': [ommited for readability])}
``` | 2,824 | [
[
-0.024688720703125,
-0.0472412109375,
-0.003879547119140625,
0.0071868896484375,
-0.0296630859375,
-0.0124053955078125,
-0.0126495361328125,
-0.00438690185546875,
0.036407470703125,
0.0287017822265625,
-0.034637451171875,
-0.053924560546875,
-0.04022216796875,
... |
hdparmar/itt_specdata | 2023-10-15T02:12:42.000Z | [
"task_categories:text-to-image",
"task_categories:text-to-audio",
"license:apache-2.0",
"region:us"
] | hdparmar | null | null | 0 | 50 | 2023-10-11T21:00:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6043862350.049
num_examples: 51217
download_size: 6011357718
dataset_size: 6043862350.049
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-to-image
- text-to-audio
pretty_name: Data Irish Traditional Tunes (Spectrogram-Text)
---
# Dataset Card for "itt_specdata"
## 1. Dataset Description
Dataset is used for the following project
- **Homepage:** [Trad-fusion](https://github.com/hdparmar/Tradi-fusion)
### 1.1 Dataset Summary
This dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image has dimensions 512x512 and a single channel.
This single channel can be used to fine-tune or train different models, for example a diffusion model. Since diffusion models typically take 3-channel input, a companion dataset, irish-traditional-tunes, is provided for that purpose.
The single channel also leaves room for experimentation and creativity in the other two channels: for example, the 2nd channel can be the delta, and the 3rd the delta-delta, of the 1st channel's mel spectrogram.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
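The delta/delta-delta idea above can be sketched in plain Python (a dependency-free illustration; in practice a library routine such as `librosa.feature.delta` is the usual choice, and the tiny array below is hypothetical, not a real 512x512 spectrogram):

```python
def delta(mel):
    """Per-row first difference along the time axis, front-padded with 0.0
    so each derived channel keeps the same width as the input."""
    return [[0.0] + [row[t] - row[t - 1] for t in range(1, len(row))]
            for row in mel]

def to_three_channels(mel):
    """Stack a 1-channel mel spectrogram with its delta and delta-delta."""
    d1 = delta(mel)
    d2 = delta(d1)
    return [mel, d1, d2]  # shape (3, H, W)

mel = [[1.0, 3.0, 6.0],
       [2.0, 2.0, 2.0]]
channels = to_three_channels(mel)
print(channels[1])  # [[0.0, 2.0, 3.0], [0.0, 0.0, 0.0]]
```

The resulting 3-channel stack matches the input shape expected by models such as diffusion backbones that consume RGB-like tensors.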
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format:
```
{"file_name": "path/to/the/image.png",
"text": "Irish Traditional Tune"}
```
### 2.2 Data Fields
- **file_name**: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- **text**: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "Irish Traditional Tune." This consistency can be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information is to follow; the same caption is used for all mel spectrograms for ease of producing the dataset.
[
-0.042388916015625,
-0.0010614395141601562,
-0.00014388561248779297,
0.0157470703125,
-0.046356201171875,
0.01485443115234375,
-0.030242919921875,
-0.032928466796875,
0.04595947265625,
0.059722900390625,
-0.04052734375,
-0.07830810546875,
-0.017669677734375,
... |
chirunder/MixAtis_for_DecoderOnly_90-10_split | 2023-10-18T06:10:23.000Z | [
"region:us"
] | chirunder | null | null | 0 | 50 | 2023-10-17T15:01:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13373152.39074139
num_examples: 18002
- name: test
num_bytes: 1486483.6092586112
num_examples: 2001
download_size: 3742589
dataset_size: 14859636.0
---
# Dataset Card for "MixAtis_for_DecoderOnly_90-10_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 661 | [
[
-0.047515869140625,
-0.0118560791015625,
-0.00415802001953125,
0.0382080078125,
-0.0243682861328125,
0.0014600753784179688,
0.0034503936767578125,
-0.01091766357421875,
0.0760498046875,
0.0266876220703125,
-0.064697265625,
-0.035125732421875,
-0.047698974609375,... |
KonstantyM/science_qa_prep | 2023-10-20T21:48:48.000Z | [
"region:us"
] | KonstantyM | null | null | 0 | 50 | 2023-10-20T21:38:48 | ---
dataset_info:
features:
- name: input
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 7447742737
num_examples: 4281664
download_size: 4325444802
dataset_size: 7447742737
---
# Dataset Card for "science_qa_prep"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 404 | [
[
-0.03948974609375,
-0.01367950439453125,
0.034637451171875,
0.0155792236328125,
-0.0195770263671875,
0.0113525390625,
0.037139892578125,
0.0017385482788085938,
0.06927490234375,
0.01232147216796875,
-0.052520751953125,
-0.05279541015625,
-0.0182952880859375,
... |
hlhdatscience/guanaco-spanish-dataset | 2023-10-21T11:19:21.000Z | [
"language:es",
"license:apache-2.0",
"region:us"
] | hlhdatscience | null | null | 0 | 50 | 2023-10-21T10:53:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4384495
num_examples: 2410
- name: test
num_bytes: 376933
num_examples: 223
download_size: 2455040
dataset_size: 4761428
license: apache-2.0
language:
- es
pretty_name: d
---
# Dataset Card for "guanaco-spanish-dataset"
This dataset is a subset of the original timdettmers/openassistant-guanaco, which is in turn a subset of the Open Assistant dataset. You can find it here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 2,633 samples, translated with the help of GPT-3.5 Turbo.
It represents 41% and 42% of the train and test splits of timdettmers/openassistant-guanaco, respectively.
You can find the github repository for the code used here: https://github.com/Hector1993prog/guanaco_translation
For further information, please see the original dataset.
License: Apache 2.0
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,269 | [
[
-0.0207061767578125,
-0.049774169921875,
0.01331329345703125,
0.031280517578125,
-0.0192718505859375,
0.0010280609130859375,
-0.01190185546875,
-0.03131103515625,
0.034393310546875,
0.0247039794921875,
-0.06024169921875,
-0.05938720703125,
-0.04742431640625,
... |
Lornng/cpgQA-textcol-splitted | 2023-10-24T10:33:46.000Z | [
"region:us"
] | Lornng | null | null | 0 | 50 | 2023-10-23T17:19:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rjaiswal/friends-dataset | 2023-10-25T16:32:38.000Z | [
"region:us"
] | rjaiswal | null | null | 0 | 50 | 2023-10-25T09:35:10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 217527103.0
num_examples: 30
download_size: 217511845
dataset_size: 217527103.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "friends-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 484 | [
[
-0.049530029296875,
-0.01424407958984375,
0.0219573974609375,
0.024627685546875,
-0.007633209228515625,
-0.0023193359375,
0.020904541015625,
-0.019012451171875,
0.07818603515625,
0.023284912109375,
-0.0693359375,
-0.0416259765625,
-0.037506103515625,
-0.0065... |
fuliucansheng/pascal_voc | 2022-01-31T14:54:11.000Z | [
"region:us"
] | fuliucansheng | PASCAL_VOC | PASCAL_VOC | 0 | 49 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
persiannlp/parsinlu_sentiment | 2022-10-22T15:13:40.000Z | [
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|translated|mnli",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | persiannlp | A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment). | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | 4 | 49 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|translated|mnli
task_categories:
- sentiment-analysis
task_ids:
- sentiment-analysis
---
# Dataset Card for PersiNLU (Sentiment Analysis)
## Table of Contents
- [Dataset Card for PersiNLU (Sentiment Analysis)](#dataset-card-for-persi_sentiment)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian sentiment analysis dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"review": "خوب بود ولی خیلی گرون شده دیگه...فک نکنم به این قیمت ارزش خرید داشته باشد",
"review_id": "1538",
"example_id": "4",
"excel_id": "food_194",
"question": "نظر شما در مورد بسته بندی و نگهداری این حلوا شکری، ارده و کنجد چیست؟",
"category": "حلوا شکری، ارده و کنجد",
"aspect": "بسته بندی",
"label": "-3",
"guid": "food-dev-r1538-e4"
}
```
### Data Fields
- `review`: the review text.
- `review_id`: a unique id associated with the review.
- `example_id`: a unique id associated with a particular attribute being addressed about the review.
- `question`: a natural language question about a particular attribute.
- `category`: the subject discussed in the review.
- `aspect`: the aspect mentioned in the input question.
- `label`: the overall sentiment towards this particular subject, in the context of the mentioned aspect. Here are the definition of the labels:
```
'-3': 'no sentiment expressed',
'-2': 'very negative',
'-1': 'negative',
'0': 'neutral',
'1': 'positive',
'2': 'very positive',
'3': 'mixed',
```
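The label scheme above can be captured as a simple lookup table for decoding raw records (the example record below is hypothetical, modeled on the data instance shown earlier):

```python
LABELS = {
    "-3": "no sentiment expressed",
    "-2": "very negative",
    "-1": "negative",
    "0": "neutral",
    "1": "positive",
    "2": "very positive",
    "3": "mixed",
}

def decode_label(example):
    """Map a raw string label from the dataset to its human-readable meaning."""
    return LABELS[example["label"]]

example = {"label": "-3", "aspect": "بسته بندی"}
print(decode_label(example))  # no sentiment expressed
```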
### Data Splits
See the data.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
| 4,906 | [
[
-0.047515869140625,
-0.05645751953125,
0.015380859375,
0.0240936279296875,
-0.02288818359375,
0.002620697021484375,
-0.034515380859375,
-0.006420135498046875,
0.03509521484375,
0.0321044921875,
-0.051788330078125,
-0.08013916015625,
-0.0406494140625,
0.02671... |
patriziobellan/PET | 2023-07-05T14:03:24.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"Business Process Management",
"NLP",
"ML",
"DL",
"arxiv:2203.04860",
"region:us"
] | patriziobellan | Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, gateways, actors and flow information. We present our new resource, including a detailed overview of the annotation schema and guidelines, as well as a variety of baselines to benchmark the difficulty and challenges of business process extraction from text. | @inproceedings{DBLP:conf/bpm/BellanADGP22,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
editor = {Cristina Cabanillas and
Niels Frederik Garmann{-}Johnsen and
Agnes Koschmider},
title = {{PET:} An Annotated Dataset for Process Extraction from Natural Language
Text Tasks},
booktitle = {Business Process Management Workshops - {BPM} 2022 International Workshops,
M{\"{u}}nster, Germany, September 11-16, 2022, Revised Selected
Papers},
series = {Lecture Notes in Business Information Processing},
volume = {460},
pages = {315--321},
publisher = {Springer},
year = {2022},
url = {https://doi.org/10.1007/978-3-031-25383-6\_23},
doi = {10.1007/978-3-031-25383-6\_23},
timestamp = {Tue, 14 Feb 2023 09:47:10 +0100},
biburl = {https://dblp.org/rec/conf/bpm/BellanADGP22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/aiia/BellanGDPA22,
author = {Patrizio Bellan and
Chiara Ghidini and
Mauro Dragoni and
Simone Paolo Ponzetto and
Han van der Aa},
editor = {Debora Nozza and
Lucia C. Passaro and
Marco Polignano},
title = {Process Extraction from Natural Language Text: the {PET} Dataset and
Annotation Guidelines},
booktitle = {Proceedings of the Sixth Workshop on Natural Language for Artificial
Intelligence {(NL4AI} 2022) co-located with 21th International Conference
of the Italian Association for Artificial Intelligence (AI*IA 2022),
Udine, November 30th, 2022},
series = {{CEUR} Workshop Proceedings},
volume = {3287},
pages = {177--191},
publisher = {CEUR-WS.org},
year = {2022},
url = {https://ceur-ws.org/Vol-3287/paper18.pdf},
timestamp = {Fri, 10 Mar 2023 16:23:01 +0100},
biburl = {https://dblp.org/rec/conf/aiia/BellanGDPA22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 6 | 49 | 2022-04-14T09:35:11 | ---
license: mit
task_categories:
- token-classification
language:
- en
tags:
- Business Process Management
- NLP
- ML
- DL
pretty_name: PET
size_categories:
- n<1K
---
# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT
# Dataset Card for PET
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Annotation Guidelines](#annotationguidelines)
- [Update](#updates)
- [Loading data](#loadingdata)
## Dataset Description
- **Homepage:** https://pdi.fbk.eu/pet-dataset/
- **Paper:** https://arxiv.org/abs/2203.04860
- **Point of Contact:** [Patrizio Bellan](pbellan@fbk.eu)
### Dataset Summary
Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim to fill this gap and establish the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization aimed at by Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text.
### Supported Tasks and Leaderboards
- Token Classification
- Named Entity Recognition
- Relations Extraction
### Languages
English
## Dataset Structure
Test set to benchmark *Business Process Extraction from Text* approaches.
### Data Instances
#### Token Classification
For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of process elements of the sentence.
Below, an example of data instance.
```
{
"document name":"doc-1.1",
"sentence-ID":1,
"tokens":["Whenever","the","sales","department","receives","an","order",",","a","new","process","instance","is","created","."],
"ner-tags":["O","B-Actor","I-Actor","I-Actor","B-Activity","B-Activity Data","I-Activity Data","O","O","O","O","O","O","O","O"]
}
```
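The IOB2 tags above can be decoded back into labeled entity spans. The following sketch is illustrative only — the `iob2_to_spans` helper is hypothetical, not part of the dataset tooling — applied to the instance shown above:

```python
def iob2_to_spans(tokens, tags):
    """Group IOB2 tags into (label, start, end, text) spans; end is exclusive."""
    spans = []
    label, start = None, None

    def close(end):
        nonlocal label, start
        if label is not None:
            spans.append((label, start, end, " ".join(tokens[start:end])))
        label, start = None, None

    for i, tag in enumerate(tags):
        if tag == "O":
            close(i)
        elif tag.startswith("B-"):
            close(i)
            label, start = tag[2:], i
        elif tag.startswith("I-"):
            if tag[2:] != label:  # tolerate I- without a matching B-: open a new span
                close(i)
                label, start = tag[2:], i
    close(len(tags))
    return spans

tokens = ["Whenever", "the", "sales", "department", "receives", "an", "order",
          ",", "a", "new", "process", "instance", "is", "created", "."]
tags = ["O", "B-Actor", "I-Actor", "I-Actor", "B-Activity", "B-Activity Data",
        "I-Activity Data", "O", "O", "O", "O", "O", "O", "O", "O"]
for span in iob2_to_spans(tokens, tags):
    print(span)
# ('Actor', 1, 4, 'the sales department')
# ('Activity', 4, 5, 'receives')
# ('Activity Data', 5, 7, 'an order')
```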
#### Relations Extraction
For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, a list of tokens representing the words of the document instance, a list of integers representing each word's position within its sentence, a list of *ner tags* (in IOB2 format) representing the annotation of each token, a list of sentence IDs giving, for each token, the number of its sentence, and a list of relations of the document.
Below, an example of data instance.
```
{
"document name": "doc-1.1",
"tokens": ["A", "small", "company",...],
"tokens-IDs": [0, 1, 2, ...],
"ner_tags": ["O", "O", "O", ...],
"sentence-IDs": [0, 0, 0, ...],
"relations": {
"source-head-sentence-ID": [1, 1, 1, ...],
"source-head-word-ID": [4, 4, 4, ...],
"relation-type": ["uses", "flow", "actor recipient", ...],
"target-head-sentence-ID": [1, 2, 1,...],
"target-head-word-ID": [5, 9, 1, ...]
}
}
```
### Data Fields
#### Token Classification
- *document name*: a string used to represent the name of the document.
- *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document.
- *tokens*: a list of strings representing the words of the sentence.
- *ner-tags*: a list of strings representing the annotation for each word.
The allowed **ner-tags** are:
- **O**: An O tag indicates that a token belongs to no chunk.
- **B-Actor**: This tag indicates the beginning of an *Actor* chunk.
- **I-Actor**: This tag indicates that the token is inside an *Actor* chunk.
- **B-Activity**: This tag indicates the beginning of an *Activity* chunk.
- **I-Activity**: This tag indicates that the token is inside an *Activity* chunk.
- **B-Activity Data**: This tag indicates the beginning of an *Activity Data* chunk.
- **I-Activity Data**: This tag indicates that the token is inside an *Activity Data* chunk.
- **B-Further Specification**: This tag indicates the beginning of a *Further Specification* chunk.
- **I-Further Specification**: This tag indicates that the token is inside a *Further Specification* chunk.
- **B-XOR Gateway**: This tag indicates the beginning of a *XOR Gateway* chunk.
- **I-XOR Gateway**: This tag indicates that the token is inside a *XOR Gateway* chunk.
- **B-Condition Specification**: This tag indicates the beginning of a *Condition Specification* chunk.
- **I-Condition Specification**: This tag indicates that the token is inside a *Condition Specification* chunk.
- **B-AND Gateway**: This tag indicates the beginning of an *AND Gateway* chunk.
- **I-AND Gateway**: This tag indicates that the token is inside an *AND Gateway* chunk.
To have a complete explanation of each process element tag please refer to the [research paper](https://arxiv.org/abs/2203.04860) and the [annotation guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf).
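For training a token classifier on these tags, the full label set can be enumerated programmatically. This is a sketch: the label names follow the list above, but the ordering of the list is an assumption, not something fixed by the dataset:

```python
# The 7 process element types annotated in PET (per the tag list above).
ELEMENTS = [
    "Actor", "Activity", "Activity Data", "Further Specification",
    "XOR Gateway", "Condition Specification", "AND Gateway",
]

# "O" plus a B-/I- pair per element type; the ordering here is an assumption.
LABELS = ["O"] + [f"{prefix}-{element}" for element in ELEMENTS for prefix in ("B", "I")]
label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}

print(len(LABELS))  # 15 labels in total
```

Such `label2id`/`id2label` mappings are what most token-classification model configs expect.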
### Relations Extraction
- *document name*: a string used to represent the name of the document.
- *tokens*: a list of strings representing the words of the document.
- *tokens-IDs*: a list of integers representing each word's position within its sentence.
- *ner_tags*: a list of strings representing the annotation for each word (see *ner-tags* above).
- *sentence-IDs*: a list of integers representing the sentence number for each word of the document.
- *relations*: a list of document relations.
- *source-head-sentence-ID*: a list of sentence IDs pointing to the sentence number of the head (first token) of each source entity.
- *source-head-word-ID*: a list of token IDs pointing to the word ID of the head (first token) of each source entity.
- *relation-type*: a list of relation tags.
- *target-head-sentence-ID*: a list of sentence IDs pointing to the sentence number of the head (first token) of each target entity.
- *target-head-word-ID*: a list of token IDs pointing to the word ID of the head (first token) of each target entity.
A relation is defined by the values of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position.
In the following example, the first relation of the first document is shown:
```python
document_1=modelhub_dataset['test'][0]
relation = {
'source-head-sentence-ID': document_1['relations']['source-head-sentence-ID'][0],
'source-head-word-ID': document_1['relations']['source-head-word-ID'][0],
'relation-type': document_1['relations']['relation-type'][0],
'target-head-sentence-ID': document_1['relations']['target-head-sentence-ID'][0],
    'target-head-word-ID': document_1['relations']['target-head-word-ID'][0],
}
print(relation)
```
the output is:
```python
{'relation-type': 'uses',
'source-head-sentence-ID': 1,
'source-head-word-ID': 4,
'target-head-sentence-ID': 1,
'target-head-word-ID': 1}
```
That means:
the entity in sentence *1*, whose head starts at token position *4*, has a *uses* relation with the entity in sentence *1* whose head starts at token position *1*.
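A small helper can resolve such (sentence-ID, word-ID) pairs back to surface tokens using the document-level fields described above. This is an illustrative sketch with made-up data — `head_token` and the sample document are not part of the dataset or its tooling:

```python
def head_token(doc, sentence_id, word_id):
    """Return the surface token at a (sentence, in-sentence word) position."""
    for tok, sid, wid in zip(doc["tokens"], doc["sentence-IDs"], doc["tokens-IDs"]):
        if sid == sentence_id and wid == word_id:
            return tok
    return None

# Illustrative two-sentence document (not real PET content).
doc = {
    "tokens": ["A", "company", "ships", "goods", ".", "It", "sends", "an", "invoice", "."],
    "tokens-IDs": [0, 1, 2, 3, 4, 0, 1, 2, 3, 4],
    "sentence-IDs": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
}
print(head_token(doc, 1, 1))  # -> sends
```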
### Data Splits
The data was not split. It contains the test set only.
## Dataset Creation
### Curation Rationale
Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim to fill this gap and establish the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization aimed at by Business Process Management.
### Source Data
#### Initial Data Collection and Normalization
The dataset construction process has been split in five main phases:
1. Text pre-processing. As the first operation, we checked the content of each document and tokenized it. This initial check was necessary since some of the original texts had been automatically translated into English by the authors of the dataset. The translations were never validated; indeed, several errors were found and fixed.
2. Text Annotation. Each text was annotated following the [guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf). The team was composed of five annotators with high expertise in BPMN. Each document was assigned to three experts who were in charge of identifying all the elements and flows within the document. In this phase, we used the Inception tool to support the annotators.
3. Automatic annotation fixing. After the second phase, we ran an automatic procedure relying on a rule-based script to fix annotations that were not compliant with the guidelines. For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is a missing article within an annotation related to an Actor; in this case, the script included it in the annotation. This phase allowed us to remove possible annotation errors and to obtain annotations compliant with the guidelines.
4. Agreement Computation. Here, we computed agreement scores on the annotations provided by the experts, for each process element and for each relation between pairs of process elements, adopting the methodology proposed in [Hripcsak *et al.*](https://academic.oup.com/jamia/article/12/3/296/812057?login=true). We measured agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1 measure as the number of negative cases on which raters agree grows. Following this methodology, an annotation was considered in agreement among the experts if and only if they captured the same span of words and assigned the same process element tag to the annotation.
5. Reconciliation. The last phase consisted of mitigating disagreements within the annotations provided by the experts. The aim of this phase is to obtain a shared and agreed set of gold standard annotations on each text for both entities and relations. Such entities also enable the generation of the related fully-connected process model flow that can be rendered using, but not limited to, a BPMN diagram. During this last phase, 2 of the 47 documents originally included in the dataset were discarded. These texts were not fully annotated because the annotators were not able to completely understand which process elements were actually included in some specific parts of the text. For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations.
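The span-level agreement of step 4 can be sketched as an F1 score over exact (start, end, tag) matches between two annotators. This is a simplified, hypothetical illustration of the methodology, not the actual script used for the dataset:

```python
def span_f1(ann_a, ann_b):
    """F1 over exact matches of (start, end, tag) annotation triples."""
    a, b = set(ann_a), set(ann_b)
    tp = len(a & b)  # spans both annotators produced identically
    if tp == 0:
        return 0.0
    precision = tp / len(a)
    recall = tp / len(b)
    return 2 * precision * recall / (precision + recall)

# Made-up annotations: the two experts disagree on one span boundary.
ann_a = {(1, 4, "Actor"), (4, 5, "Activity"), (5, 7, "Activity Data")}
ann_b = {(1, 4, "Actor"), (4, 5, "Activity"), (5, 8, "Activity Data")}
print(round(span_f1(ann_a, ann_b), 3))  # -> 0.667
```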
#### Who are the source language producers?
English
### Annotations
#### Annotation process
You can read about the annotation process in the original paper https://arxiv.org/abs/2203.04860
#### Who are the annotators?
Expert Annotators
### Personal and Sensitive Information
No personal or sensitive information issues.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset has no social impact
### Discussion of Biases
No bias found in the dataset
### Other Known Limitations
The baselines proposed in the paper obtained very poor performance on the *Further Specification* and *AND Gateway* elements.
The *AND Gateway* is the least represented process element in this dataset.
The *Further Specification* process element was the most difficult element to annotate.
## Additional Information
### Dataset Curators
- Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy)
- Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy)
- Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy)
- Han van der Aa (University of Mannheim, Mannheim, Germany)
- Simone Ponzetto (University of Mannheim, Mannheim, Germany)
### Licensing Information
### Citation Information
```
@inproceedings{DBLP:conf/aiia/BellanGDPA22,
author = {Patrizio Bellan and
Chiara Ghidini and
Mauro Dragoni and
Simone Paolo Ponzetto and
Han van der Aa},
editor = {Debora Nozza and
Lucia C. Passaro and
Marco Polignano},
title = {Process Extraction from Natural Language Text: the {PET} Dataset and
Annotation Guidelines},
booktitle = {Proceedings of the Sixth Workshop on Natural Language for Artificial
Intelligence {(NL4AI} 2022) co-located with 21th International Conference
of the Italian Association for Artificial Intelligence (AI*IA 2022),
Udine, November 30th, 2022},
series = {{CEUR} Workshop Proceedings},
volume = {3287},
pages = {177--191},
publisher = {CEUR-WS.org},
year = {2022},
url = {https://ceur-ws.org/Vol-3287/paper18.pdf},
timestamp = {Fri, 10 Mar 2023 16:23:01 +0100},
biburl = {https://dblp.org/rec/conf/aiia/BellanGDPA22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/bpm/BellanADGP22,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
editor = {Cristina Cabanillas and
Niels Frederik Garmann{-}Johnsen and
Agnes Koschmider},
title = {{PET:} An Annotated Dataset for Process Extraction from Natural Language
Text Tasks},
booktitle = {Business Process Management Workshops - {BPM} 2022 International Workshops,
M{\"{u}}nster, Germany, September 11-16, 2022, Revised Selected
Papers},
series = {Lecture Notes in Business Information Processing},
volume = {460},
pages = {315--321},
publisher = {Springer},
year = {2022},
url = {https://doi.org/10.1007/978-3-031-25383-6\_23},
doi = {10.1007/978-3-031-25383-6\_23},
timestamp = {Tue, 14 Feb 2023 09:47:10 +0100},
biburl = {https://dblp.org/rec/conf/bpm/BellanADGP22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [Patrizio Bellan](https://pdi.fbk.eu/bellan/) for adding this dataset.
#### <a name="updates"></a>Update
- v1.0.0: Added token classification task
- v1.0.1: Added extraction relation task
- v1.1.0: Fixed minor errors, fixed performs relations
Version 1.1.0 can be found [here](https://huggingface.co/datasets/patriziobellan/PETv11)
## <a name="annotationguidelines"></a>Annotation Guidelines
### Inception Schema
The inception schema can be found [here](https://pdi.fbk.eu/pet/inception-schema.json)
### Annotation Guidelines
The Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded [here](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf)
### Article
The article can be downloaded [here](https://ceur-ws.org/Vol-3287/paper18.pdf)
### Python Interface
A Python interface (beta version) to interact with the dataset can be found [here](https://pypi.org/project/petdatasetreader/)
You can find the **BASELINES**, the annotation data, and a graphical interface to visualize predictions [here](https://github.com/patriziobellan86/PETbaselines)
### Benchmarks
A Python package with benchmarking procedures to test approaches on the PET dataset can be found [here](https://pypi.org/project/petbenchmarks/)
## <a name="loadingdata"></a>Loading data
### Token-classification task
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("patriziobellan/PET", name='token-classification')
```
### Relations-extraction task
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("patriziobellan/PET", name='relations-extraction')
```
| 17,739 | [
[
-0.027435302734375,
-0.06170654296875,
0.032745361328125,
0.01129150390625,
-0.018035888671875,
-0.014984130859375,
-0.0178375244140625,
-0.027587890625,
0.0152130126953125,
0.054901123046875,
-0.059417724609375,
-0.052276611328125,
-0.050750732421875,
0.014... |
juletxara/tydiqa_xtreme | 2022-07-01T19:19:05.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:en",
"language:ar",
"language:bn",
"language:fi",
"l... | juletxara | TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
don’t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
We also include "translate-train" and "translate-test" splits for each non-English language from XTREME (Hu et al., 2020). These splits are the automatic translations from English to each target language used in the XTREME paper [https://arxiv.org/abs/2003.11080]. The "translate-train" split purposefully ignores the non-English TyDiQA-GoldP training data to simulate the transfer learning scenario where original-language data is not available and system builders must rely on labeled English data plus existing machine translation systems. | @article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
} | 1 | 49 | 2022-06-08T10:42:42 | ---
pretty_name: TyDi QA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: tydi-qa
---
# Dataset Card for "tydiqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB
### Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
don’t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
We also include "translate-train" and "translate-test" splits for each non-English language from XTREME (Hu et al., 2020). These splits are the automatic translations from English to each target language used in the XTREME paper [https://arxiv.org/abs/2003.11080]. The "translate-train" split purposefully ignores the non-English TyDiQA-GoldP training data to simulate the transfer learning scenario where original-language data is not available and system builders must rely on labeled English data plus existing machine translation systems.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### primary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"annotations": {
"minimal_answers_end_byte": [-1, -1, -1],
"minimal_answers_start_byte": [-1, -1, -1],
"passage_answer_candidate_index": [-1, -1, -1],
"yes_no_answer": ["NONE", "NONE", "NONE"]
},
"document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
"document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
"document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
"language": "thai",
"passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
"question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```
#### secondary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 55.34 MB
- **Total amount of disk used:** 1918.71 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [394],
"text": ["بطولتين"]
},
"context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
"id": "arabic-2387335860751143628-1",
"question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
"title": "قائمة نهائيات كأس العالم"
}
```
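In this SQuAD-style secondary task, `answer_start` is a character offset into `context`, so the gold answer can be recovered by slicing. A minimal sketch with made-up English data (the real instances are in the dataset's own languages):

```python
# Illustrative instance following the secondary_task schema; not real dataset content.
example = {
    "context": "Uruguay won the FIFA World Cup twice, in 1930 and 1950.",
    "question": "How many times did Uruguay win the World Cup?",
    "answers": {"text": ["twice"], "answer_start": [31]},
}

start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]

# The character slice at answer_start must reproduce the answer text exactly.
assert example["context"][start:start + len(text)] == text
print(text)  # -> twice
```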
### Data Fields
The data fields are the same among all splits.
#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
- `plaintext_start_byte`: a `int32` feature.
- `plaintext_end_byte`: a `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `passage_answer_candidate_index`: a `int32` feature.
- `minimal_answers_start_byte`: a `int32` feature.
- `minimal_answers_end_byte`: a `int32` feature.
- `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | train | validation |
| -------------- | -----: | ---------: |
| primary_task | 166916 | 18670 |
| secondary_task | 49881 | 5077 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
```
@inproceedings{ruder-etal-2021-xtreme,
title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation",
author = "Ruder, Sebastian and
Constant, Noah and
Botha, Jan and
Siddhant, Aditya and
Firat, Orhan and
Fu, Jinlan and
Liu, Pengfei and
Hu, Junjie and
Garrette, Dan and
Neubig, Graham and
Johnson, Melvin",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.802",
doi = "10.18653/v1/2021.emnlp-main.802",
pages = "10215--10245",
}
```
| 10,022 | [
[
-0.051025390625,
-0.049530029296875,
0.01971435546875,
0.00598907470703125,
-0.015594482421875,
0.0019550323486328125,
-0.025238037109375,
-0.0229949951171875,
0.043426513671875,
0.033905029296875,
-0.05755615234375,
-0.06268310546875,
-0.042083740234375,
0.... |
andreagasparini/librispeech_test_only | 2022-07-06T17:26:04.000Z | [
"region:us"
] | andreagasparini | LibriSpeech is a corpus of approximately 1000 hours of read English speech with sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | 1 | 49 | 2022-07-06T17:13:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
CarperAI/pile-v2-small-filtered | 2022-12-06T14:16:11.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:en",
"language:code",
"region:us"
] | CarperAI | null | null | 8 | 49 | 2022-12-06T06:08:44 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["en","code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [pile-v2]() dataset, containing ~1,000 random samples drawn from each of its subsets. The dataset has 255MB of text (code and English).
## Languages
The dataset contains technical text on programming languages and natural language with the following subsets,
- Bible
- TED2020
- PileOfLaw
- StackExchange
- GithubIssues
- Opensubtitles
- USPTO
- S2ORC
- DevDocs
- CodePileReddit2022
- USENET
- GNOME
- ASFPublicMail
- PileV2Reddit2020
- CodePilePosts
- Discourse
- Tanzil
- arXiv
- UbuntuIRC
- PubMed
- CodePileReddit2020
- CodePileReddit2021
- GlobalVoices
- FreeLaw_Options
- PileV2Posts
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("CarperAI/pile-v2-small")
```
### How to use it
You can either load the whole dataset like above, or load a specific subset such as arxiv by specifying the folder directory:
```python
load_dataset("CarperAI/pile-v2-small", data_dir="data/arxiv")
```
| 1,239 | [
[
-0.036376953125,
-0.031524658203125,
-0.0053253173828125,
0.0171661376953125,
-0.029754638671875,
-0.00705718994140625,
0.003612518310546875,
-0.018829345703125,
0.0139007568359375,
0.070556640625,
-0.0214080810546875,
-0.031890869140625,
-0.031585693359375,
... |
oaimli/PeerSum | 2023-10-08T05:31:38.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:2305.01498",
"region:us"
] | oaimli | null | null | 1 | 49 | 2023-05-03T00:12:05 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
pretty_name: PeerSum
size_categories:
- 10K<n<100K
---
This is PeerSum, a multi-document summarization dataset in the peer-review domain. More details can be found in the paper accepted at EMNLP 2023, [Summarizing Multiple Documents with Conversational Structure for Meta-review Generation](https://arxiv.org/abs/2305.01498). The original code and datasets are public on [GitHub](https://github.com/oaimli/PeerSum).
Use the following code to download the dataset with Hugging Face's `datasets` library.
```python
from datasets import load_dataset
peersum_all = load_dataset('oaimli/PeerSum', split='all')
peersum_train = peersum_all.filter(lambda s: s['label'] == 'train')
peersum_val = peersum_all.filter(lambda s: s['label'] == 'val')
peersum_test = peersum_all.filter(lambda s: s['label'] == 'test')
```
The Hugging Face dataset is mainly for multi-document summarization. Each sample contains the following keys:
```
* paper_id: str (a link to the raw data)
* paper_title: str
* paper_abstract, str
* paper_acceptance, str
* meta_review, str
* review_ids, list(str)
* review_writers, list(str)
* review_contents, list(str)
* review_ratings, list(int)
* review_confidences, list(int)
* review_reply_tos, list(str)
* label, str, (train, val, test)
```
You can also download the raw data from [Google Drive](https://drive.google.com/drive/folders/1SGYvxY1vOZF2MpDn3B-apdWHCIfpN2uB?usp=sharing). The raw data contains additional information and can be used for other peer-review analyses.
[
-0.035125732421875,
-0.04217529296875,
0.0215911865234375,
0.0245513916015625,
-0.00141143798828125,
-0.0115509033203125,
0.00662994384765625,
-0.0018243789672851562,
0.042266845703125,
0.0284271240234375,
-0.0538330078125,
-0.04486083984375,
-0.0452880859375,
... |
sharmaarushi17/HPCPerfOpt-MCQA | 2023-11-01T22:51:39.000Z | [
"license:cc",
"region:us"
] | sharmaarushi17 | null | null | 0 | 49 | 2023-07-03T19:54:08 | ---
license: cc
pretty_name: HPCPerfOpt (HPC Performance Optimization Benchmark) # Example: SQuAD
# configs:
# - mcq-single
# - mcq-multiple
# - rodinia-chatgpt-mcq
# dataset_info:
# # features:
# # - name: {feature_name_0} # Example: id
# # dtype: {feature_dtype_0} # Example: int32
# # - name: {feature_name_1} # Example: text
# # dtype: {feature_dtype_1} # Example: string
# # - name: {feature_name_2} # Example: image
# # dtype: {feature_dtype_2} # Example: image
# # Example for SQuAD:
# # - name: id
# # dtype: string
# # - name: title
# # dtype: string
# # - name: context
# # dtype: string
# # - name: question
# # dtype: string
# # - name: answers
# # sequence:
# # - name: text
# # dtype: string
# # - name: answer_start
# # dtype: int32
# config_name: mcq-single # Example for glue: sst2
# splits:
# - name: test # Example: train
# # num_bytes: {split_num_bytes_0} # Example for SQuAD: 79317110
# # num_examples: {split_num_examples_0} # Example for SQuAD: 87599
# # download_size: {dataset_download_size} # Example for SQuAD: 35142551
# # dataset_size: {dataset_size} # Example for SQuAD: 89789763
# # - config_name: mcq-multiple
# # data_files:
# # - split: test
# # path: "mcq-multiple.csv"
# # - config_name: rodinia-chatgpt
# # data_files:
# # - split: test
# # path: "rodinia-chatgpt-mcq.csv"
# task_categories:
# - question-answering
# tags:
# - code
# size_categories:
# - n<1K
---
This dataset contains multiple-choice question-answer pairs.
There are 3 test files, separated on the basis of how they were created:
- `test1.csv`: manually collected from tutorials and similar sources
- `test2.csv`: scraped from the Codee profiling-tool documentation
- `test3.csv`: ChatGPT-generated MCQs (the format still needs updating and the answers need randomizing)
[
-0.045135498046875,
-0.06719970703125,
-0.0012035369873046875,
0.00782012939453125,
0.003246307373046875,
0.01534271240234375,
0.00890350341796875,
0.0033416748046875,
0.02093505859375,
0.07037353515625,
-0.05731201171875,
-0.0133056640625,
0.0012102127075195312... |
nlplabtdtu/health_qa | 2023-08-04T03:52:32.000Z | [
"region:us"
] | nlplabtdtu | null | null | 1 | 49 | 2023-08-04T03:48:52 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.057098388671875,
0.028839111328125,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005046844482421875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.06036376953125,
0... |
thr10/code-ins-python-mix-100k-v1 | 2023-08-31T14:07:20.000Z | [
"region:us"
] | thr10 | null | null | 1 | 49 | 2023-08-31T14:07:11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 227488110.0
num_examples: 163747
download_size: 107640410
dataset_size: 227488110.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code-ins-python-mix-100k-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 467 | [
[
-0.0433349609375,
-0.004810333251953125,
-0.00841522216796875,
0.047943115234375,
-0.003986358642578125,
0.0023212432861328125,
0.0198822021484375,
-0.006702423095703125,
0.06396484375,
0.0335693359375,
-0.058441162109375,
-0.040496826171875,
-0.02777099609375,
... |
yzhuang/autotree_automl_100000_credit_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T19:45:41.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 49 | 2023-09-07T19:45:08 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 725608022
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_credit_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 848 | [
[
-0.0227203369140625,
-0.01427459716796875,
0.01995849609375,
0.023193359375,
-0.012939453125,
0.0157470703125,
0.044158935546875,
-0.0028591156005859375,
0.053680419921875,
0.028564453125,
-0.05609130859375,
-0.043670654296875,
-0.046844482421875,
0.00143337... |
jinaai/big-patent-clustering | 2023-09-26T10:53:01.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | jinaai | null | null | 0 | 49 | 2023-09-13T08:09:11 | ---
license: cc-by-4.0
language:
- en
---
# Big Patent Clustering Dataset
This dataset was created for patent classification. It is derived from the [big patent dataset](https://huggingface.co/datasets/big_patent) but contains only a subset of the original dataset's test set.
The subset contains only patents that are assigned to a single category in the original dataset.
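As an illustrative sketch of the single-category filter described above (the record layout and field names here are assumptions for illustration, not the dataset's real schema):

```python
# Hypothetical records mimicking the original test split; the real
# dataset's label field may differ -- this only illustrates the filter.
test_split = [
    {"id": "p1", "labels": ["a"]},       # single category -> keep
    {"id": "p2", "labels": ["a", "b"]},  # multiple categories -> drop
    {"id": "p3", "labels": ["g"]},       # single category -> keep
]

# Keep only patents assigned to exactly one category.
single_category = [p for p in test_split if len(p["labels"]) == 1]

print([p["id"] for p in single_category])  # ['p1', 'p3']
```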
[
-0.0249481201171875,
-0.02264404296875,
0.0103912353515625,
0.0274505615234375,
-0.0185089111328125,
0.02545166015625,
0.025909423828125,
-0.019256591796875,
0.04986572265625,
0.045318603515625,
-0.036102294921875,
-0.047271728515625,
-0.0269927978515625,
-0... |
NusaCrowd/nergrit | 2023-09-26T12:35:09.000Z | [
"language:ind",
"license:mit",
"named-entity-recognition",
"region:us"
] | NusaCrowd | Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition (NER), Statement Extraction,
and Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).
The Named Entity Recognition contains 18 entities as follow:
'CRD': Cardinal
'DAT': Date
'EVT': Event
'FAC': Facility
'GPE': Geopolitical Entity
'LAW': Law Entity (such as Undang-Undang)
'LOC': Location
'MON': Money
'NOR': Political Organization
'ORD': Ordinal
'ORG': Organization
'PER': Person
'PRC': Percent
'PRD': Product
'QTY': Quantity
'REG': Religion
'TIM': Time
'WOA': Work of Art
'LAN': Language | @misc{Fahmi_NERGRIT_CORPUS_2019,
author = {Fahmi, Husni and Wibisono, Yudi and Kusumawati, Riyanti},
title = {{NERGRIT CORPUS}},
url = {https://github.com/grit-id/nergrit-corpus},
year = {2019}
} | 0 | 49 | 2023-09-26T11:18:07 | ---
license: mit
tags:
- named-entity-recognition
language:
- ind
---
# nergrit
Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition (NER), Statement Extraction,
and Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).
The Named Entity Recognition contains 18 entities as follow:
'CRD': Cardinal
'DAT': Date
'EVT': Event
'FAC': Facility
'GPE': Geopolitical Entity
'LAW': Law Entity (such as Undang-Undang)
'LOC': Location
'MON': Money
'NOR': Political Organization
'ORD': Ordinal
'ORG': Organization
'PER': Person
'PRC': Percent
'PRD': Product
'QTY': Quantity
'REG': Religion
'TIM': Time
'WOA': Work of Art
'LAN': Language
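The tag names above follow a BIOES-style scheme (B-/I-/E-/S- prefixes plus O). As an illustrative sketch (not part of the corpus tooling), such a tag sequence can be decoded into entity spans like this:

```python
def bioes_to_spans(tags):
    """Decode a BIOES tag sequence into (start, end, entity_type) spans, end exclusive."""
    spans = []
    start = None
    for i, tag in enumerate(tags):
        if tag == "O":
            start = None
            continue
        prefix, etype = tag.split("-", 1)
        if prefix == "S":                 # single-token entity
            spans.append((i, i + 1, etype))
            start = None
        elif prefix == "B":               # entity begins here
            start = i
        elif prefix == "E" and start is not None:  # entity ends here
            spans.append((start, i + 1, etype))
            start = None
        # "I" tags simply continue the current entity
    return spans

# Example using entity types from the list above.
tags = ["B-PRS", "E-PRS", "O", "S-GEO"]
print(bioes_to_spans(tags))  # [(0, 2, 'PRS'), (3, 4, 'GEO')]
```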
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@misc{Fahmi_NERGRIT_CORPUS_2019,
author = {Fahmi, Husni and Wibisono, Yudi and Kusumawati, Riyanti},
title = {{NERGRIT CORPUS}},
url = {https://github.com/grit-id/nergrit-corpus},
year = {2019}
}
```
## License
MIT
## Homepage
[https://github.com/grit-id/nergrit-corpus](https://github.com/grit-id/nergrit-corpus)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | 1,344 | [
[
-0.0521240234375,
-0.061248779296875,
-0.0146636962890625,
0.019775390625,
-0.0208892822265625,
0.01387786865234375,
-0.0142669677734375,
-0.031707763671875,
0.04730224609375,
0.04400634765625,
-0.01282501220703125,
-0.03509521484375,
-0.04443359375,
0.03460... |
peterschmidt85/samsum | 2023-09-30T17:06:11.000Z | [
"region:us"
] | peterschmidt85 | null | null | 0 | 49 | 2023-09-30T17:05:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10789305
num_examples: 14732
download_size: 5844166
dataset_size: 10789305
---
# Dataset Card for "samsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 437 | [
[
-0.0323486328125,
0.0019063949584960938,
0.0231475830078125,
0.0115203857421875,
-0.0278167724609375,
-0.00848388671875,
0.0209503173828125,
-0.0078277587890625,
0.07647705078125,
0.036346435546875,
-0.0614013671875,
-0.0579833984375,
-0.050994873046875,
-0.... |
Intuit-GenSRF/jigsaw-toxic-comment | 2023-10-04T23:28:45.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 49 | 2023-10-04T23:28:42 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 64586545
num_examples: 159571
download_size: 41105413
dataset_size: 64586545
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-toxic-comment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 491 | [
[
-0.024261474609375,
-0.0233306884765625,
0.0183563232421875,
0.01493072509765625,
-0.033660888671875,
0.00148773193359375,
0.0282135009765625,
-0.01428985595703125,
0.054962158203125,
0.030975341796875,
-0.05328369140625,
-0.045684814453125,
-0.04669189453125,
... |
nc33/task1 | 2023-10-09T03:18:30.000Z | [
"region:us"
] | nc33 | null | null | 0 | 49 | 2023-10-09T03:17:28 | ---
dataset_info:
config_name: train
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1734562359
num_examples: 1236607
download_size: 288424748
dataset_size: 1734562359
configs:
- config_name: train
data_files:
- split: train
path: train/train-*
---
# Dataset Card for "task1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 500 | [
[
-0.028167724609375,
-0.0194091796875,
0.0121307373046875,
0.025360107421875,
-0.01287078857421875,
-0.012542724609375,
0.0230255126953125,
-0.008026123046875,
0.056304931640625,
0.038055419921875,
-0.080810546875,
-0.049407958984375,
-0.05755615234375,
-0.02... |
miojizzy/genshin_artifact_recognize_datasets | 2023-10-15T14:03:39.000Z | [
"region:us"
] | miojizzy | Monster Hunter Rise images and labels. | null | 0 | 49 | 2023-10-09T06:14:27 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
zhen-dong-nexusflow/multi_cvecpe_apis_nested | 2023-10-27T00:52:47.000Z | [
"region:us"
] | zhen-dong-nexusflow | null | null | 0 | 49 | 2023-10-14T21:00:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
minea/test01 | 2023-10-19T14:54:18.000Z | [
"region:us"
] | minea | null | null | 0 | 49 | 2023-10-19T08:26:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Lajavaness/STS13-fr | 2023-10-19T23:12:40.000Z | [
"region:us"
] | Lajavaness | null | null | 1 | 49 | 2023-10-19T23:12:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
norwegian_ner | 2023-01-25T14:41:45.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:no",
"license:unknown",
"region:us"
] | null | Named entities Recognition dataset for Norwegian. It is
a version of the Universal Dependency (UD) Treebank for both Bokmål and Nynorsk (UDN) where
all proper nouns have been tagged with their type according to the NER tagging scheme. UDN is a converted
version of the Norwegian Dependency Treebank into the UD scheme. | @inproceedings{johansen2019ner,
title={Named-Entity Recognition for Norwegian},
author={Johansen, Bjarte},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics, NoDaLiDa},
year={2019}
} | 0 | 48 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- 'no'
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Norwegian NER
dataset_info:
- config_name: bokmaal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-OTH
'2': I-OTH
'3': E-OTH
'4': S-OTH
'5': B-ORG
'6': I-ORG
'7': E-ORG
'8': S-ORG
'9': B-PRS
'10': I-PRS
'11': E-PRS
'12': S-PRS
'13': B-GEO
'14': I-GEO
'15': E-GEO
'16': S-GEO
splits:
- name: train
num_bytes: 9859760
num_examples: 15696
- name: validation
num_bytes: 1475216
num_examples: 2410
- name: test
num_bytes: 1212939
num_examples: 1939
download_size: 8747760
dataset_size: 12547915
- config_name: nynorsk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-OTH
'2': I-OTH
'3': E-OTH
'4': S-OTH
'5': B-ORG
'6': I-ORG
'7': E-ORG
'8': S-ORG
'9': B-PRS
'10': I-PRS
'11': E-PRS
'12': S-PRS
'13': B-GEO
'14': I-GEO
'15': E-GEO
'16': S-GEO
splits:
- name: train
num_bytes: 9916338
num_examples: 14174
- name: validation
num_bytes: 1257235
num_examples: 1890
- name: test
num_bytes: 1006733
num_examples: 1511
download_size: 8484545
dataset_size: 12180306
- config_name: samnorsk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-OTH
'2': I-OTH
'3': E-OTH
'4': S-OTH
'5': B-ORG
'6': I-ORG
'7': E-ORG
'8': S-ORG
'9': B-PRS
'10': I-PRS
'11': E-PRS
'12': S-PRS
'13': B-GEO
'14': I-GEO
'15': E-GEO
'16': S-GEO
splits:
- name: train
num_bytes: 22508485
num_examples: 34170
- name: validation
num_bytes: 2732419
num_examples: 4300
- name: test
num_bytes: 2219640
num_examples: 3450
download_size: 19133049
dataset_size: 27460544
---
# Dataset Card for Norwegian NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ljos/navnkjenner)
- **Repository:** [Github](https://github.com/ljos/navnkjenner)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jplu](https://github.com/jplu) for adding this dataset. | 6,623 | [
[
-0.038543701171875,
-0.02642822265625,
-0.0031909942626953125,
0.012786865234375,
-0.0256195068359375,
0.004032135009765625,
-0.0227508544921875,
-0.0247650146484375,
0.047760009765625,
0.049774169921875,
-0.05078125,
-0.0711669921875,
-0.039154052734375,
0.... |
alexantonov/chuvash_parallel | 2022-10-24T15:26:28.000Z | [
"multilinguality:translation",
"source_datasets:original",
"language:cv",
"region:us"
] | alexantonov | null | null | 4 | 48 | 2022-03-02T23:29:22 | ---
language:
- cv
multilinguality:
- translation
source_datasets:
- original
task_ids:
- machine-translation
---
# Dataset Description
## Chuvash-Russian parallel corpus
1M parallel sentences, manually aligned.
## Chuvash-English parallel corpus
200K parallel sentences, automatically aligned.
## Contributions
For additional details contact [@AlAntonov](https://github.com/AlAntonov). | 392 | [
[
0.0012826919555664062,
-0.03350830078125,
0.011138916015625,
0.0168609619140625,
-0.016937255859375,
0.01474761962890625,
-0.007633209228515625,
-0.0048828125,
0.03704833984375,
0.03656005859375,
-0.034332275390625,
-0.035980224609375,
-0.025299072265625,
0.... |
aliabd/crowdsourced-speech4 | 2022-01-21T17:36:51.000Z | [
"region:us"
] | aliabd | null | null | 0 | 48 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
teven/c4_15M | 2021-12-06T03:44:05.000Z | [
"region:us"
] | teven | null | null | 1 | 48 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Team-PIXEL/rendered-wikipedia-english | 2022-08-02T14:01:21.000Z | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"license:gfdl",
"arxiv:2207.06991",
"region:us"
] | Team-PIXEL | null | null | 2 | 48 | 2022-05-11T14:52:06 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
pretty_name: Team-PIXEL/rendered-wikipedia-english
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- masked-auto-encoding
- rendered-language-modelling
task_ids:
- masked-auto-encoding
- rendered-language-modeling
paperswithcode_id: null
---
# Dataset Card for Team-PIXEL/rendered-wikipedia-english
## Dataset Description
- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Paper:** [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk)
- **Size of downloaded dataset files:** 125.66 GB
- **Size of the generated dataset:** 125.56 GB
- **Total amount of disk used:** 251.22 GB
### Dataset Summary
This dataset contains the full English Wikipedia from February 1, 2018, rendered into images of 16x8464 resolution.
The original text dataset was built from a [Wikipedia dump](https://dumps.wikimedia.org/). Each example in the original *text* dataset contained the content of one full Wikipedia article with cleaning to strip markdown and unwanted sections (references, etc.). Each *rendered* example contains a subset of one full article. This rendered English Wikipedia was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.
The original Wikipedia text dataset was rendered article-by-article into 11.4M examples containing approximately 2B words in total. The dataset is stored as a collection of 338 parquet files.
It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_wikipedia.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_wikipedia.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the Wikipedia data have not been rendered accurately.
Each example consists of a "pixel_values" field which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer value "num_patches" which stores how many image patches (when splitting the image into 529 non-overlapping patches of resolution 16x16 pixels) in the associated images contain actual text, i.e. are neither blank (fully white) nor are the fully black end-of-sequence patch.
You can load the dataset as follows:
```python
from datasets import load_dataset
# Download the full dataset to disk
load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train")
# Stream the dataset directly from the hub
load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train", streaming=True)
```
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 125.66 GB
- **Size of the generated dataset:** 125.56 GB
- **Total amount of disk used:** 251.22 GB
An example of 'train' looks as follows.
```
{
"pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16
"num_patches": "469"
}
```
### Data Fields
The data fields are the same among all splits.
- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.
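As a quick sanity check on the patch geometry described above (16x8464 images split into non-overlapping 16x16 patches):

```python
# Patch count implied by the card's stated image and patch sizes.
height, width, patch = 16, 8464, 16
num_patches_total = (height // patch) * (width // patch)
print(num_patches_total)  # 529, matching the card
```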
### Data Splits
|train|
|:----|
|11446535|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA) and the GNU Free Documentation License (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes the text.
### Citation Information
```bibtex
@article{rust-etal-2022-pixel,
title={Language Modelling with Pixels},
author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
journal={arXiv preprint},
year={2022},
url={https://arxiv.org/abs/2207.06991}
}
```
### Contact Person
This dataset was added by Phillip Rust.
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip) | 6,572 | [
[
-0.05255126953125,
-0.050018310546875,
0.00888824462890625,
0.002658843994140625,
-0.0193023681640625,
-0.007221221923828125,
-0.034393310546875,
-0.038787841796875,
0.036468505859375,
0.0173492431640625,
-0.04736328125,
-0.0533447265625,
-0.0357666015625,
0... |
SetFit/wsc | 2022-06-10T13:59:09.000Z | [
"region:us"
] | SetFit | null | null | 0 | 48 | 2022-06-10T13:57:36 | # Glue WSC
This dataset is a port of the official [`wsc` dataset](https://huggingface.co/datasets/super_glue) on the Hub.
Note that the test split is not labeled; the label column values are always -1.
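Because the test labels are the -1 placeholder, evaluation code has to treat those rows as unlabeled. A minimal plain-Python sketch of that check (toy records, not the actual `datasets` API):

```python
# Toy test examples carrying the -1 placeholder label described above.
test_examples = [
    {"text": "sentence one", "label": -1},
    {"text": "sentence two", "label": -1},
]

# Any example whose label is -1 carries no gold answer, so evaluation
# code should skip it or treat it as unlabeled.
unlabeled = [ex for ex in test_examples if ex["label"] == -1]
print(f"{len(unlabeled)} unlabeled test examples")
```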
| 200 | [
[
-0.04730224609375,
-0.04608154296875,
-0.0052337646484375,
0.003986358642578125,
-0.01548004150390625,
0.0290374755859375,
0.0241241455078125,
-0.00952911376953125,
0.0814208984375,
0.0177001953125,
-0.0904541015625,
-0.033660888671875,
-0.02685546875,
-0.00... |
Bingsu/namuwiki_20210301_filtered | 2022-10-14T07:49:53.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"langua... | Bingsu | null | null | 4 | 48 | 2022-07-14T02:18:12 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-nc-sa-2.0
multilinguality:
- monolingual
pretty_name: Namuwiki database dump (2021-03-01)
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# Namuwiki database dump (2021-03-01)
## Dataset Description
- **Homepage:** [나무위키:데이터베이스 덤프](https://namu.wiki/w/%EB%82%98%EB%AC%B4%EC%9C%84%ED%82%A4:%EB%8D%B0%EC%9D%B4%ED%84%B0%EB%B2%A0%EC%9D%B4%EC%8A%A4%20%EB%8D%A4%ED%94%84)
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
## Namuwiki
https://namu.wiki/
It is a Korean wiki based on the seed engine, established on April 17, 2015 (KST).
## About dataset
All data from Namuwiki was collected on 2021-03-01. I filtered out data without text (mostly redirecting documents).
You can download the original data converted to csv in [Kaggle](https://www.kaggle.com/datasets/brainer3220/namu-wiki).
## 2022-03-01 dataset
[heegyu/namuwiki](https://huggingface.co/datasets/heegyu/namuwiki)<br>
[heegyu/namuwiki-extracted](https://huggingface.co/datasets/heegyu/namuwiki-extracted)<br>
[heegyu/namuwiki-sentences](https://huggingface.co/datasets/heegyu/namuwiki-sentences)
### License
[CC BY-NC-SA 2.0 KR](https://creativecommons.org/licenses/by-nc-sa/2.0/kr/)
## Data Structure
### Data Instance
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/namuwiki_20210301_filtered")
>>> dataset
DatasetDict({
train: Dataset({
features: ['title', 'text'],
num_rows: 571308
})
})
```
```pycon
>>> dataset["train"].features
{'title': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None)}
```
### Data Size
download: 3.26 GiB<br>
generated: 3.73 GiB<br>
total: 6.99 GiB
### Data Field
- title: `string`
- text: `string`
### Data Splits
| | train |
| ---------- | ------ |
| # of texts | 571308 |
```pycon
>>> dataset["train"][2323]
{'title': '55번 지방도',
'text': '55번 국가지원지방도\n해남 ~ 금산\n시점 전라남도 해남군 북평면 남창교차로\n종점 충청남도 금산군 금산읍 우체국사거리\n총 구간 279.2km\n경유지 전라남도 강진군, 장흥군, 영암군 전라남도 나주시, 화순군 광주광역시 동구, 북구 전라남도 담양군 전라북도 순창군, 정읍시, 완주군 전라북도 임실군, 진안군\n개요\n국가지원지방도 제55호선은 전라남도 해남군에서 출발하여 충청남도 금산군까지 이어지는 대한민국의 국가지원지방도이다.\n전라남도 해남군 북평면 - 전라남도 강진군 도암면 구간은 광주광역시, 전라남도 동부권, 영남 지방에서 완도군 완도읍으로 갈 때 주로 이용된다.] 해남 - 완도구간이 확장되기 전에는 그랬다. 강진군, 장흥군은 예외]\n노선\n전라남도\n해남군\n백도로\n북평면 남창교차로에서 13번 국도, 77번 국도와 만나며 출발한다.\n쇄노재\n북일면 북일초교 앞에서 827번 지방도와 만난다.\n강진군\n백도로\n도암면소재지 사거리에서 819번 지방도와 만난다. 819번 지방도는 망호선착장까지만 길이 있으며, 뱃길을 통해 간접적으로 바다 건너의 819번 지방도와 연결된다.\n석문공원\n도암면 계라교차로에서 18번 국도에 합류한다. 우회전하자. 이후 강진읍까지 18번 국도와 중첩되고 장흥군 장흥읍까지 2번 국도와 중첩된다. 그리고 장흥읍부터 영암군을 거쳐 나주시 세지면까지는 23번 국도와 중첩된다.\n나주시\n동창로\n세지면 세지교차로에서 드디어 23번 국도로부터 분기하면서 820번 지방도와 직결 합류한다. 이 길은 2013년 현재 확장 공사 중이다. 확장공사가 완료되면 동창로가 55번 지방도 노선이 된다.\n세남로\n봉황면 덕림리 삼거리에서 820번 지방도와 분기한다.\n봉황면 철천리 삼거리에서 818번 지방도와 합류한다.\n봉황면 송현리 삼거리에서 818번 지방도와 분기한다.\n송림산제길\n동창로\n여기부터 완공된 왕복 4차로 길이다. 이 길을 만들면서 교통량이 늘어났지만 주변 농민들이 이용하는 농로의 교량을 설치하지 않아 문제가 생기기도 했다. #1 #2\n세남로\n남평읍에서 다시 왕복 2차로로 줄어든다.\n남평읍 남평오거리에서 822번 지방도와 만난다.\n산남로\n남평교를 건너고 남평교사거리에서 우회전\n동촌로\n남평역\n화순군\n동촌로\n화순읍 앵남리 삼거리에서 817번 지방도와 합류한다. 좌회전하자.\n앵남역\n지강로\n화순읍 앵남리 앵남교차로에서 817번 지방도와 분기한다. 앵남교차로부터 나주 남평읍까지 55번 지방도의 확장공사가 진행중이다.\n오성로\n여기부터 화순읍 대리사거리까지 왕복 4차선으로 확장 공사를 진행했고, 2015년 8월 말 화순읍 구간은 왕복 4차선으로 확장되었다.\n화순역\n화순읍에서 광주광역시 동구까지 22번 국도와 중첩되고, 동구부터 전라북도 순창군 쌍치면까지는 29번 국도와 중첩된다.\n전라북도\n순창군\n청정로\n29번 국도를 따라가다가 쌍치면 쌍길매삼거리에서 우회전하여 21번 국도로 들어가자. 쌍치면 쌍치사거리에서 21번 국도와 헤어진다. 직진하자.\n정읍시\n청정로\n산내면 산내사거리에서 715번 지방도와 직결하면서 30번 국도에 합류한다. 좌회전하여 구절재를 넘자.\n산외로\n칠보면 시산교차로에서 49번 지방도와 교차되면 우회전하여 49번 지방도와 합류한다. 이제 오랜 시간 동안 49번 지방도와 합류하게 될 것이다.\n산외면 산외교차로에서 715번 지방도와 교차한다.\n엄재터널\n완주군\n산외로\n구이면 상용교차로에서 27번 국도에 합류한다. 좌회전하자.\n구이로\n구이면 백여교차로에서 27번 국도로부터 분기된다.\n구이면 대덕삼거리에서 714번 지방도와 만난다.\n구이면 염암삼거리에서 우회전\n신덕평로\n고개가 있다. 완주군과 임실군의 경계이다.\n임실군\n신덕평로\n신덕면 외량삼거리, 삼길삼거리에서 749번 지방도와 만난다.\n야트막한 고개가 하나 있다.\n신평면 원천리 원천교차로에서 745번 지방도와 교차한다.\n신평면 관촌역 앞에서 17번 국도와 합류한다. 
좌회전하자.\n관진로\n관촌면 병암삼거리에서 17번 국도로부터 분기된다.\n순천완주고속도로와 교차되나 연결되지 않는다.\n진안군\n관진로\n성수면 좌산리에서 721번 지방도와 만난다.\n성수면 좌산리 좌산삼거리에서 721번 지방도와 만난다.\n마령면 강정교차로 부근에서 745번 지방도와 만난다.\n익산포항고속도로와 교차되나 연결되지 않는다.\n진안읍 진안연장농공단지 앞에서 26번 국도에 합류한다. 좌회전하자.\n전진로\n부귀면 부귀교차로에서 드디어 49번 지방도를 떠나보낸다. 그러나 아직 26번 국도와 중첩된다.\n완주군\n동상로\n드디어 55번이라는 노선 번호가 눈에 보이기 시작한다. 완주군 소양면에서 26번 국도와 분기된다. 이제부터 꼬불꼬불한 산길이므로 각오하고 운전하자.\n밤치. 소양면과 동상면의 경계가 되는 고개다.\n동상면 신월삼거리에서 732번 지방도와 만난다. 동상저수지에 빠지지 않도록 주의하자.\n동상주천로\n운장산고개를 올라가야 한다. 완주군과 진안군의 경계다. 고개 정상에 휴게소가 있다.\n진안군\n동상주천로\n주천면 주천삼거리에서 725번 지방도와 만난다.\n충청남도\n금산군\n보석사로\n남이면 흑암삼거리에서 635번 지방도와 만난다. 우회전해야 한다. 네이버 지도에는 좌회전해서 좀더 가면 나오는 길을 55번 지방도라고 써놓았는데, 잘못 나온 거다. 다음 지도에는 올바르게 나와있다.\n십이폭포로\n남이면에서 남일면으로 넘어간다.\n남일면에서 13번 국도와 합류한다. 좌회전하자. 이후 구간은 남이면을 거쳐 금산읍까지 13번 국도와 중첩되면서 55번 지방도 구간은 종료된다.'}
```
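The "data without text" filter described above can be illustrated with plain Python. This is a sketch of the filtering criterion only, not the script actually used to build the dataset:

```python
# Toy rows standing in for raw dump entries; redirect pages have empty text.
rows = [
    {"title": "나무위키", "text": "나무위키는 위키위키입니다."},
    {"title": "Redirect page", "text": ""},
    {"title": "55번 지방도", "text": "55번 국가지원지방도 ..."},
]

# Keep only rows whose text is non-empty after stripping whitespace.
filtered = [row for row in rows if row["text"].strip()]
print(len(filtered))
```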
| 4,931 | [
[
-0.04718017578125,
-0.044464111328125,
0.0214385986328125,
0.027862548828125,
-0.034454345703125,
-0.01003265380859375,
0.0208740234375,
-0.0281524658203125,
0.050262451171875,
0.024871826171875,
-0.02728271484375,
-0.0225677490234375,
-0.049835205078125,
0.... |
allenai/csabstruct | 2022-11-02T17:54:38.000Z | [
"license:apache-2.0",
"arxiv:1909.04054",
"region:us"
] | allenai | As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document. Recent successful models for this task have used hierarchical models to contextualize sentence representations, and Conditional Random Fields (CRFs) to incorporate dependencies between subsequent labels. In this work, we show that pretrained language models, BERT (Devlin et al., 2018) in particular, can be used for this task to capture contextual dependencies without the need for hierarchical encoding nor a CRF. Specifically, we construct a joint sentence representation that allows BERT Transformer layers to directly utilize contextual information from all words in all sentences. Our approach achieves state-of-the-art results on four datasets, including a new dataset of structured scientific abstracts. | @inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
year={2019},
booktitle={EMNLP},
} | 2 | 48 | 2022-11-02T17:15:53 | ---
license: apache-2.0
---
# CSAbstruct
CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance based on the annotator initial accuracy and agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
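The accuracy-weighted aggregation described above can be sketched as follows. The annotator accuracies and votes below are hypothetical, and this illustrates the idea rather than the actual aggregation script:

```python
from collections import defaultdict

def aggregate_labels(votes, annotator_accuracy):
    """Aggregate one sentence's labels, weighting each annotator's vote
    by that annotator's accuracy on the initial test questions."""
    scores = defaultdict(float)
    for annotator, label in votes:
        scores[label] += annotator_accuracy[annotator]
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    # Confidence: share of total accuracy mass behind the winning label.
    confidence = best_score / sum(annotator_accuracy[a] for a, _ in votes)
    return best_label, confidence

# Hypothetical example: 5 annotators vote on one sentence.
accuracy = {"a1": 0.95, "a2": 0.90, "a3": 0.80, "a4": 0.78, "a5": 0.76}
votes = [("a1", "METHOD"), ("a2", "METHOD"), ("a3", "RESULT"),
         ("a4", "METHOD"), ("a5", "RESULT")]
label, conf = aggregate_labels(votes, accuracy)
print(label, round(conf, 3))
```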
## Dataset Statistics
| Statistic | Avg ± std |
|--------------------------|-------------|
| Doc length in sentences | 6.7 ± 1.99 |
| Sentence length in words | 21.8 ± 10.0 |
| Label | % in Dataset |
|---------------|--------------|
| `BACKGROUND` | 33% |
| `METHOD` | 32% |
| `RESULT` | 21% |
| `OBJECTIVE` | 12% |
| `OTHER` | 03% |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
year={2019},
booktitle={EMNLP},
}
```
[1]: https://arxiv.org/abs/1909.04054
[2]: https://aclanthology.org/D19-1383
[3]: https://github.com/Franck-Dernoncourt/pubmed-rct
[4]: https://aclanthology.org/N18-3011/
[5]: https://www.figure-eight.com/
[6]: https://github.com/allenai/sequential_sentence_classification
| 2,896 | [
[
-0.0102081298828125,
-0.036895751953125,
0.03656005859375,
0.0245361328125,
-0.00885772705078125,
0.006542205810546875,
-0.0248260498046875,
-0.029022216796875,
0.01271820068359375,
0.0280609130859375,
-0.02691650390625,
-0.058807373046875,
-0.050323486328125,
... |
AndyReas/frontpage-news | 2023-04-28T14:32:11.000Z | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"region:us"
] | AndyReas | null | null | 2 | 48 | 2023-03-01T12:26:53 | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---
# Frontpage News
## The Data
The data consists of ~13,000,000 English articles from ~90 outlets. The articles were collected from the [Sciride News Mine](http://sciride.org/news.html), after which some additional cleaning / processing was performed on the data. The articles span from 2015-07-18 to 2020-10-17.
### Data processing
- Removing duplicate articles (a result of being on the frontpage for multiple days.)
- Removing repeated "outlet tags" appearing before or after headlines such as "| Daily Mail Online".
- Removing dates that were not part of a natural sentence but rather "tags", such as "\[Some headline\] - 2020-12-03".
- Removing duplicate articles (again. This time due to dates making otherwise identical articles unique. Removing the date made them 100% identical.)
- Removing HTML elements that were missed on the first scraping.
- Unescaping HTML characters, replacing them with "regular" characters.
- Removing "junk" articles such as empty articles and articles with a length below a certain threshold.
Note: the cleaning process was not perfect and some "outlet tags" still remain.
For instance, some outlets use "--" instead of "|" before a tag, and those were missed.
There is also the case of uncommon characters, such as "\u00a" being used instead of regular characters. This specific example results in tokenizers not being able to properly tokenize sentences using that space.
There are possibly (likely) other issues that were overlooked during cleaning.
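The cleaning steps above can be sketched roughly as follows. The regular expressions and helper names are illustrative assumptions, not the actual cleaning code:

```python
import html
import re

# Illustrative patterns for trailing "outlet tags" such as "| Daily Mail Online"
# and trailing date tags such as "- 2020-12-03".
OUTLET_TAG = re.compile(r"\s*\|\s*[^|]+$")
DATE_TAG = re.compile(r"\s*-\s*\d{4}-\d{2}-\d{2}$")

def clean_headline(title: str) -> str:
    title = html.unescape(title)       # "&amp;" -> "&" etc.
    title = OUTLET_TAG.sub("", title)  # strip "| Outlet Name" suffixes
    title = DATE_TAG.sub("", title)    # strip trailing date tags
    return title.strip()

def deduplicate(articles):
    """Keep the first occurrence of each cleaned title + description pair."""
    seen, unique = set(), []
    for a in articles:
        key = (clean_headline(a["title"]), a.get("description", ""))
        if key not in seen:
            seen.add(key)
            unique.append(a)
    return unique

print(clean_headline("Some headline - 2020-12-03 | Daily Mail Online"))
```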
### Outlets
```
["9news.com.au", "abc.net.au", "abcnews.go.com", "afr.com", "aljazeera.com", "apnews.com", "bbc.com", "bostonglobe.com", "breakingnews.ie", "breitbart.com", "businessinsider.com", "cbc.ca", "cbsnews.com", "channel4.com", "chicagotribune.com", "cnbc.com", "csmonitor.com", "ctvnews.ca", "dailymail.co.uk", "dailystar.co.uk", "dw.com", "economist.com", "edition.cnn.com", "euronews.com", "express.co.uk", "foxnews.com", "france24.com", "globalnews.ca", "huffpost.com", "independent.co.uk", "independent.ie", "inquirer.com", "irishexaminer.com", "irishmirror.ie", "irishtimes.com", "itv.com", "latimes.com", "liverpoolecho.co.uk", "macleans.ca", "metro.co.uk", "mirror.co.uk", "montrealgazette.com", "morningstaronline.co.uk", "msnbc.com", "nbcnews.com", "news.com.au", "news.sky.com", "news.yahoo.com", "newshub.co.nz", "newsweek.com", "npr.org", "nypost.com", "nytimes.com", "nzherald.co.nz", "politico.com", "rcinet.ca", "reuters.com", "rfi.fr", "rnz.co.nz", "rt.com", "rte.ie", "sbs.com.au", "scoop.co.nz", "scotsman.com", "slate.com", "smh.com.au", "standard.co.uk", "stuff.co.nz", "telegraph.co.uk", "theage.com.au", "theatlantic.com", "theglobeandmail.com", "theguardian.com", "thehill.com", "thejournal.ie", "thestar.com", "thesun.co.uk", "thesun.ie", "thetimes.co.uk", "thewest.com.au", "time.com", "torontosun.com", "upi.com", "usatoday.com", "vancouversun.com", "walesonline.co.uk", "washingtonpost.com", "washingtontimes.com", "westernjournal.com", "wnd.com", "wsj.com"]
```
## Features (columns)
### title
A news headline.
### description
A news subheader.
### meta
- article_id: Article ID from the original Sciride News Mine; a hash of the original title + description.
- date: The date on which the article appeared on the frontpage.
- outlet: The outlet which published the article on their frontpage.
### new_article_id
A new article ID created by hashing the title + description. Can be different from article_id because titles and descriptions changed during "cleaning". | 3,611 | [
[
-0.0374755859375,
-0.026580810546875,
0.033294677734375,
0.0204010009765625,
-0.0156707763671875,
0.02203369140625,
-0.0007948875427246094,
-0.0198974609375,
0.0401611328125,
0.0198974609375,
-0.03515625,
-0.047088623046875,
-0.025115966796875,
0.03106689453... |
Nahrawy/VIDIT-Depth-ControlNet | 2023-05-06T17:54:43.000Z | [
"region:us"
] | Nahrawy | null | null | 0 | 48 | 2023-04-23T18:38:24 | ---
dataset_info:
features:
- name: scene
dtype: string
- name: image
dtype: image
- name: depth_map
dtype: image
- name: direction
dtype: string
- name: temprature
dtype: int32
- name: caption
dtype: string
splits:
- name: train
num_bytes: 20575644792.0
num_examples: 12000
download_size: 20108431280
dataset_size: 20575644792.0
---
# VIDIT Dataset
This is a version of the [VIDIT dataset](https://github.com/majedelhelou/VIDIT) equipped for training ControlNet using depth maps conditioning.
VIDIT includes 390 different Unreal Engine scenes, each captured with 40 illumination settings, resulting in 15,600 images. The illumination settings are all the combinations of 5 color temperatures (2500K, 3500K, 4500K, 5500K and 6500K) and 8 light directions (N, NE, E, SE, S, SW, W, NW). Original image resolution is 1024x1024.
This version includes only the training split, which contains 300 scenes.
Captions were generated using the [BLIP-2, Flan T5-xxl](https://huggingface.co/Salesforce/blip2-flan-t5-xxl) model.
Depth maps were generated using the [GLPN fine-tuned on NYUv2](https://huggingface.co/vinvino02/glpn-nyu) model.
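The 40 illumination settings are simply the Cartesian product of the five color temperatures and eight light directions. A quick sanity check of the counts (this does not assume anything about the dataset's actual file naming):

```python
from itertools import product

temperatures = [2500, 3500, 4500, 5500, 6500]  # Kelvin
directions = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

# Every combination of temperature and direction is one illumination setting.
settings = list(product(temperatures, directions))
print(len(settings))        # 5 temperatures x 8 directions = 40 settings
print(300 * len(settings))  # 300 training scenes -> 12000 images
```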
## Examples with varying direction

## Examples with varying color temperature

## Disclaimer
I do not own any of this data.
| 1,401 | [
[
-0.0300445556640625,
-0.00408172607421875,
0.0210418701171875,
0.0222625732421875,
-0.033477783203125,
0.008392333984375,
-0.0140228271484375,
-0.02093505859375,
0.01389312744140625,
0.041015625,
-0.036834716796875,
-0.043365478515625,
-0.018646240234375,
-0... |
abokbot/wikipedia-first-paragraph | 2023-06-04T10:58:32.000Z | [
"language:en",
"wikipedia",
"region:us"
] | abokbot | null | null | 0 | 48 | 2023-06-04T10:06:17 | ---
language:
- en
tags:
- wikipedia
---
# Dataset Description
This dataset contains the first paragraph of cleaned Wikipedia articles in English.
It was obtained by transforming the [Wikipedia](https://huggingface.co/datasets/wikipedia) "20220301.en" dataset as follows:
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20220301.en")["train"]
def get_first_paragraph(example):
example["text"] = example['text'].split('\n\n')[0]
return example
dataset = dataset.map(get_first_paragraph)
```
# Why use this dataset?
The size of the original English Wikipedia dataset is over 20GB. It takes 20min to load it on a Google Colab notebook and running computations on that dataset can be costly.
If you want to create a use case that mostly needs the information in the first paragraph of a Wikipedia article (which is the paragraph with the most important information), this 'wikipedia-first-paragraph' dataset is for you.
Its size is 1.39GB and it takes 5 min to load on a Google Colab notebook.
# How to load dataset
You can load it by running:
```python
from datasets import load_dataset
load_dataset("abokbot/wikipedia-first-paragraph")
```
# Dataset Structure
An example looks as follows:
```
{
'id': '12',
'url': 'https://en.wikipedia.org/wiki/Anarchism',
'title': 'Anarchism',
'text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects \
all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, \
which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, \
placed on the farthest left of the political spectrum, it is usually described alongside communalism \
and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and \
has a strong historical association with anti-capitalism and socialism.'
}
``` | 1,976 | [
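The splitting logic used to build this dataset can be checked on a toy article without downloading anything; this reuses the `get_first_paragraph` function shown above:

```python
def get_first_paragraph(example):
    # Keep only the text before the first blank line (paragraph break).
    example["text"] = example["text"].split("\n\n")[0]
    return example

toy = {"text": "First paragraph of the article.\n\nSecond paragraph with details."}
print(get_first_paragraph(toy)["text"])
```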
[
-0.0460205078125,
-0.057830810546875,
0.0002505779266357422,
0.0163421630859375,
-0.027008056640625,
-0.0235748291015625,
-0.0231475830078125,
-0.005218505859375,
0.047698974609375,
0.01410675048828125,
-0.04766845703125,
-0.0308685302734375,
-0.0249481201171875... |
Patt/RTE_TH | 2023-06-14T16:51:34.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | 0 | 48 | 2023-06-12T11:40:00 | ---
task_categories:
- text-classification
language:
- en
- th
---
# Dataset Card for RTE_TH
### Dataset Description
This dataset is a Thai-translated version of [RTE](https://huggingface.co/datasets/super_glue/viewer/rte), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) was used to score the Thai translations.
[
-0.00815582275390625,
-0.04998779296875,
-0.006114959716796875,
0.034698486328125,
-0.044708251953125,
-0.00534820556640625,
-0.0156707763671875,
-0.0182342529296875,
0.046356201171875,
0.04254150390625,
-0.035125732421875,
-0.058258056640625,
-0.041748046875,
... |
openaccess-ai-collective/chatbot-arena-elo-scores | 2023-06-23T19:57:16.000Z | [
"region:us"
] | openaccess-ai-collective | null | null | 0 | 48 | 2023-06-17T23:31:18 | ---
dataset_info:
features:
- name: elo_score
dtype: float64
- name: chatbot_name
dtype: string
splits:
- name: train
num_bytes: 359
num_examples: 14
download_size: 1669
dataset_size: 359
---
# Dataset Card for "chatbot-arena-elo-scores"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 400 | [
[
-0.031219482421875,
-0.03314208984375,
-0.0022792816162109375,
0.022064208984375,
-0.004711151123046875,
0.00859832763671875,
0.006046295166015625,
-0.01552581787109375,
0.0606689453125,
0.030853271484375,
-0.0531005859375,
-0.058685302734375,
-0.032470703125,
... |
KaiLv/UDR_Amazon | 2023-06-21T12:23:17.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 48 | 2023-06-21T12:22:34 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: headline
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 13936883
num_examples: 30000
- name: test
num_bytes: 1382953
num_examples: 3000
- name: debug
num_bytes: 2318411
num_examples: 5000
download_size: 11799872
dataset_size: 17638247
---
# Dataset Card for "UDR_Amazon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 584 | [
[
-0.042388916015625,
-0.01540374755859375,
0.00199127197265625,
0.0159454345703125,
-0.0189666748046875,
0.0105743408203125,
0.038787841796875,
-0.0146636962890625,
0.038543701171875,
0.047515869140625,
-0.053497314453125,
-0.0517578125,
-0.0280609130859375,
... |
Abdelkareem/arabic_tweets_classification | 2023-07-09T10:01:29.000Z | [
"region:us"
] | Abdelkareem | null | null | 0 | 48 | 2023-07-09T09:58:56 | ---
dataset_info:
features:
- name: Date
dtype: string
- name: Time
dtype: string
- name: Date Time
dtype: string
- name: URL
dtype: string
- name: Tweet Text
dtype: string
- name: Cleaned Text
dtype: string
- name: User Name
dtype: string
- name: Location
dtype: string
- name: 'Replied Tweet ID '
dtype: float64
- name: Replied Tweet User ID
dtype: float64
- name: Replied Tweet User name
dtype: string
- name: Coordinates
dtype: float64
- name: Retweet Count
dtype: float64
- name: Favorite Count
dtype: int64
- name: Favorited
dtype: string
- name: Label
dtype: string
splits:
- name: train
num_bytes: 7469621
num_examples: 13240
download_size: 3109198
dataset_size: 7469621
---
# Dataset Card for "arabic_tweets_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 979 | [
[
-0.0275115966796875,
-0.0128631591796875,
0.0036830902099609375,
0.0221099853515625,
-0.0227813720703125,
0.0242767333984375,
0.00608062744140625,
-0.012603759765625,
0.0440673828125,
0.0186309814453125,
-0.0430908203125,
-0.08697509765625,
-0.07061767578125,
... |
ds4sd/DocLayNet-v1.1 | 2023-09-01T09:58:52.000Z | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
] | ds4sd | null | null | 1 | 48 | 2023-08-17T13:10:53 | ---
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet
size_categories:
- 10K<n<100K
tags:
- layout-segmentation
- COCO
- document-understanding
- PDF
task_categories:
- object-detection
- image-segmentation
task_ids:
- instance-segmentation
dataset_info:
features:
- name: image
dtype: image
- name: bboxes
sequence:
sequence: float64
- name: category_id
sequence: int64
- name: segmentation
sequence:
sequence:
sequence: float64
- name: area
sequence: float64
- name: pdf_cells
list:
list:
- name: bbox
sequence: float64
- name: font
struct:
- name: color
sequence: int64
- name: name
dtype: string
- name: size
dtype: float64
- name: text
dtype: string
- name: metadata
struct:
- name: coco_height
dtype: int64
- name: coco_width
dtype: int64
- name: collection
dtype: string
- name: doc_category
dtype: string
- name: image_id
dtype: int64
- name: num_pages
dtype: int64
- name: original_filename
dtype: string
- name: original_height
dtype: float64
- name: original_width
dtype: float64
- name: page_hash
dtype: string
- name: page_no
dtype: int64
splits:
- name: train
num_bytes: 28172005254.125
num_examples: 69375
- name: test
num_bytes: 1996179229.125
num_examples: 4999
- name: val
num_bytes: 2493896901.875
num_examples: 6489
download_size: 7766115331
dataset_size: 32662081385.125
---
# Dataset Card for DocLayNet v1.1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and of an upper bound on the prediction accuracy achievable with ML models
5. *Pre-defined train-, test-, and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class labels and to avoid leakage of unique layout styles across the sets.
## Dataset Structure
This dataset is structured differently from the other repository [ds4sd/DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet), as this one includes the content (PDF cells) of the detections, and abandons the COCO format.
* `image`: page PIL image.
* `bboxes`: a list of layout bounding boxes.
* `category_id`: a list of class ids corresponding to the bounding boxes.
* `segmentation`: a list of layout segmentation polygons.
* `pdf_cells`: a list of lists corresponding to `bbox`. Each list contains the PDF cells (content) inside the bbox.
* `metadata`: page- and document-level metadata.
Bounding boxes classes / categories:
```
1: Caption
2: Footnote
3: Formula
4: List-item
5: Page-footer
6: Page-header
7: Picture
8: Section-header
9: Table
10: Text
11: Title
```
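As a quick illustration, the id-to-label mapping above can be applied to a sample's `category_id` list. The sample below is fabricated for illustration, and the loading code via `datasets` is omitted:

```python
# Mapping from DocLayNet class ids to label names, as listed above.
CATEGORY_NAMES = {
    1: "Caption", 2: "Footnote", 3: "Formula", 4: "List-item",
    5: "Page-footer", 6: "Page-header", 7: "Picture",
    8: "Section-header", 9: "Table", 10: "Text", 11: "Title",
}

def label_names(sample):
    """Translate a sample's category ids into label strings."""
    return [CATEGORY_NAMES[cid] for cid in sample["category_id"]]

# Fabricated sample with three detections (bboxes shown for context only).
sample = {
    "category_id": [11, 10, 9],
    "bboxes": [[10, 10, 200, 40], [10, 60, 200, 300], [10, 320, 200, 500]],
}
print(label_names(sample))
```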
The `["metadata"]["doc_category"]` field uses one of the following constants:
```
* financial_reports,
* scientific_articles,
* laws_and_regulations,
* government_tenders,
* manuals,
* patents
```
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guideline used for training of the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
  doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
``` | 6,377 | [
[
-0.04205322265625,
-0.026885986328125,
0.032684326171875,
0.004543304443359375,
-0.01299285888671875,
-0.01172637939453125,
0.0027446746826171875,
-0.0209503173828125,
0.020111083984375,
0.039337158203125,
-0.035430908203125,
-0.06695556640625,
-0.03775024414062... |
tim9510019/llama2_QA_Economics_230915 | 2023-11-03T00:42:13.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"finance",
"region:us"
] | tim9510019 | null | null | 2 | 48 | 2023-09-15T11:09:29 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: input
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: int64
- name: Date
dtype: timestamp[ns]
- name: Type
dtype: int64
- name: Prompt
dtype: int64
- name: QuestionTokenNum
dtype: int64
- name: inputTokenNum
dtype: int64
- name: AnswerTokenNum
dtype: int64
splits:
- name: train
num_bytes: 2361707
num_examples: 422
download_size: 787216
dataset_size: 2361707
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- finance
---
# Dataset Card for "llama2_QA_Economics_230915"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 892 | [
[
-0.02569580078125,
-0.0110931396484375,
0.0266876220703125,
0.0288238525390625,
-0.0189056396484375,
0.0016927719116210938,
0.030548095703125,
-0.0100555419921875,
0.058441162109375,
0.0303497314453125,
-0.044830322265625,
-0.040435791015625,
-0.022186279296875,... |