| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
nlphuji/mscoco_2014_5k_test_image_text_retrieval | 2023-01-18T00:08:42.000Z | [
"arxiv:1405.0312",
"region:us"
] | nlphuji | null | null | null | 2 | 1,010 | # MSCOCO (5K test set)
Original paper: [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
Homepage: https://cocodataset.org/#home
5K test set split from: http://cs.stanford.edu/people/karpathy/deepimagesent/caption_datasets.zip
BibTeX:
```
@inproceedings{lin2014microsoft,
title={Microsoft coco: Common objects in context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
booktitle={European conference on computer vision},
pages={740--755},
year={2014},
organization={Springer}
}
``` |
squadshifts | 2023-04-05T13:40:47.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | null | @InProceedings{pmlr-v119-miller20a,
title = {The Effect of Natural Distribution Shift on Question Answering Models},
author = {Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig},
booktitle = {Proceedings of the 37th International Conference on Machine Learning},
pages = {6905--6916},
year = {2020},
editor = {III, Hal Daumé and Singh, Aarti},
volume = {119},
series = {Proceedings of Machine Learning Research},
month = {13--18 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v119/miller20a/miller20a.pdf},
url = {https://proceedings.mlr.press/v119/miller20a.html},
} | null | 3 | 1,007 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: SQuAD-shifts
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad-shifts
dataset_info:
- config_name: new_wiki
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 7865203
num_examples: 7938
download_size: 16505623
dataset_size: 7865203
- config_name: nyt
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 10792550
num_examples: 10065
download_size: 16505623
dataset_size: 10792550
- config_name: reddit
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 9473946
num_examples: 9803
download_size: 16505623
dataset_size: 9473946
- config_name: amazon
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 9445004
num_examples: 9885
download_size: 16505623
dataset_size: 9445004
---
# Dataset Card for "squadshifts"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://modestyachts.github.io/squadshifts-website/index.html](https://modestyachts.github.io/squadshifts-website/index.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 66.02 MB
- **Size of the generated dataset:** 37.56 MB
- **Total amount of disk used:** 103.58 MB
### Dataset Summary
SquadShifts consists of four new test sets for the Stanford Question Answering Dataset (SQuAD) drawn from four different domains: Wikipedia articles, New York Times articles, Reddit comments, and Amazon product reviews. Each test set was generated using the same data-generating pipeline, Amazon Mechanical Turk interface, and data-cleaning code as the original SQuAD v1.1 dataset. The "new-wikipedia" dataset measures overfitting to the original SQuAD v1.1 dataset. The "new-york-times", "reddit", and "amazon" datasets measure robustness to natural distribution shifts. We encourage SQuAD model developers to also evaluate their methods on these new datasets!
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### amazon
- **Size of downloaded dataset files:** 16.50 MB
- **Size of the generated dataset:** 9.44 MB
- **Total amount of disk used:** 25.94 MB
An example of 'test' looks as follows.
```
{
"answers": {
"answer_start": [25],
"text": ["amazon"]
},
"context": "This is a paragraph from amazon.",
"id": "090909",
"question": "Where is this paragraph from?",
"title": "amazon dummy data"
}
```
#### new_wiki
- **Size of downloaded dataset files:** 16.50 MB
- **Size of the generated dataset:** 7.86 MB
- **Total amount of disk used:** 24.37 MB
An example of 'test' looks as follows.
```
{
"answers": {
"answer_start": [25],
"text": ["wikipedia"]
},
"context": "This is a paragraph from wikipedia.",
"id": "090909",
"question": "Where is this paragraph from?",
"title": "new_wiki dummy data"
}
```
#### nyt
- **Size of downloaded dataset files:** 16.50 MB
- **Size of the generated dataset:** 10.79 MB
- **Total amount of disk used:** 27.29 MB
An example of 'test' looks as follows.
```
{
"answers": {
"answer_start": [25],
"text": ["new york times"]
},
"context": "This is a paragraph from new york times.",
"id": "090909",
"question": "Where is this paragraph from?",
"title": "nyt dummy data"
}
```
#### reddit
- **Size of downloaded dataset files:** 16.50 MB
- **Size of the generated dataset:** 9.47 MB
- **Total amount of disk used:** 25.97 MB
An example of 'test' looks as follows.
```
{
"answers": {
"answer_start": [25],
"text": ["reddit"]
},
"context": "This is a paragraph from reddit.",
"id": "090909",
"question": "Where is this paragraph from?",
"title": "reddit dummy data"
}
```
### Data Fields
The data fields are the same among all splits.
#### amazon
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### new_wiki
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### nyt
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### reddit
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
### Data Splits
| name |test |
|--------|----:|
|amazon | 9885|
|new_wiki| 7938|
|nyt |10065|
|reddit | 9803|
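The `answers` feature pairs each answer string with its character offset into `context`. A minimal sketch (using the dummy amazon instance shown earlier on this card; the helper name `spans_align` is ours, not part of any library) checks that every `answer_start` offset actually points at its answer text:

```python
# Sketch: verify that each answer_start offset points at its answer text
# inside the context. Uses the dummy instance from this card, not real data.
example = {
    "answers": {"answer_start": [25], "text": ["amazon"]},
    "context": "This is a paragraph from amazon.",
    "id": "090909",
    "question": "Where is this paragraph from?",
    "title": "amazon dummy data",
}

def spans_align(ex):
    """True iff every (answer_start, text) pair matches the context."""
    return all(
        ex["context"][start:start + len(text)] == text
        for start, text in zip(ex["answers"]["answer_start"],
                               ex["answers"]["text"])
    )

print(spans_align(example))  # prints True
```

The same check applies unchanged to all four configs, since they share the SQuAD v1.1 schema.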
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All the datasets are distributed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
```
@InProceedings{pmlr-v119-miller20a,
title = {The Effect of Natural Distribution Shift on Question Answering Models},
author = {Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig},
booktitle = {Proceedings of the 37th International Conference on Machine Learning},
pages = {6905--6916},
year = {2020},
editor = {III, Hal Daumé and Singh, Aarti},
volume = {119},
series = {Proceedings of Machine Learning Research},
month = {13--18 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v119/miller20a/miller20a.pdf},
url = {https://proceedings.mlr.press/v119/miller20a.html},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@millerjohnp](https://github.com/millerjohnp), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
skg/toxigen-data | 2022-06-20T11:12:11.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"arxiv:2203.09509",
"region:us"
] | skg | Toxigen is a large-scale dataset containing implicitly toxic and benign sentences mentioning 13 minority groups, and a tool to stress test a given off-the-shelf toxicity classifier. The dataset is generated using a large language model (GPT3). It is intended to be used for training classifiers that learn to detect subtle hate speech that includes no slurs or profanity. | @inproceedings{hartvigsen2022toxigen,
title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022}
} | null | 18 | 1,004 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
languages:
- en-US
licenses: []
multilinguality:
- monolingual
pretty_name: ToxiGen
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for ToxiGen
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Sign up for Data Access
To access ToxiGen, first fill out [this form](https://forms.office.com/r/r6VXX8f8vh).
## Dataset Description
- **Repository:** https://github.com/microsoft/toxigen
- **Paper:** https://arxiv.org/abs/2203.09509
- **Point of Contact #1:** [Tom Hartvigsen](mailto:tomh@mit.edu)
- **Point of Contact #2:** [Saadia Gabriel](mailto:skgabrie@cs.washington.edu)
### Dataset Summary
This dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in [our paper](https://arxiv.org/abs/2203.09509).
### Languages
All text is written in English.
## Dataset Structure
### Data Fields
We release TOXIGEN as a dataframe with the following fields:
- **prompt** is the prompt used for **generation**.
- **generation** is the TOXIGEN generated text.
- **generation_method** denotes whether ALICE was used to produce the corresponding generation: a value of `ALICE` means it was; `TopK` means it was not.
- **prompt_label** is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).
- **group** indicates the target group of the prompt.
- **roberta_prediction** is the probability predicted by our corresponding RoBERTa model for each instance.
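As a sketch of how these fields compose (the rows below are invented dummies following the schema above, not real TOXIGEN data), one can split rows by `generation_method` and tally toxic prompts via `prompt_label`:

```python
# Sketch with invented dummy rows that follow the field schema above;
# not real TOXIGEN data.
rows = [
    {"prompt": "dummy prompt a", "generation": "dummy generation a",
     "generation_method": "ALICE", "prompt_label": 1,
     "group": "group_a", "roberta_prediction": 0.92},
    {"prompt": "dummy prompt b", "generation": "dummy generation b",
     "generation_method": "TopK", "prompt_label": 0,
     "group": "group_b", "roberta_prediction": 0.08},
]

# Rows produced with ALICE decoding vs. plain top-k decoding.
alice_rows = [r for r in rows if r["generation_method"] == "ALICE"]

# prompt_label is 1 for toxic prompts, 0 for benign ones.
toxic_prompts = sum(r["prompt_label"] for r in rows)
```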
### Citation Information
```bibtex
@inproceedings{hartvigsen2022toxigen,
title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022}
}
```
|
RussianNLP/russian_super_glue | 2023-06-19T12:23:49.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"task_ids:natural-language-inference",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:ru",
"license:mit",
"glue",
"qa",
"superGLUE",
"NLI",
"reasoning",
"arxiv:2202.07791",
"region:us"
] | RussianNLP | Recent advances in the field of universal language models and transformers require the development of a methodology for
their broad diagnostics and testing for general intellectual skills: detection of natural language inference,
commonsense reasoning, and the ability to perform simple logical operations regardless of text subject or lexicon. For the first
time, a benchmark of nine tasks, collected and organized analogously to the SuperGLUE methodology, was developed from
scratch for the Russian language. We provide baselines, human-level evaluation, an open-source framework for evaluating
models, and an overall leaderboard of transformer models for the Russian language. | @article{shavrina2020russiansuperglue,
title={RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark},
author={Shavrina, Tatiana and Fenogenova, Alena and Emelyanov, Anton and Shevelev, Denis and Artemova,
Ekaterina and Malykh, Valentin and Mikhailov, Vladislav and Tikhonova, Maria and Chertok, Andrey and
Evlampiev, Andrey},
journal={arXiv preprint arXiv:2010.15925},
year={2020}
} | null | 15 | 994 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ru
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-classification
- question-answering
- zero-shot-classification
- text-generation
task_ids:
- natural-language-inference
- multi-class-classification
pretty_name: Russian SuperGLUE
language_bcp47:
- ru-RU
dataset_info:
- config_name: lidirus
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: knowledge
dtype: string
- name: lexical-semantics
dtype: string
- name: logic
dtype: string
- name: predicate-argument-structure
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 470306
num_examples: 1104
download_size: 47118
dataset_size: 470306
- config_name: rcb
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: verb
dtype: string
- name: negation
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': contradiction
'2': neutral
splits:
- name: train
num_bytes: 199712
num_examples: 438
- name: validation
num_bytes: 97993
num_examples: 220
- name: test
num_bytes: 207031
num_examples: 438
download_size: 136700
dataset_size: 504736
- config_name: parus
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': choice1
'1': choice2
splits:
- name: train
num_bytes: 74467
num_examples: 400
- name: validation
num_bytes: 19397
num_examples: 100
- name: test
num_bytes: 93192
num_examples: 500
download_size: 57585
dataset_size: 187056
- config_name: muserc
features:
- name: paragraph
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: idx
struct:
- name: paragraph
dtype: int32
- name: question
dtype: int32
- name: answer
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 31651155
num_examples: 11950
- name: validation
num_bytes: 5964157
num_examples: 2235
- name: test
num_bytes: 19850930
num_examples: 7614
download_size: 1196720
dataset_size: 57466242
- config_name: terra
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: train
num_bytes: 1409243
num_examples: 2616
- name: validation
num_bytes: 161485
num_examples: 307
- name: test
num_bytes: 1713499
num_examples: 3198
download_size: 907346
dataset_size: 3284227
- config_name: russe
features:
- name: word
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: start1
dtype: int32
- name: start2
dtype: int32
- name: end1
dtype: int32
- name: end2
dtype: int32
- name: gold_sense1
dtype: int32
- name: gold_sense2
dtype: int32
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 6913280
num_examples: 19845
- name: validation
num_bytes: 2957491
num_examples: 8505
- name: test
num_bytes: 10046000
num_examples: 18892
download_size: 3806009
dataset_size: 19916771
- config_name: rwsd
features:
- name: text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 132274
num_examples: 606
- name: validation
num_bytes: 87959
num_examples: 204
- name: test
num_bytes: 59051
num_examples: 154
download_size: 40508
dataset_size: 279284
- config_name: danetqa
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 2474006
num_examples: 1749
- name: validation
num_bytes: 1076455
num_examples: 821
- name: test
num_bytes: 1023062
num_examples: 805
download_size: 1293761
dataset_size: 4573523
- config_name: rucos
features:
- name: passage
dtype: string
- name: query
dtype: string
- name: entities
sequence: string
- name: answers
sequence: string
- name: idx
struct:
- name: passage
dtype: int32
- name: query
dtype: int32
splits:
- name: train
num_bytes: 160095378
num_examples: 72193
- name: validation
num_bytes: 16980563
num_examples: 7577
- name: test
num_bytes: 15535209
num_examples: 7257
download_size: 56208297
dataset_size: 192611150
tags:
- glue
- qa
- superGLUE
- NLI
- reasoning
---
# Dataset Card for Russian SuperGLUE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://russiansuperglue.com/
- **Repository:** https://github.com/RussianNLP/RussianSuperGLUE
- **Paper:** https://russiansuperglue.com/download/main_article
- **Leaderboard:** https://russiansuperglue.com/leaderboard/2
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Modern universal language models and transformers such as BERT, ELMo, XLNet, and RoBERTa need to be properly
compared and evaluated. In the last year, new models and methods for pretraining and transfer learning have driven
striking performance improvements across a range of language understanding tasks.
We offer a testing methodology based on tasks typically proposed for "strong AI": logic, commonsense, and reasoning.
Adhering to the GLUE and SuperGLUE methodology, we present a set of test tasks for general language understanding
together with a leaderboard of models.
For the first time, a complete test suite for the Russian language was developed, similar to its English analog.
Many of the datasets were composed for the first time, and a leaderboard of models for the Russian language with
comparable results is also presented.
### Supported Tasks and Leaderboards
Supported tasks, barring a few additions, are equivalent to the original SuperGLUE tasks.
|Task Name|Equiv. to|
|----|---:|
|Linguistic Diagnostic for Russian|Broadcoverage Diagnostics (AX-b)|
|Russian Commitment Bank (RCB)|CommitmentBank (CB)|
|Choice of Plausible Alternatives for Russian language (PARus)|Choice of Plausible Alternatives (COPA)|
|Russian Multi-Sentence Reading Comprehension (MuSeRC)|Multi-Sentence Reading Comprehension (MultiRC)|
|Textual Entailment Recognition for Russian (TERRa)|Recognizing Textual Entailment (RTE)|
|Russian Words in Context (based on RUSSE)|Words in Context (WiC)|
|The Winograd Schema Challenge (Russian)|The Winograd Schema Challenge (WSC)|
|Yes/no Question Answering Dataset for the Russian (DaNetQA)|BoolQ|
|Russian Reading Comprehension with Commonsense Reasoning (RuCoS)|Reading Comprehension with Commonsense Reasoning (ReCoRD)|
### Languages
All tasks are in Russian.
## Dataset Structure
### Data Instances
Note that there are no labels in the `test` splits. This is signified by the `-1` value.
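A small decoding helper (a sketch built from the `class_label` definitions in this card's `dataset_info`; `decode_label` and `LABEL_NAMES` are our own names, not part of any library) makes the `-1` convention explicit:

```python
# Sketch: map integer labels back to class names, per the class_label
# definitions in this card's dataset_info; -1 (unlabeled test) -> None.
LABEL_NAMES = {
    "lidirus": ["entailment", "not_entailment"],
    "rcb": ["entailment", "contradiction", "neutral"],
    "parus": ["choice1", "choice2"],
    "terra": ["entailment", "not_entailment"],
    "russe": ["False", "True"],
    "rwsd": ["False", "True"],
    "danetqa": ["False", "True"],
}

def decode_label(config, label):
    """Return the class name for an integer label, or None for -1."""
    return None if label == -1 else LABEL_NAMES[config][label]
```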
#### LiDiRus
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.49 MB
- **Total amount of disk used:** 0.54 MB
An example of 'test' looks as follows
```
{
"sentence1": "Новая игровая консоль доступна по цене.",
"sentence2": "Новая игровая консоль недоступна по цене.",
"knowledge": "",
"lexical-semantics": "Morphological negation",
"logic": "Negation",
"predicate-argument-structure": "",
"idx": 10,
"label": 1
}
```
#### RCB
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.53 MB
- **Total amount of disk used:** 0.67 MB
An example of 'train'/'dev' looks as follows
```
{
"premise": "— Пойдём пообедаем. Я с утра ничего не ел. Отель, как видишь, весьма посредственный, но мне сказали,
что в здешнем ресторане отлично готовят.",
"hypothesis": "В здешнем ресторане отлично готовят.",
"verb": "сказать",
"negation": "no_negation",
"idx": 10,
"label": 2
}
```
An example of 'test' looks as follows
```
{
"premise": "Я уверен, что вместе мы победим. Да, парламентское большинство думает иначе.",
"hypothesis": "Вместе мы проиграем.",
"verb": "думать",
"negation": "no_negation",
"idx": 10,
"label": -1
}
```
#### PARus
- **Size of downloaded dataset files:** 0.06 MB
- **Size of the generated dataset:** 0.20 MB
- **Total amount of disk used:** 0.245 MB
An example of 'train'/'dev' looks as follows
```
{
"premise": "Женщина чинила кран.",
"choice1": "Кран подтекал.",
"choice2": "Кран был выключен.",
"question": "cause",
"idx": 10,
"label": 0
}
```
An example of 'test' looks as follows
```
{
"premise": "Ребятам было страшно.",
"choice1": "Их вожатый рассказал им историю про призрака.",
"choice2": "Они жарили маршмеллоу на костре.",
"question": "cause",
"idx": 10,
"label": -1
}
```
#### MuSeRC
- **Size of downloaded dataset files:** 1.26 MB
- **Size of the generated dataset:** 59.77 MB
- **Total amount of disk used:** 61.87 MB
An example of 'train'/'dev' looks as follows
```
{
"paragraph": "(1) Но люди не могут существовать без природы, поэтому в парке стояли железобетонные скамейки —
деревянные моментально ломали. (2) В парке бегали ребятишки, водилась шпана, которая развлекалась игрой в карты,
пьянкой, драками, «иногда насмерть». (3) «Имали они тут и девок...» (4) Верховодил шпаной Артемка-мыло, с
вспененной белой головой. (5) Людочка сколько ни пыталась усмирить лохмотья на буйной голове Артемки, ничего у
неё не получалось. (6) Его «кудри, издали напоминавшие мыльную пену, изблизя оказались что липкие рожки из
вокзальной столовой — сварили их, бросили комком в пустую тарелку, так они, слипшиеся, неподъёмно и лежали.
(7) Да и не ради причёски приходил парень к Людочке. (8) Как только её руки становились занятыми ножницами
и расчёской, Артемка начинал хватать её за разные места. (9) Людочка сначала увёртывалась от хватких рук Артемки,
а когда не помогло, стукнула его машинкой по голове и пробила до крови, пришлось лить йод на голову «ухажористого
человека». (10) Артемка заулюлюкал и со свистом стал ловить воздух. (11) С тех пор «домогания свои хулиганские
прекратил», более того, шпане повелел Людочку не трогать.",
"question": "Как развлекались в парке ребята?",
"answer": "Развлекались игрой в карты, пьянкой, драками, снимали они тут и девок.",
"idx":
{
"paragraph": 0,
"question": 2,
"answer": 10
},
"label": 1
}
```
An example of 'test' looks as follows
```
{
"paragraph": "\"(1) Издательство Viking Press совместно с компанией TradeMobile выпустят мобильное приложение,
посвященное Анне Франк, передает The Daily Telegraph. (2) Программа будет включать в себя фрагменты из дневника
Анны, озвученные британской актрисой Хеленой Бонэм Картер. (3) Помимо этого, в приложение войдут фотографии
и видеозаписи, документы из архива Фонда Анны Франк, план здания в Амстердаме, где Анна с семьей скрывались от
нацистов, и факсимильные копии страниц дневника. (4) Приложение, которое получит название Anne Frank App, выйдет
18 октября. (5) Интерфейс программы будет англоязычным. (6) На каких платформах будет доступно Anne Frank App,
не уточняется. Анна Франк родилась в Германии в 1929 году. (7) Когда в стране начались гонения на евреев, Анна с
семьей перебрались в Нидерланды. (8) С 1942 года члены семьи Франк и еще несколько человек скрывались от нацистов
в потайных комнатах дома в Амстердаме, который занимала компания отца Анны. (9) В 1944 году группу по доносу
обнаружили гестаповцы. (10) Обитатели \"Убежища\" (так Анна называла дом в дневнике) были отправлены в концлагеря;
выжить удалось только отцу девочки Отто Франку. (11) Находясь в \"Убежище\", Анна вела дневник, в котором описывала
свою жизнь и жизнь своих близких. (12) После ареста книгу с записями сохранила подруга семьи Франк и впоследствии
передала ее отцу Анны. (13) Дневник был впервые опубликован в 1947 году. (14) Сейчас он переведен более
чем на 60 языков.\"",
"question": "Какая информация войдет в новой мобильное приложение?",
"answer": "Видеозаписи Анны Франк.",
"idx":
{
"paragraph": 0,
"question": 2,
"answer": 10
},
"label": -1
}
```
#### TERRa
- **Size of downloaded dataset files:** 0.93 MB
- **Size of the generated dataset:** 3.44 MB
- **Total amount of disk used:** 4.39 MB
An example of 'train'/'dev' looks as follows
```
{
"premise": "Музей, расположенный в Королевских воротах, меняет экспозицию. На смену выставке, рассказывающей об
истории ворот и их реставрации, придет «Аптека трех королей». Как рассказали в музее, посетители попадут в
традиционный интерьер аптеки.",
"hypothesis": "Музей закроется навсегда.",
"idx": 10,
"label": 1
}
```
An example of 'test' looks as follows
```
{
"premise": "Маршрутка полыхала несколько минут. Свидетели утверждают, что приезду пожарных салон «Газели» выгорел полностью. К счастью, пассажиров внутри не было, а водитель успел выскочить из кабины.",
"hypothesis": "Маршрутка выгорела.",
"idx": 10,
"label": -1
}
```
#### RUSSE
- **Size of downloaded dataset files:** 3.88 MB
- **Size of the generated dataset:** 20.97 MB
- **Total amount of disk used:** 25.17 MB
An example of 'train'/'dev' looks as follows
```
{
"word": "дух",
"sentence1": "Завертелась в доме веселая коловерть: праздничный стол, праздничный дух, шумные разговоры",
"sentence2": "Вижу: духи собралися / Средь белеющих равнин. // Бесконечны, безобразны, / В мутной месяца игре / Закружились бесы разны, / Будто листья в ноябре",
"start1": 68,
"start2": 6,
"end1": 72,
"end2": 11,
"gold_sense1": 3,
"gold_sense2": 4,
"idx": 10,
"label": 0
}
```
An example of 'test' looks as follows
```
{
"word": "доска",
"sentence1": "На 40-й день после трагедии в переходе была установлена мемориальная доска, надпись на которой гласит: «В память о погибших и пострадавших от террористического акта 8 августа 2000 года».",
"sentence2": "Фото с 36-летним миллиардером привлекло сеть его необычной фигурой при стойке на доске и кремом на лице.",
"start1": 69,
"start2": 81,
"end1": 73,
"end2": 85,
"gold_sense1": -1,
"gold_sense2": -1,
"idx": 10,
"label": -1
}
```
#### RWSD
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.29 MB
- **Total amount of disk used:** 0.32 MB
An example of 'train'/'dev' looks as follows
```
{
"text": "Женя поблагодарила Сашу за помощь, которую она оказала.",
"span1_index": 0,
"span2_index": 6,
"span1_text": "Женя",
"span2_text": "она оказала",
"idx": 10,
"label": 0
}
```
An example of 'test' looks as follows
```
{
"text": "Мод и Дора видели, как через прерию несутся поезда, из двигателей тянулись клубы черного дыма. Ревущие
звуки их моторов и дикие, яростные свистки можно было услышать издалека. Лошади убежали, когда они увидели
приближающийся поезд.",
"span1_index": 22,
"span2_index": 30,
"span1_text": "свистки",
"span2_text": "они увидели",
"idx": 10,
"label": -1
}
```
#### DaNetQA
- **Size of downloaded dataset files:** 1.36 MB
- **Size of the generated dataset:** 4.82 MB
- **Total amount of disk used:** 5.9 MB
An example of 'train'/'dev' looks as follows
```
{
"question": "Вреден ли алкоголь на первых неделях беременности?",
"passage": "А Бакингем-Хоуз и её коллеги суммировали последствия, найденные в обзорных статьях ранее. Частые случаи
задержки роста плода, результатом чего является укороченный средний срок беременности и сниженный вес при рождении.
По сравнению с нормальными детьми, дети 3-4-недельного возраста демонстрируют «менее оптимальную» двигательную
активность, рефлексы, и ориентацию в пространстве, а дети 4-6 лет показывают низкий уровень работы
нейроповеденческих функций, внимания, эмоциональной экспрессии, и развития речи и языка. Величина этих влияний
часто небольшая, частично в связи с независимыми переменными: включая употребление во время беременности
алкоголя/табака, а также факторы среды . У детей школьного возраста проблемы с устойчивым вниманием и контролем
своего поведения, а также незначительные с ростом, познавательными и языковыми способностями.",
"idx": 10,
"label": 1
}
```
An example of 'test' looks as follows
```
{
"question": "Вредна ли жесткая вода?",
"passage": "Различают временную жёсткость, обусловленную гидрокарбонатами кальция и магния Са2; Mg2, и постоянную
жёсткость, вызванную присутствием других солей, не выделяющихся при кипячении воды: в основном, сульфатов и
хлоридов Са и Mg. Жёсткая вода при умывании сушит кожу, в ней плохо образуется пена при использовании мыла.
Использование жёсткой воды вызывает появление осадка на стенках котлов, в трубах и т. п. В то же время,
использование слишком мягкой воды может приводить к коррозии труб, так как, в этом случае отсутствует
кислотно-щелочная буферность, которую обеспечивает гидрокарбонатная жёсткость. Потребление жёсткой или мягкой
воды обычно не является опасным для здоровья, однако есть данные о том, что высокая жёсткость способствует
образованию мочевых камней, а низкая — незначительно увеличивает риск сердечно-сосудистых заболеваний. Вкус
природной питьевой воды, например, воды родников, обусловлен именно присутствием солей жёсткости.",
"idx": 100,
"label": -1
}
```
#### RuCoS
- **Size of downloaded dataset files:** 56.62 MB
- **Size of the generated dataset:** 202.38 MB
- **Total amount of disk used:** 261.10 MB
An example of 'train'/'dev' looks as follows
```
{
"passage": "В Абхазии 24 августа на досрочных выборах выбирают нового президента. Кто бы ни стал победителем,
возможности его будут ограничены, говорят эксперты, опрошенные DW. В Абхазии 24 августа проходят досрочные выборы
президента не признанной международным сообществом республики. Толчком к их проведению стали массовые протесты в
конце мая 2014 года, в результате которых со своего поста был вынужден уйти действующий президент Абхазии Александр
Анкваб. Эксперты называют среди наиболее перспективных кандидатов находящегося в оппозиции политика Рауля Хаджимбу,
экс-главу службы безопасности Аслана Бжанию и генерала Мираба Кишмарию, исполняющего обязанности министра обороны.
У кого больше шансов\n\"Ставки делаются на победу Хаджимбы.\n@highlight\nВ Швеции задержаны двое граждан РФ в связи
с нападением на чеченского блогера\n@highlight\nТуризм в эпоху коронавируса: куда поехать? И ехать ли
вообще?\n@highlight\nКомментарий: Россия накануне эпидемии - виноватые назначены заранее",
"query": "Несмотря на то, что Кремль вложил много денег как в @placeholder, так и в Южную Осетию, об экономическом
восстановлении данных регионов говорить не приходится, считает Хальбах: \"Многие по-прежнему живут в
полуразрушенных домах и временных жилищах\".",
"entities":
[
"DW.",
"Абхазии ",
"Александр Анкваб.",
"Аслана Бжанию ",
"Мираба Кишмарию,",
"РФ ",
"Рауля Хаджимбу,",
"Россия ",
"Хаджимбы.",
"Швеции "
],
"answers":
[
"Абхазии"
],
"idx":
{
"passage": 500,
"query": 500
}
}
```
An example of 'test' looks as follows
```
{
"passage": "Почему и как изменится курс белорусского рубля? Какие инструменты следует предпочесть населению, чтобы
сохранить сбережения, DW рассказали финансовые аналитики Беларуси. На последних валютных торгах БВФБ 2015 года в
среду, 30 декабря, курс белорусского рубля к доллару - 18569, к евро - 20300, к российскому рублю - 255. В 2016
году белорусскому рублю пророчат падение как минимум на 12 процентов к корзине валют, к которой привязан его курс.
А чтобы избежать потерь, белорусам советуют диверсифицировать инвестиционные портфели. Чем обусловлены прогнозные
изменения котировок белорусского рубля, и какие финансовые инструменты стоит предпочесть, чтобы минимизировать риск
потерь?\n@highlight\nВ Германии за сутки выявлено более 100 новых заражений коронавирусом\n@highlight\nРыночные цены
на нефть рухнули из-за провала переговоров ОПЕК+\n@highlight\nВ Италии за сутки произошел резкий скачок смертей от
COVID-19",
"query": "Последнее, убежден аналитик, инструмент для узкого круга профессиональных инвесторов, культуры следить за
финансовым состоянием предприятий - такой, чтобы играть на рынке корпоративных облигаций, - в @placeholder пока нет.",
"entities":
[
"DW ",
"Беларуси.",
"Германии ",
"Италии ",
"ОПЕК+"
],
"answers": [],
"idx":
{
"passage": 500,
"query": 500
}
}
```
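In the RuCoS format above, the `query` is a cloze with an `@placeholder` slot, and `entities` lists candidate fillers mined from the passage (the raw strings keep trailing spaces and punctuation, e.g. `"Беларуси."`). A minimal sketch of substituting a candidate into the query — the field names follow the examples in this card, and the cleanup rule is an assumption:

```python
def fill_placeholder(query: str, entity: str) -> str:
    """Substitute a candidate entity into a RuCoS cloze query.

    Raw entity strings in the examples above carry trailing whitespace and
    punctuation, so they are normalized before substitution.
    """
    candidate = entity.strip().rstrip(".,")
    return query.replace("@placeholder", candidate)

query = "... в @placeholder пока нет."
for entity in ["Беларуси.", "Германии "]:
    print(fill_placeholder(query, entity))
```

A candidate whose filled-in query matches one of the strings in `answers` (after the same normalization) counts as correct; test-split instances ship with an empty `answers` list.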
### Data Fields
#### LiDiRus
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `entailment` (0), `not_entailment` (1)
- `sentence1`: a `string` feature
- `sentence2`: a `string` feature
- `knowledge`: a `string` feature with possible values `''`, `'World knowledge'`, `'Common sense'`
- `lexical-semantics`: a `string` feature
- `logic`: a `string` feature
- `predicate-argument-structure`: a `string` feature
#### RCB
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `entailment` (0), `contradiction` (1), `neutral` (2)
- `premise`: a `string` feature
- `hypothesis`: a `string` feature
- `verb`: a `string` feature
- `negation`: a `string` feature with possible values `'no_negation'`, `'negation'`, `''`, `'double_negation'`
#### PARus
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `choice1` (0), `choice2` (1)
- `premise`: a `string` feature
- `choice1`: a `string` feature
- `choice2`: a `string` feature
- `question`: a `string` feature with possible values `'cause'`, `'effect'`
#### MuSeRC
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `false` (0), `true` (1) (does the provided `answer` contain a factual response to the `question`)
- `paragraph`: a `string` feature
- `question`: a `string` feature
- `answer`: a `string` feature
#### TERRa
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `entailment` (0), `not_entailment` (1)
- `premise`: a `string` feature
- `hypothesis`: a `string` feature
#### RUSSE
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `false` (0), `true` (1) (whether the given `word` is used in the same sense in both sentences)
- `word`: a `string` feature
- `sentence1`: a `string` feature
- `sentence2`: a `string` feature
- `gold_sense1`: an `int32` feature
- `gold_sense2`: an `int32` feature
- `start1`: an `int32` feature
- `start2`: an `int32` feature
- `end1`: an `int32` feature
- `end2`: an `int32` feature
#### RWSD
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `false` (0), `true` (1) (whether the given spans are coreferential)
- `text`: a `string` feature
- `span1_index`: an `int32` feature
- `span2_index`: an `int32` feature
- `span1_text`: a `string` feature
- `span2_text`: a `string` feature
#### DaNetQA
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `false` (0), `true` (1) (yes/no answer to the `question` found in the `passage`)
- `question`: a `string` feature
- `passage`: a `string` feature
#### RuCoS
- `idx`: an `int32` feature
- `passage`: a `string` feature
- `query`: a `string` feature
- `entities`: a `list of strings` feature
- `answers`: a `list of strings` feature
[More Information Needed]
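The integer labels listed above can be mapped back to class names task by task. A small helper as a sketch — the mappings are copied from the Data Fields lists, and `-1` (seen in the test examples earlier in this card) marks a withheld gold label:

```python
# Label id -> class name, as listed in the Data Fields section above.
LABEL_NAMES = {
    "LiDiRus": ["entailment", "not_entailment"],
    "RCB": ["entailment", "contradiction", "neutral"],
    "PARus": ["choice1", "choice2"],
    "TERRa": ["entailment", "not_entailment"],
    "RUSSE": ["false", "true"],
    "RWSD": ["false", "true"],
    "DaNetQA": ["false", "true"],
}

def decode_label(task: str, label: int) -> str:
    # Test splits use -1 to mark withheld gold labels.
    if label == -1:
        return "unlabeled"
    return LABEL_NAMES[task][label]

print(decode_label("RCB", 2))      # neutral
print(decode_label("DaNetQA", 1))  # true
print(decode_label("TERRa", -1))   # unlabeled
```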
### Data Splits
#### LiDiRus
| |test|
|---|---:|
|LiDiRus|1104|
#### RCB
| |train|validation|test|
|----|---:|----:|---:|
|RCB|438|220|438|
#### PARus
| |train|validation|test|
|----|---:|----:|---:|
|PARus|400|100|500|
#### MuSeRC
| |train|validation|test|
|----|---:|----:|---:|
|MuSeRC|500|100|322|
#### TERRa
| |train|validation|test|
|----|---:|----:|---:|
|TERRa|2616|307|3198|
#### RUSSE
| |train|validation|test|
|----|---:|----:|---:|
|RUSSE|19845|8508|18892|
#### RWSD
| |train|validation|test|
|----|---:|----:|---:|
|RWSD|606|204|154|
#### DaNetQA
| |train|validation|test|
|----|---:|----:|---:|
|DaNetQA|1749|821|805|
#### RuCoS
| |train|validation|test|
|----|---:|----:|---:|
|RuCoS|72193|7577|7257|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
All our datasets are published under the MIT License.
### Citation Information
```
@article{shavrina2020russiansuperglue,
title={RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark},
author={Shavrina, Tatiana and Fenogenova, Alena and Emelyanov, Anton and Shevelev, Denis and Artemova, Ekaterina and Malykh, Valentin and Mikhailov, Vladislav and Tikhonova, Maria and Chertok, Andrey and Evlampiev, Andrey},
journal={arXiv preprint arXiv:2010.15925},
year={2020}
}
@misc{fenogenova2022russian,
title={Russian SuperGLUE 1.1: Revising the Lessons not Learned by Russian NLP models},
author={Alena Fenogenova and Maria Tikhonova and Vladislav Mikhailov and Tatiana Shavrina and Anton Emelyanov and Denis Shevelev and Alexandr Kukushkin and Valentin Malykh and Ekaterina Artemova},
year={2022},
eprint={2202.07791},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@slowwavesleep](https://github.com/slowwavesleep) for adding this dataset. |
open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v1 | 2023-09-22T13:40:02.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 994 | ---
pretty_name: Evaluation run of togethercomputer/GPT-JT-6B-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [togethercomputer/GPT-JT-6B-v1](https://huggingface.co/togethercomputer/GPT-JT-6B-v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-22T13:39:48.520000](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v1/blob/main/results_2023-09-22T13-39-48.520000.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.00041946308724832214,\n\
\ \"em_stderr\": 0.00020969854707829363,\n \"f1\": 0.04423657718120805,\n\
\ \"f1_stderr\": 0.0011409456494249344,\n \"acc\": 0.3324266847298275,\n\
\ \"acc_stderr\": 0.00819810174632109\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.00041946308724832214,\n \"em_stderr\": 0.00020969854707829363,\n\
\ \"f1\": 0.04423657718120805,\n \"f1_stderr\": 0.0011409456494249344\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.012130401819560273,\n \
\ \"acc_stderr\": 0.0030152942428909413\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6527229676400947,\n \"acc_stderr\": 0.013380909249751237\n\
\ }\n}\n```"
repo_url: https://huggingface.co/togethercomputer/GPT-JT-6B-v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_22T13_39_48.520000
path:
- '**/details_harness|drop|3_2023-09-22T13-39-48.520000.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-22T13-39-48.520000.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_22T13_39_48.520000
path:
- '**/details_harness|gsm8k|5_2023-09-22T13-39-48.520000.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-22T13-39-48.520000.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:44:05.719684.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:44:05.719684.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:44:05.719684.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_22T13_39_48.520000
path:
- '**/details_harness|winogrande|5_2023-09-22T13-39-48.520000.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-22T13-39-48.520000.parquet'
- config_name: results
data_files:
- split: 2023_07_19T15_44_05.719684
path:
- results_2023-07-19T15:44:05.719684.parquet
- split: 2023_09_22T13_39_48.520000
path:
- results_2023-09-22T13-39-48.520000.parquet
- split: latest
path:
- results_2023-09-22T13-39-48.520000.parquet
---
# Dataset Card for Evaluation run of togethercomputer/GPT-JT-6B-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/togethercomputer/GPT-JT-6B-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [togethercomputer/GPT-JT-6B-v1](https://huggingface.co/togethercomputer/GPT-JT-6B-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v1",
"harness_winogrande_5",
	split="latest")
```
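Each configuration exposes one split per run timestamp plus a `latest` alias. Because the timestamps use a fixed-width `YYYY_MM_DDTHH_MM_SS.ffffff` pattern, lexicographic comparison matches chronological order, so resolving the newest run from a list of split names needs no date parsing. A minimal sketch (the `newest_run_split` helper is illustrative, not part of the leaderboard tooling):

```python
def newest_run_split(split_names):
    """Return the most recent timestamped split, ignoring the 'latest' alias.

    Split names follow the fixed-width pattern YYYY_MM_DDTHH_MM_SS.ffffff,
    so lexicographic ordering coincides with chronological ordering.
    """
    timestamped = [s for s in split_names if s != "latest"]
    return max(timestamped)


# Split names as they appear in this card's "harness_winogrande_5" config.
splits = ["2023_07_19T15_44_05.719684", "2023_09_22T13_39_48.520000", "latest"]
print(newest_run_split(splits))  # 2023_09_22T13_39_48.520000
```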
## Latest results
These are the [latest results from run 2023-09-22T13:39:48.520000](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v1/blob/main/results_2023-09-22T13-39-48.520000.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.00041946308724832214,
"em_stderr": 0.00020969854707829363,
"f1": 0.04423657718120805,
"f1_stderr": 0.0011409456494249344,
"acc": 0.3324266847298275,
"acc_stderr": 0.00819810174632109
},
"harness|drop|3": {
"em": 0.00041946308724832214,
"em_stderr": 0.00020969854707829363,
"f1": 0.04423657718120805,
"f1_stderr": 0.0011409456494249344
},
"harness|gsm8k|5": {
"acc": 0.012130401819560273,
"acc_stderr": 0.0030152942428909413
},
"harness|winogrande|5": {
"acc": 0.6527229676400947,
"acc_stderr": 0.013380909249751237
}
}
```
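Once loaded, the payload above is a plain nested dict keyed by task name, with an aggregate `"all"` entry alongside the per-task entries. A minimal sketch of extracting per-task accuracy (the dict below reproduces a subset of the fields above purely for illustration):

```python
# Subset of the "latest" results payload shown above, inlined for illustration.
results = {
    "all": {"em": 0.00041946308724832214, "f1": 0.04423657718120805,
            "acc": 0.3324266847298275},
    "harness|drop|3": {"em": 0.00041946308724832214,
                       "f1": 0.04423657718120805},
    "harness|gsm8k|5": {"acc": 0.012130401819560273},
    "harness|winogrande|5": {"acc": 0.6527229676400947},
}

# Collect accuracy per task, skipping the aggregate "all" entry and any
# task (like drop) that reports em/f1 instead of acc.
per_task_acc = {task: metrics["acc"] for task, metrics in results.items()
                if task != "all" and "acc" in metrics}
print(per_task_acc)
```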
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
HuggingFaceH4/testing_codealpaca_small | 2023-04-12T21:57:24.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 3 | 990 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 31503
num_examples: 100
- name: test
num_bytes: 29802
num_examples: 100
download_size: 44006
dataset_size: 61305
---
# Dataset Card for "testing_codealpaca_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vwxyzjn/lm-human-preferences | 2023-09-01T02:02:15.000Z | [
"license:mit",
"region:us"
] | vwxyzjn | null | null | null | 0 | 990 | ---
license: mit
---
|
open-llm-leaderboard/details_Fredithefish__RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4 | 2023-09-28T15:50:12.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 989 | ---
pretty_name: Evaluation run of Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4](https://huggingface.co/Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Fredithefish__RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-28T15:50:00.560199](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4/blob/main/results_2023-09-28T15-50-00.560199.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.018246644295302015,\n\
\ \"em_stderr\": 0.0013706682452812888,\n \"f1\": 0.0714765100671141,\n\
\ \"f1_stderr\": 0.0018411955158404013,\n \"acc\": 0.32543219642729987,\n\
\ \"acc_stderr\": 0.007862138879264232\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.018246644295302015,\n \"em_stderr\": 0.0013706682452812888,\n\
\ \"f1\": 0.0714765100671141,\n \"f1_stderr\": 0.0018411955158404013\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \
\ \"acc_stderr\": 0.0022675371022545044\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6440410418310971,\n \"acc_stderr\": 0.013456740656273959\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|arc:challenge|25_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_28T15_50_00.560199
path:
- '**/details_harness|drop|3_2023-09-28T15-50-00.560199.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-28T15-50-00.560199.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_28T15_50_00.560199
path:
- '**/details_harness|gsm8k|5_2023-09-28T15-50-00.560199.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-28T15-50-00.560199.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hellaswag|10_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T14:47:41.742069.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T14:47:41.742069.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T14:47:41.742069.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_28T15_50_00.560199
path:
- '**/details_harness|winogrande|5_2023-09-28T15-50-00.560199.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-28T15-50-00.560199.parquet'
- config_name: results
data_files:
- split: 2023_07_19T14_47_41.742069
path:
- results_2023-07-19T14:47:41.742069.parquet
- split: 2023_09_28T15_50_00.560199
path:
- results_2023-09-28T15-50-00.560199.parquet
- split: latest
path:
- results_2023-09-28T15-50-00.560199.parquet
---
# Dataset Card for Evaluation run of Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4](https://huggingface.co/Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Fredithefish__RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-28T15:50:00.560199](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4/blob/main/results_2023-09-28T15-50-00.560199.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.018246644295302015,
"em_stderr": 0.0013706682452812888,
"f1": 0.0714765100671141,
"f1_stderr": 0.0018411955158404013,
"acc": 0.32543219642729987,
"acc_stderr": 0.007862138879264232
},
"harness|drop|3": {
"em": 0.018246644295302015,
"em_stderr": 0.0013706682452812888,
"f1": 0.0714765100671141,
"f1_stderr": 0.0018411955158404013
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022545044
},
"harness|winogrande|5": {
"acc": 0.6440410418310971,
"acc_stderr": 0.013456740656273959
}
}
```
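As a hedged sketch (the field names are taken directly from the JSON above, and the dictionary simply mirrors that payload), the aggregated metrics can be pulled out programmatically, with the `"all"` block serving as a fallback:

```python
# Minimal sketch: reading aggregated metrics from the results payload above.
# The dict mirrors the "latest results" JSON shown in this card; loading the
# "results" configuration with load_dataset would expose the same fields.
results = {
    "all": {"em": 0.018246644295302015, "f1": 0.0714765100671141,
            "acc": 0.32543219642729987},
    "harness|winogrande|5": {"acc": 0.6440410418310971},
    "harness|gsm8k|5": {"acc": 0.006823351023502654},
}

def metric(results, task, name):
    """Fetch one metric for a task, falling back to the aggregated 'all' block."""
    return results.get(task, results["all"]).get(name)

print(round(metric(results, "harness|winogrande|5", "acc"), 3))  # 0.644
```

This keeps per-task lookups and the aggregated leaderboard numbers behind one small helper instead of hand-indexing the nested dict.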
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
hatexplain | 2023-01-25T14:31:48.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"hate-speech-detection",
"arxiv:2012.10289",
"arxiv:1703.04009",
"arxiv:1908.11049",
"arxiv:1812.01693",
"region:us"
] | null | Hatexplain is the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which their labelling decision (as hate, offensive or normal) is based. | @misc{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee},
year={2020},
eprint={2012.10289},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 5 | 988 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: hatexplain
pretty_name: hatexplain
tags:
- hate-speech-detection
dataset_info:
features:
- name: id
dtype: string
- name: annotators
sequence:
- name: label
dtype:
class_label:
names:
'0': hatespeech
'1': normal
'2': offensive
- name: annotator_id
dtype: int32
- name: target
sequence: string
- name: rationales
sequence:
sequence: int32
- name: post_tokens
sequence: string
config_name: plain_text
splits:
- name: train
num_bytes: 7114730
num_examples: 15383
- name: validation
num_bytes: 884940
num_examples: 1922
- name: test
num_bytes: 884784
num_examples: 1924
download_size: 12848091
dataset_size: 8884454
---
# Dataset Card for hatexplain
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/punyajoy/HateXplain/
- **Paper:** https://arxiv.org/abs/2012.10289
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Punyajoy Saha (punyajoys@iitkgp.ac.in)
### Dataset Summary
Hatexplain is the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which their labeling decision (as hate, offensive or normal) is based.
WARNING: This dataset contains content that is offensive and/or hateful in nature.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The language supported is English.
## Dataset Structure
### Data Instances
Sample Entry:
```
{
"id": "24198545_gab",
"annotators": [
{
"label": 0, # hatespeech
"annotator_id": 4,
"target": ["African"]
},
{
"label": 0, # hatespeech
"annotator_id": 3,
"target": ["African"]
},
{
"label": 2, # offensive
"annotator_id": 5,
"target": ["African"]
}
],
"rationales":[
[0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
],
"post_tokens": ["and","this","is","why","i","end","up","with","nigger","trainee","doctors","who","can","not","speak","properly","lack","basic","knowledge","of","biology","it","truly","scary","if","the","public","only","knew"]
}
```
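A minimal sketch of resolving the final label for an entry like the one above via majority voting (the label list is copied from the sample; note that when loaded with `datasets`, sequence features typically surface as a dict of lists, e.g. `example["annotators"]["label"]`, which is an assumption worth verifying):

```python
from collections import Counter

# Labels from the three annotators in the sample entry above:
# 0 = hatespeech, 1 = normal, 2 = offensive
annotator_labels = [0, 0, 2]
names = ["hatespeech", "normal", "offensive"]

# Majority vote: take the most common label; require at least 2/3 agreement.
label, count = Counter(annotator_labels).most_common(1)[0]
majority = names[label] if count >= 2 else None  # None when no majority

print(majority)  # hatespeech
```

This mirrors the majority-voting step described later in the annotation process section.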
### Data Fields
:small_blue_diamond:post_id : Unique id for each post<br/>
:small_blue_diamond:annotators : The list of annotations from each annotator<br/>
:small_blue_diamond:annotators[label] : The label assigned by the annotator to this post. Possible values: `hatespeech` (0), `normal` (1) or `offensive` (2)<br/>
:small_blue_diamond:annotators[annotator_id] : The unique Id assigned to each annotator<br/>
:small_blue_diamond:annotators[target] : A list of target community present in the post<br/>
:small_blue_diamond:rationales : A list of rationales selected by annotators. Each rationale is a list of 0/1 values, one per token; a value of 1 means that the token is part of the rationale selected by the annotator. To get the corresponding token, use the same index position in "post_tokens"<br/>
:small_blue_diamond:post_tokens : The list of tokens representing the post which was annotated<br/>
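The index alignment between `rationales` and `post_tokens` described above can be sketched as follows (toy data, not from the dataset):

```python
# Sketch: recover the tokens flagged by one annotator's rationale mask.
# The mask and token list are aligned by index, as described above.
post_tokens = ["this", "is", "a", "bad", "post"]
rationale   = [0, 0, 0, 1, 1]   # toy 0/1 mask, same length as post_tokens

rationale_tokens = [tok for tok, flag in zip(post_tokens, rationale) if flag]
print(rationale_tokens)  # ['bad', 'post']
```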
### Data Splits
[Post_id_divisions](https://github.com/hate-alert/HateXplain/blob/master/Data/post_id_divisions.json) contains a dictionary with the train, validation and test post ids used to divide the dataset into train, validation and test sets in an 8:1:1 ratio.
## Dataset Creation
### Curation Rationale
The existing hate speech datasets do not provide the human rationale that could justify the reasoning behind their annotation process. This dataset allows researchers to move a step in this direction. The dataset provides token-level annotations for the annotation decision.
### Source Data
We collected the data from Twitter and Gab.
#### Initial Data Collection and Normalization
We combined the lexicon set provided by [Davidson 2017](https://arxiv.org/abs/1703.04009), [Ousidhoum 2019](https://arxiv.org/abs/1908.11049), and [Mathew 2019](https://arxiv.org/abs/1812.01693) to generate a single lexicon. We do not consider reposts and remove duplicates. We also ensure that the posts do not contain links, pictures, or videos, as they indicate additional information that might not be available to the annotators. However, we do not exclude the emojis from the text as they might carry important information for the hate and offensive speech labeling task.
#### Who are the source language producers?
The dataset is human generated using Amazon Mechanical Turk (AMT).
### Annotations
#### Annotation process
Each post in our dataset contains three types of annotations. First, whether the text is hate speech, offensive speech, or normal. Second, the target communities in the text. Third, if the text is considered hate speech or offensive by the majority of the annotators, we further ask the annotators to annotate parts of the text, i.e., words or phrases that could be a potential reason for the given annotation.
Before starting the annotation task, workers are explicitly warned that the annotation task displays some hateful or offensive content. We prepare instructions for workers that clearly explain the goal of the annotation task, how to annotate spans and also include a definition for each category. We provide multiple examples with classification, target community and span annotations to help the annotators understand the task.
#### Who are the annotators?
To ensure a high-quality dataset, we use built-in MTurk qualification requirements, namely the HIT Approval Rate (95%) for all Requesters' HITs and the Number of HITs Approved (5,000) requirements.
Pilot annotation: In the pilot task, each annotator was provided with 20 posts and they were required to do the hate/offensive speech classification as well as identify the target community (if any). In order to have a clear understanding of the task, they were provided with multiple examples along with explanations for the labelling process. The main purpose of the pilot task was to shortlist those annotators who were able to do the classification accurately. We also collected feedback from annotators to improve the main annotation task. A total of 621 annotators took part in the pilot task. Out of these, 253 were selected for the main task.
Main annotation: After the pilot annotation, once we had ascertained the quality of the annotators, we started with the main annotation task. In each round, we would select a batch of around 200 posts. Each post was annotated by three annotators, then majority voting was applied to decide the final label. The final dataset is composed of 9,055 posts from Twitter and 11,093 posts from Gab. The Krippendorff's alpha for the inter-annotator agreement is 0.46, which is higher than for other hate speech datasets.
### Personal and Sensitive Information
The posts were anonymized by replacing the usernames with the `<user>` token.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could prove beneficial to develop models which are more explainable and less biased.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
The dataset has some limitations. First is the lack of external context. The dataset lacks any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Another issue is the focus on English language and lack of multilingual hate speech.
## Additional Information
### Dataset Curators
Binny Mathew - IIT Kharagpur, India
Punyajoy Saha - IIT Kharagpur, India
Seid Muhie Yimam - Universität Hamburg, Germany
Chris Biemann - Universität Hamburg, Germany
Pawan Goyal - IIT Kharagpur, India
Animesh Mukherjee - IIT Kharagpur, India
### Licensing Information
MIT License
### Citation Information
```bibtex
@inproceedings{mathew2021hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2021}
}
```
### Contributions
Thanks to [@kushal2000](https://github.com/kushal2000) for adding this dataset. |
BeIR/climate-fever | 2022-10-23T06:04:48.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 1 | 988 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100K<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# A sketch of loading one of the preprocessed datasets with the `datasets`
# library. The repository and config names ("corpus", "queries") follow the
# BeIR organisation's layout on the Hugging Face Hub and are assumptions here.
from datasets import load_dataset

corpus = load_dataset("BeIR/climate-fever", "corpus")
queries = load_dataset("BeIR/climate-fever", "queries")
```
### Supported Tasks and Leaderboards
The dataset supports zero-shot evaluation of information retrieval models, with effectiveness typically reported using ranking metrics such as nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
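As a rough sketch of the format described above (not part of the official `beir` toolkit), the three files can be parsed with nothing but the standard library; the sample strings below are illustrative:

```python
import csv
import io
import json

def load_corpus(jsonl_text):
    """Parse a corpus .jsonl string into {doc_id: {"title": ..., "text": ...}}."""
    docs = {}
    for line in jsonl_text.splitlines():
        rec = json.loads(line)
        docs[rec["_id"]] = {"title": rec.get("title", ""), "text": rec["text"]}
    return docs

def load_qrels(tsv_text):
    """Parse a qrels .tsv string (header: query-id, corpus-id, score) into nested dicts."""
    qrels = {}
    rows = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(rows)  # skip the header row
    for query_id, corpus_id, score in rows:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

corpus = load_corpus('{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born physicist."}')
qrels = load_qrels("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")
print(corpus["doc1"]["title"])  # Albert Einstein
print(qrels["q1"]["doc1"])      # 1
```

The queries file has the same jsonlines shape as the corpus (minus the `title` field), so `load_corpus` can be adapted for it in the obvious way.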
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
indonlp/NusaX-senti | 2023-01-24T17:02:06.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ace",
"language:ban",
"language:bjn",
"language:bug",
"language:en",
"language:id",
"language:jv",
"language:mad",
"language:min",
"language:nij",
"language:su",
"language:bbc",
"license:cc-by-sa-4.0",
"arxiv:2205.15960",
"region:us"
] | indonlp | NusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-Senti is a 3-labels (positive, neutral, negative) sentiment analysis dataset for 10 Indonesian local languages + Indonesian and English. | @misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 3 | 986 | ---
pretty_name: NusaX-senti
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
language:
- ace
- ban
- bjn
- bug
- en
- id
- jv
- mad
- min
- nij
- su
- bbc
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: lang
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
---
# Dataset Card for NusaX-Senti
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub](https://github.com/IndoNLP/nusax/tree/main/datasets/sentiment)
- **Paper:** [EACL 2022](https://arxiv.org/abs/2205.15960)
- **Point of Contact:** [GitHub](https://github.com/IndoNLP/nusax/tree/main/datasets/sentiment)
### Dataset Summary
NusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-Senti is a 3-labels (positive, neutral, negative) sentiment analysis dataset for 10 Indonesian local languages + Indonesian and English.
### Supported Tasks and Leaderboards
- Sentiment analysis for Indonesian languages
### Languages
- `ace`: Acehnese
- `ban`: Balinese
- `bjn`: Banjarese
- `bug`: Buginese
- `eng`: English
- `ind`: Indonesian
- `jav`: Javanese
- `mad`: Madurese
- `min`: Minangkabau
- `nij`: Ngaju
- `sun`: Sundanese
- `bbc`: Toba Batak
## Dataset Creation
### Curation Rationale
There is a shortage of NLP research and resources for the Indonesian languages, despite the country having over 700 languages. With this in mind, we have created this dataset to support future research for the underrepresented languages in Indonesia.
### Source Data
#### Initial Data Collection and Normalization
NusaX-Senti is a sentiment analysis dataset built from Indonesian source data and expertly translated into the other languages by native speakers.
#### Who are the source language producers?
The data was produced by humans (native speakers).
### Annotations
#### Annotation process
NusaX-Senti is derived from SmSA, the largest publicly available dataset for Indonesian sentiment analysis, which comprises comments and reviews from multiple online platforms. To ensure the quality of our dataset, we manually reviewed all sentences and removed abusive language and personally identifying information. To keep the label distribution balanced, we randomly picked 1,000 samples through stratified sampling and then translated them into the corresponding languages.
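The label-stratified sampling mentioned above can be sketched like this; the field names and the even per-label split are assumptions for illustration, not the curators' actual pipeline:

```python
import random
from collections import Counter

def stratified_sample(examples, n_total, seed=0):
    """Draw a label-balanced sample: n_total split evenly across the labels."""
    rng = random.Random(seed)
    by_label = {}
    for ex in examples:
        by_label.setdefault(ex["label"], []).append(ex)
    per_label = n_total // len(by_label)
    picked = []
    for label in sorted(by_label):
        picked.extend(rng.sample(by_label[label], per_label))
    return picked

data = [{"id": i, "label": lab} for i, lab in enumerate(["positive", "neutral", "negative"] * 20)]
sample = stratified_sample(data, 9)
print(Counter(ex["label"] for ex in sample))  # three examples of each label
```

Sampling an equal number of examples per label keeps the class distribution balanced regardless of how skewed the source data is.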
#### Who are the annotators?
Native speakers of both Indonesian and the corresponding languages.
Annotators were compensated based on the number of translated samples.
### Personal and Sensitive Information
Personal information is removed.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
NusaX is created from review text. These data sources may contain some bias.
### Other Known Limitations
No other known limitations
## Additional Information
### Licensing Information
CC-BY-SA 4.0.
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Please contact authors for any information on the dataset.
### Citation Information
```
@misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
|
Graphcore/vqa | 2022-10-25T08:41:02.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | Graphcore | VQA is a new dataset containing open-ended questions about images.
These questions require an understanding of vision, language and commonsense knowledge to answer. | @inproceedings{antol2015vqa,
title={Vqa: Visual question answering},
author={Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C Lawrence and Parikh, Devi},
booktitle={Proceedings of the IEEE international conference on computer vision},
pages={2425--2433},
year={2015}
} | null | 1 | 981 | ---
language:
- en
license:
- cc-by-4.0
---
|
alexandrainst/scandi-qa | 2023-01-16T13:51:25.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:mkqa",
"source_datasets:natural_questions",
"language:da",
"language:sv",
"language:no",
"license:cc-by-sa-4.0",
"region:us"
] | alexandrainst | ScandiQA is a dataset of questions and answers in the Danish, Norwegian, and Swedish
languages. All samples come from the Natural Questions (NQ) dataset, which is a large
question answering dataset from Google searches. The Scandinavian questions and answers
come from the MKQA dataset, where 10,000 NQ samples were manually translated into,
among others, Danish, Norwegian, and Swedish. However, this did not include a
translated context, hindering the training of extractive question answering models.
We merged the NQ dataset with the MKQA dataset, and extracted contexts as either "long
answers" from the NQ dataset, being the paragraph in which the answer was found, or
otherwise we extract the context by locating the paragraphs which have the largest
cosine similarity to the question, and which contains the desired answer.
Further, many answers in the MKQA dataset were "language normalised": for instance, all
date answers were converted to the format "YYYY-MM-DD", meaning that in most cases
these answers are not appearing in any paragraphs. We solve this by extending the MKQA
answers with plausible "answer candidates", being slight perturbations or translations
of the answer.
With the contexts extracted, we translated these to Danish, Swedish and Norwegian using
the DeepL translation service for Danish and Swedish, and the Google Translation
service for Norwegian. After translation we ensured that the Scandinavian answers do
indeed occur in the translated contexts.
As we are filtering the MKQA samples at both the "merging stage" and the "translation
stage", we are not able to fully convert the 10,000 samples to the Scandinavian
languages, and instead get roughly 8,000 samples per language. These have further been
split into a training, validation and test split, with the latter two containing
roughly 750 samples. The splits have been created in such a way that the proportion of
samples without an answer is roughly the same in each split. | # @InProceedings{huggingface:dataset,
# title = {ScandiQA: A Scandinavian Question Answering Dataset},
# author={Dan Saattrup Nielsen},
# year={2022}
# }
# | null | 7 | 980 | ---
pretty_name: ScandiQA
language:
- da
- sv
- no
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- mkqa
- natural_questions
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for ScandiQA
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/scandi-qa>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 136 MB
### Dataset Summary
ScandiQA is a dataset of questions and answers in the Danish, Norwegian, and Swedish
languages. All samples come from the Natural Questions (NQ) dataset, which is a large
question answering dataset from Google searches. The Scandinavian questions and answers
come from the MKQA dataset, where 10,000 NQ samples were manually translated into,
among others, Danish, Norwegian, and Swedish. However, this did not include a
translated context, hindering the training of extractive question answering models.
We merged the NQ dataset with the MKQA dataset, and extracted contexts as either "long
answers" from the NQ dataset, being the paragraph in which the answer was found, or
otherwise we extract the context by locating the paragraphs which have the largest
cosine similarity to the question, and which contains the desired answer.
Further, many answers in the MKQA dataset were "language normalised": for instance, all
date answers were converted to the format "YYYY-MM-DD", meaning that in most cases
these answers are not appearing in any paragraphs. We solve this by extending the MKQA
answers with plausible "answer candidates", being slight perturbations or translations
of the answer.
With the contexts extracted, we translated these to Danish, Swedish and Norwegian using
the [DeepL translation service](https://www.deepl.com/pro-api?cta=header-pro-api) for
Danish and Swedish, and the [Google Translation
service](https://cloud.google.com/translate/docs/reference/rest/) for Norwegian. After
translation we ensured that the Scandinavian answers do indeed occur in the translated
contexts.
As we are filtering the MKQA samples at both the "merging stage" and the "translation
stage", we are not able to fully convert the 10,000 samples to the Scandinavian
languages, and instead get roughly 8,000 samples per language. These have further been
split into a training, validation and test split, with the latter two containing
roughly 750 samples. The splits have been created in such a way that the proportion of
samples without an answer is roughly the same in each split.
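The context-extraction fallback described above (choosing, among paragraphs that contain the answer, the one with the largest cosine similarity to the question) can be sketched with bag-of-words vectors; the real pipeline's text representation and any thresholds are not specified here, so this is only an illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def pick_context(question, paragraphs, answer):
    """Return the paragraph most similar to the question that also contains the answer."""
    q_vec = Counter(question.lower().split())
    candidates = [p for p in paragraphs if answer.lower() in p.lower()]
    if not candidates:
        return None
    return max(candidates, key=lambda p: cosine(q_vec, Counter(p.lower().split())))

paragraphs = [
    "Copenhagen has been the capital of Denmark since the 15th century.",
    "Denmark is a Nordic country in Northern Europe.",
]
print(pick_context("what is the capital of denmark", paragraphs, "Copenhagen"))
# prints the Copenhagen paragraph
```

Requiring the answer string to appear in the chosen paragraph is what makes the resulting samples usable for extractive question answering.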
### Supported Tasks and Leaderboards
Training machine learning models for extractive question answering is the intended task
for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`) and Norwegian (`no`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 136 MB
An example from the `train` split of the `da` subset looks as follows.
```
{
'example_id': 123,
'question': 'Er dette en test?',
'answer': 'Dette er en test',
'answer_start': 0,
'context': 'Dette er en testkontekst.',
'answer_en': 'This is a test',
'answer_start_en': 0,
'context_en': "This is a test context.",
'title_en': 'Train test'
}
```
### Data Fields
The data fields are the same among all splits.
- `example_id`: an `int64` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `answer_start`: an `int64` feature.
- `context`: a `string` feature.
- `answer_en`: a `string` feature.
- `answer_start_en`: an `int64` feature.
- `context_en`: a `string` feature.
- `title_en`: a `string` feature.
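Since `answer_start` (and `answer_start_en`) are character offsets into their contexts, a quick sanity check over loaded examples can be sketched as follows, assuming the standard extractive-QA convention:

```python
def answer_matches_offset(example):
    """Verify the answer string really starts at `answer_start` in the context."""
    start = example["answer_start"]
    return example["context"][start:start + len(example["answer"])] == example["answer"]

example = {
    "question": "Er dette en test?",
    "answer": "Dette er en test",
    "answer_start": 0,
    "context": "Dette er en testkontekst.",
}
print(answer_matches_offset(example))  # True
```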
### Data Splits
| name | train | validation | test |
|----------|------:|-----------:|-----:|
| da | 6311 | 749 | 750 |
| sv | 6299 | 750 | 749 |
| no | 6314 | 749 | 750 |
## Dataset Creation
### Curation Rationale
The Scandinavian languages do not have any gold standard question answering dataset.
This dataset is not quite gold standard either, but since both the questions and answers
are manually translated, it is a solid silver standard dataset.
### Source Data
The original data was collected from the [MKQA](https://github.com/apple/ml-mkqa/) and
[Natural Questions](https://ai.google.com/research/NaturalQuestions) datasets from
Apple and Google, respectively.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
|
fujiki/oasst1-89k-ja-reformat-v1 | 2023-10-07T16:36:18.000Z | [
"license:apache-2.0",
"region:us"
] | fujiki | null | null | null | 0 | 979 | ---
license: apache-2.0
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: instructions
sequence: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 58992730
num_examples: 33919
download_size: 21655251
dataset_size: 58992730
---
|
darentang/sroie | 2021-12-09T15:11:29.000Z | [
"region:us"
] | darentang | https://arxiv.org/abs/2103.10213 | @article{2019,
title={ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction},
url={http://dx.doi.org/10.1109/ICDAR.2019.00244},
DOI={10.1109/icdar.2019.00244},
journal={2019 International Conference on Document Analysis and Recognition (ICDAR)},
publisher={IEEE},
author={Huang, Zheng and Chen, Kai and He, Jianhua and Bai, Xiang and Karatzas, Dimosthenis and Lu, Shijian and Jawahar, C. V.},
year={2019},
month={Sep}
} | null | 1 | 978 | Entry not found |
Tevatron/wikipedia-trivia | 2021-09-13T23:34:51.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov,
Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
} | null | 1 | 977 | Entry not found |
gpt3mix/sst2 | 2021-05-18T08:59:33.000Z | [
"region:us"
] | gpt3mix | null | null | null | 0 | 975 | Entry not found |
hans | 2023-04-05T10:06:58.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1902.01007",
"region:us"
] | null | The HANS dataset is an NLI evaluation set that tests specific hypotheses about invalid heuristics that NLI models are likely to learn. | @article{DBLP:journals/corr/abs-1902-01007,
author = {R. Thomas McCoy and
Ellie Pavlick and
Tal Linzen},
title = {Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural
Language Inference},
journal = {CoRR},
volume = {abs/1902.01007},
year = {2019},
url = {http://arxiv.org/abs/1902.01007},
archivePrefix = {arXiv},
eprint = {1902.01007},
timestamp = {Tue, 21 May 2019 18:03:36 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1902-01007.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 3 | 970 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: hans
pretty_name: Heuristic Analysis for NLI Systems
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': non-entailment
- name: parse_premise
dtype: string
- name: parse_hypothesis
dtype: string
- name: binary_parse_premise
dtype: string
- name: binary_parse_hypothesis
dtype: string
- name: heuristic
dtype: string
- name: subcase
dtype: string
- name: template
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 15916371
num_examples: 30000
- name: validation
num_bytes: 15893137
num_examples: 30000
download_size: 30947358
dataset_size: 31809508
---
# Dataset Card for "hans"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/tommccoy1/hans](https://github.com/tommccoy1/hans)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 30.94 MB
- **Size of the generated dataset:** 31.81 MB
- **Total amount of disk used:** 62.76 MB
### Dataset Summary
The HANS dataset is an NLI evaluation set that tests specific hypotheses about invalid heuristics that NLI models are likely to learn.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 30.94 MB
- **Size of the generated dataset:** 31.81 MB
- **Total amount of disk used:** 62.76 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `non-entailment` (1).
- `parse_premise`: a `string` feature.
- `parse_hypothesis`: a `string` feature.
- `binary_parse_premise`: a `string` feature.
- `binary_parse_hypothesis`: a `string` feature.
- `heuristic`: a `string` feature.
- `subcase`: a `string` feature.
- `template`: a `string` feature.
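Because HANS is labeled two-way while NLI models are typically trained with three classes, a common evaluation step is to collapse `neutral` and `contradiction` predictions into `non-entailment` before scoring. A minimal sketch of that mapping (the MNLI-style label order assumed here is an illustration — check your model's actual label configuration):

```python
def collapse_to_hans(three_way_label: int) -> int:
    """Map a three-way NLI prediction onto HANS's two-way scheme.

    Assumes the common MNLI ordering 0=entailment, 1=neutral,
    2=contradiction; HANS uses 0=entailment, 1=non-entailment.
    """
    return 0 if three_way_label == 0 else 1

# A model predicting "neutral" or "contradiction" counts as "non-entailment".
predictions = [0, 1, 2, 0]
hans_predictions = [collapse_to_hans(p) for p in predictions]
print(hans_predictions)  # [0, 1, 1, 0]
```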
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|30000| 30000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1902-01007,
author = {R. Thomas McCoy and
Ellie Pavlick and
Tal Linzen},
title = {Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural
Language Inference},
journal = {CoRR},
volume = {abs/1902.01007},
year = {2019},
url = {http://arxiv.org/abs/1902.01007},
archivePrefix = {arXiv},
eprint = {1902.01007},
timestamp = {Tue, 21 May 2019 18:03:36 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1902-01007.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
GabeHD/pokemon-type-captions | 2022-10-23T04:40:59.000Z | [
"region:us"
] | GabeHD | null | null | null | 3 | 970 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 19372532.0
num_examples: 898
download_size: 0
dataset_size: 19372532.0
---
# Dataset Card for Pokémon type captions
Contains official artwork and type-specific captions for Pokémon #1-898 (Bulbasaur-Calyrex).
Each Pokémon is represented once, by its default form from [PokéAPI](https://pokeapi.co/).
Each row contains `image` and `text` keys:
- `image` is a 475x475 PIL jpg of the Pokémon's official artwork.
- `text` is a label describing the Pokémon by its type(s)
## Attributions
_Images and typing information pulled from [PokéAPI](https://pokeapi.co/)_
_Based on the [Lambda Labs Pokémon Blip Captions Dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)_
|
banghua/tldr_reward_model_labeled | 2023-09-21T19:08:04.000Z | [
"region:us"
] | banghua | null | null | null | 0 | 968 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 300444471.0
num_examples: 176163
download_size: 177215543
dataset_size: 300444471.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tldr_reward_model_labeled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hendrycks/ethics | 2023-04-19T18:55:00.000Z | [
"language:en",
"license:mit",
"AI Alignment",
"arxiv:2008.02275",
"region:us"
] | hendrycks | A benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality. | @article{hendrycks2020aligning,
title={Aligning ai with shared human values},
author={Hendrycks, Dan and Burns, Collin and Basart, Steven and Critch, Andrew and Li, Jerry and Song, Dawn and Steinhardt, Jacob},
journal={arXiv preprint arXiv:2008.02275},
year={2020}
} | null | 6 | 965 | ---
license: mit
language: en
dataset_info:
- config_name: default
features:
- name: label
dtype: int64
- name: input
dtype: string
- config_name: commonsense
features:
- name: label
dtype: int32
- name: input
dtype: string
splits:
- name: train
num_bytes: 14429921
num_examples: 13910
- name: validation
num_bytes: 3148616
num_examples: 3885
- name: test
num_bytes: 3863068
num_examples: 3964
download_size: 21625153
dataset_size: 21441605
- config_name: deontology
features:
- name: label
dtype: int32
- name: scenario
dtype: string
- name: excuse
dtype: string
splits:
- name: train
num_bytes: 1854277
num_examples: 18164
- name: validation
num_bytes: 369318
num_examples: 3596
- name: test
num_bytes: 359268
num_examples: 3536
download_size: 2384007
dataset_size: 2582863
- config_name: justice
features:
- name: label
dtype: int32
- name: scenario
dtype: string
splits:
- name: train
num_bytes: 2423889
num_examples: 21791
- name: validation
num_bytes: 297935
num_examples: 2704
- name: test
num_bytes: 228008
num_examples: 2052
download_size: 2837375
dataset_size: 2949832
- config_name: utilitarianism
features:
- name: baseline
dtype: string
- name: less_pleasant
dtype: string
splits:
- name: train
num_bytes: 2186713
num_examples: 13737
- name: validation
num_bytes: 730391
num_examples: 4807
- name: test
num_bytes: 668429
num_examples: 4271
download_size: 3466564
dataset_size: 3585533
- config_name: virtue
features:
- name: label
dtype: int32
- name: scenario
dtype: string
splits:
- name: train
num_bytes: 2605021
num_examples: 28245
- name: validation
num_bytes: 467254
num_examples: 4975
- name: test
num_bytes: 452491
num_examples: 4780
download_size: 3364070
dataset_size: 3524766
tags:
- AI Alignment
---
# Dataset Card for ETHICS
This is the data from [Aligning AI With Shared Human Values](https://arxiv.org/pdf/2008.02275) by Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt, published at ICLR 2021.
For more information, see the [Github Repo](https://github.com/hendrycks/ethics).
## Dataset Summary
This dataset provides ethics-based tasks for evaluating language models for AI alignment.
## Loading Data
To load this data, you can use HuggingFace datasets and the dataloader script.
```
from datasets import load_dataset
load_dataset("hendrycks/ethics", "commonsense")
```
Where `commonsense` is one of the following sections: commonsense, deontology, justice, utilitarianism, and virtue.
### Citation Information
```
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
|
esnli | 2023-04-05T10:05:24.000Z | [
"language:en",
"region:us"
] | null | The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to
include human-annotated natural language explanations of the entailment
relations. | @incollection{NIPS2018_8163,
title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
booktitle = {Advances in Neural Information Processing Systems 31},
editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
pages = {9539--9549},
year = {2018},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
} | null | 15 | 964 | ---
language:
- en
paperswithcode_id: e-snli
pretty_name: e-SNLI
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: explanation_1
dtype: string
- name: explanation_2
dtype: string
- name: explanation_3
dtype: string
config_name: plain_text
splits:
- name: test
num_bytes: 3387169
num_examples: 9824
- name: train
num_bytes: 108024142
num_examples: 549367
- name: validation
num_bytes: 3423725
num_examples: 9842
download_size: 204516010
dataset_size: 114835036
---
# Dataset Card for "esnli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/OanaMariaCamburu/e-SNLI](https://github.com/OanaMariaCamburu/e-SNLI)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 204.51 MB
- **Size of the generated dataset:** 114.84 MB
- **Total amount of disk used:** 319.35 MB
### Dataset Summary
The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to
include human-annotated natural language explanations of the entailment
relations.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 204.51 MB
- **Size of the generated dataset:** 114.84 MB
- **Total amount of disk used:** 319.35 MB
An example of 'validation' looks as follows.
```
{
"explanation_1": "A woman must be present to smile.",
"explanation_2": "A woman smiling implies that she is present.",
"explanation_3": "A smiling woman is also present.",
"hypothesis": "A woman is present.",
"label": 0,
"premise": "A woman smiles at the child."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `explanation_1`: a `string` feature.
- `explanation_2`: a `string` feature.
- `explanation_3`: a `string` feature.
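As an illustration, these fields can be assembled into a single explanation-augmented training string. A minimal sketch (the template below is a hypothetical format for demonstration, not one prescribed by the dataset authors; the sample values are copied from the instance shown above):

```python
# Label indices follow the dataset's class_label definition.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def format_example(example: dict) -> str:
    """Render one e-SNLI row as a label-plus-explanation target string."""
    return (
        f"Premise: {example['premise']}\n"
        f"Hypothesis: {example['hypothesis']}\n"
        f"Label: {LABELS[example['label']]}\n"
        f"Explanation: {example['explanation_1']}"
    )

sample = {
    "premise": "A woman smiles at the child.",
    "hypothesis": "A woman is present.",
    "label": 0,
    "explanation_1": "A woman must be present to smile.",
}
print(format_example(sample))
```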
### Data Splits
| name |train |validation|test|
|----------|-----:|---------:|---:|
|plain_text|549367| 9842|9824|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@incollection{NIPS2018_8163,
title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
booktitle = {Advances in Neural Information Processing Systems 31},
editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
pages = {9539--9549},
year = {2018},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
RikoteMaster/isear_for_llama2 | 2023-08-03T13:01:30.000Z | [
"region:us"
] | RikoteMaster | null | null | null | 0 | 960 | ---
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 3715314
num_examples: 7499
- name: validation
num_bytes: 645323
num_examples: 1324
- name: test
num_bytes: 854222
num_examples: 1879
download_size: 567800
dataset_size: 5214859
---
# Dataset Card for "isear_for_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kyujinpy/KOpen-platypus | 2023-10-06T17:07:39.000Z | [
"size_categories:10K<n<100K",
"language:en",
"language:ko",
"license:cc-by-4.0",
"arxiv:2308.07317",
"region:us"
] | kyujinpy | null | null | null | 18 | 960 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_examples: 24926
language:
- en
- ko
size_categories:
- 10K<n<100K
---
# KOpenPlatypus: Korean Translation dataset about Open-Platypus
## Korean Translation Method
I used the [DeepL Pro API](https://www.deepl.com/ko/pro/change-plan?cta=header-pro#single) together with Selenium.
The translation took about 140 hours.
## Korean Translation post-processing





In addition, the following post-processing steps were applied. (*More than 2,000 code-related samples were corrected by hand.)
1. Code and comments are kept as-is; only the explanatory text is translated into Korean
2. In addition to step 1, outputs such as Python, Java, C++, and XML are preserved in their original form as much as possible
3. Standalone numbers and English text are carried over unchanged from the original
4. Incomplete DeepL Pro translations are fixed by hand (for example, those containing '[...]')
5. If a DeepL Pro translation has less than 50% of the original's character count, the translation is revised
6. Texts of 1,500 characters or more are translated via the API instead
7. Proper nouns are preserved as much as possible
- More than 95% of translation errors are believed to have been fixed.
- The translation work took about 144 hours in total. (72h/72h; translation/post-processing)
## Introduction
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is comprised of the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
| Dataset Name | License Type |
|--------------------------------------------------------------|--------------|
| [PRM800K](https://github.com/openai/prm800k) | MIT |
| [ScienceQA](https://github.com/lupantech/ScienceQA) | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| [SciBench](https://github.com/mandyyyyii/scibench) | MIT |
| [ReClor](https://whyu.me/reclor/) | Non-commercial |
| [TheoremQA](https://huggingface.co/datasets/wenhu/TheoremQA) | MIT |
| [`nuprl/leetcode-solutions-python-testgen-gpt4`](https://huggingface.co/datasets/nuprl/leetcode-solutions-python-testgen-gpt4/viewer/nuprl--leetcode-solutions-python-testgen-gpt4/train?p=1) | None listed |
| [`jondurbin/airoboros-gpt4-1.4.1`](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) | other |
| [`TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k`](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k/viewer/TigerResearch--tigerbot-kaggle-leetcodesolutions-en-2k/train?p=2) | apache-2.0 |
| [openbookQA](https://huggingface.co/datasets/openbookqa/viewer/additional/train?row=35) | apache-2.0 |
| [ARB](https://arb.duckai.org) | MIT |
| [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | apache-2.0 |
## Data Contamination Check
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
## Model Info
Please see models at [`garage-bAInd`](https://huggingface.co/garage-bAInd).
## Training and filtering code
Please see the [Platypus GitHub repo](https://github.com/arielnlee/Platypus).
## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@article{lightman2023lets,
title={Let's Verify Step by Step},
author={Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl},
journal={preprint arXiv:2305.20050},
year={2023}
}
```
```bibtex
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Ashwin Kalyan},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
```bibtex
@misc{wang2023scibench,
title={SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models},
author={Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun R. Loomba and Shichang Zhang and Yizhou Sun and Wei Wang},
year={2023},
eprint={2307.10635},
archivePrefix={arXiv}
}
```
```bibtex
@inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
}
```
```bibtex
@article{chen2023theoremqa,
title={TheoremQA: A Theorem-driven Question Answering dataset},
author={Chen, Wenhu and Yin, Ming and Ku, Max and Wan, Elaine and Ma, Xueguang and Xu, Jianyu and Xia, Tony and Wang, Xinyi and Lu, Pan},
journal={preprint arXiv:2305.12524},
year={2023}
}
```
```bibtex
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
```bibtex
@misc{sawada2023arb,
title={ARB: Advanced Reasoning Benchmark for Large Language Models},
author={Tomohiro Sawada and Daniel Paleka and Alexander Havrilla and Pranav Tadepalli and Paula Vidas and Alexander Kranias and John J. Nay and Kshitij Gupta and Aran Komatsuzaki},
eprint={2307.13692},
archivePrefix={arXiv},
year={2023}
}
``` |
ddrg/super_eurlex | 2023-09-05T15:48:37.000Z | [
"license:mit",
"region:us"
] | ddrg | Super-EURLEX dataset containing legal documents in multiple languages.
The datasets were built/scraped from the EURLEX website [https://eur-lex.europa.eu/homepage.html],
with one split per language and sector, because the available features (metadata) differ for each
sector. Each sample contains the content of a full legal document in up to 3 different
formats: raw HTML and cleaned HTML (if the HTML format was available on the EURLEX website
during scraping), plus cleaned text.
The cleaned text should be available for every sample and was extracted from HTML or PDF.
'Cleaned' HTML here means minor cleaning done to preserve, to a large extent, the necessary
HTML information like table structures while removing unnecessary complexity that was introduced into
the original documents by practices like writing each sentence into a new object.
Additionally, each sample contains metadata that was scraped on the fly, which implies
two things. First, not every sector contains the same metadata. Second, most metadata might be
irrelevant for most use cases.
In our view, the most interesting metadata is the celex-id, which is used to identify the legal
document at hand but also encodes a lot of information about the document
(see [https://eur-lex.europa.eu/content/tools/eur-lex-celex-infographic-A3.pdf]), as well as the eurovoc
concepts, which are labels that describe the content of the documents.
Eurovoc concepts are, for example, only available for sectors 1, 2, 3, 4, 5, 6, 9, C, and E.
The naming of most metadata is kept as it was on the EURLEX website, except for converting
it to lower case and replacing whitespace with '_'. | null | 0 | 957 | ---
license: mit
---
| |
PygmalionAI/PIPPA | 2023-09-07T03:07:55.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"not-for-all-audiences",
"conversational",
"roleplay",
"custom-format",
"a.",
"arxiv:2308.05884",
"region:us"
] | PygmalionAI | Personal Interaction Pairs between People and AI (PIPPA) is a partially synthetic, community contributed and open-source conversational and roleplaying dataset generated from a subset of submitted logs to the Pygmalion project. | @misc{gosling2023pippa,
title={PIPPA: A Partially Synthetic Conversational Dataset},
author={Tear Gosling and Alpin Dale and Yinhe Zheng},
year={2023},
eprint={2308.05884},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 96 | 953 | ---
license: apache-2.0
task_categories:
- conversational
language:
- en
tags:
- not-for-all-audiences
- conversational
- roleplay
- custom-format
- a.
pretty_name: PIPPA - Personal Interaction Pairs Between People and AI
size_categories:
- 10K<n<100K
viewer: false
---
# PIPPA - Personal Interaction Pairs between People and AI
It's been a long time coming, but we're proud to finally release the public portion of our conversational dataset to the public. **Personal Interaction Pairs between People and AI** (**PIPPA**) is a partially synthetic, community contributed and open-source conversational and roleplaying dataset generated from a subset of submitted logs to the Pygmalion project.
This dataset is a subset of what we have received - it consists only of the valid conversational logs in which the submitter gave consent to redistribute to the public. Furthermore, we have done our best to redact or modify any personal information that could potentially be found within PIPPA. If you have found something within PIPPA which has not been redacted properly, please contact us via email at `teargosling@pygmalion.chat` or `alpindale@pygmalion.chat` and we'll take care of it for you. You may contact us for any other purpose as well, including yelling at us about when the next model will be released.
**⚠️ CAUTION: PIPPA contains conversations, themes and scenarios which can be considered "not safe for work" (NSFW) and/or heavily disturbing in nature. Models trained purely with PIPPA may have the tendency to generate X-rated output. You have been warned.**
## Dataset Summary
PIPPA consists of just a little more than 1 million lines of dialogue spread out over 26,000 conversations between users of the popular chatbot website "Character.AI" and its large language model, obtained through a large community effort taking place over the course of several months. Tallying shows that over 1,000 unique personas simulating both real and fictional characters are represented within the dataset, allowing PIPPA and LLMs fine-tuned on it to adapt to many different roleplay domains.
The dataset is represented with a JSONL file, with a singular JSON snippet representing one entire conversation. Every snippet contains the following pieces of data:
- `submission_timestamp`: The Unix timestamp of when this particular conversation was submitted to the project, in milliseconds.
- `categories`: The categories assigned to the character on the Character.AI website, if any were assigned. If no categories were assigned, it will be `null`.
- `bot_id`: The unique ID assigned to the specific character which the user was conversing with on the website.
- `bot_name`: The name of the character.
- `bot_greeting`: The introductory line of the character to the user. This is always the first utterance of dialogue in a conversation.
- `bot_definitions`: Contains whatever was typed in the **Definitions** field in the character creator on the website. This usually consists of one or more example conversations between the user and the character designed to steer the model towards emulating the persona correctly. Bot definitions required a separate effort to gather, and thus may not be present for a specific persona - if this is the case, an empty string is provided. Because the definitions were written on Character.AI, this field usually follows Character.AI's unique formatting and should be preprocessed before feeding into any model - please see **Appendix A** of the paper for further details.
- `bot_description`: Contains whatever was typed in the **Description** field in the character creator on the website. It usually consists of a few sentences which gives a brief overview of the character and any important details about them.
- `conversation`: The conversation between the user and the model. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing two key-value pairs: `message`, referring to the utterance itself and `is_human`, which designates whether the dialogue was generated by the user or the LLM.
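The fields above can be read straight out of each JSONL line with the standard library. The record below is a minimal, hypothetical example constructed from the field names documented on this card, not an actual dataset entry:

```python
import json

# A hypothetical PIPPA-style record; field names follow the card above.
line = json.dumps({
    "submission_timestamp": 1690000000000,
    "categories": ["Fantasy"],
    "bot_id": "abc123",
    "bot_name": "Example Bot",
    "bot_greeting": "Hello, traveler!",
    "bot_definitions": "",
    "bot_description": "A friendly example character.",
    "conversation": [
        {"message": "Hello, traveler!", "is_human": False},
        {"message": "Hi there!", "is_human": True},
    ],
})

record = json.loads(line)
# The greeting is always the first utterance of the conversation.
first = record["conversation"][0]
assert first["message"] == record["bot_greeting"]
assert not first["is_human"]
# Count the user-authored turns via the `is_human` flag.
user_turns = sum(1 for turn in record["conversation"] if turn["is_human"])
print(user_turns)
```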
For further information about PIPPA, please refer to our [published paper](https://arxiv.org/abs/2308.05884) or contact us at the emails listed above.
## Files
We publish PIPPA in multiple variants, each a singular JSONL file:
- **pippa.jsonl**: The original dataset, almost exactly as submitted to us (barring any modifications resulting from the redaction of personally identifiable information).
- **pippa_deduped.jsonl**: The 'cleaned' version of PIPPA, with duplicate conversations as well as any conversation with fewer than three turns removed from the dataset. **We recommend using this file.**
- **pippa_metharme.jsonl**: A version of deduped PIPPA which is formatted in a similar way to our [Metharme instructional models](https://huggingface.co/PygmalionAI/metharme-13b), useful as an example to demonstrate how to properly format the PIPPA dataset.
If you are using HuggingFace's `datasets` library, you can choose the file you wish to use by specifying the name of it (without extension) as an argument, like so: `dataset = load_dataset("PygmalionAI/PIPPA", 'pippa_deduped')`. The default value is `pippa_deduped`.
Thank you for your patience, everyone!
## Citation
If you're using our dataset, please consider citing our work:
```bibtex
@misc{gosling2023pippa,
title={PIPPA: A Partially Synthetic Conversational Dataset},
author={Tear Gosling and Alpin Dale and Yinhe Zheng},
year={2023},
eprint={2308.05884},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
___
Any relationship between the name of this dataset and any public personas is entirely coincidental. |
miracl/miracl-corpus | 2023-01-05T17:28:26.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"language:bn",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:hi",
"language:id",
"language:ja",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"language:th",
"language:zh",
"license:apache-2.0",
"arxiv:2210.09984",
"region:us"
] | miracl | null | null | null | 12 | 950 | ---
annotations_creators:
- expert-generated
language:
- ar
- bn
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- zh
multilinguality:
- multilingual
pretty_name: MIRACL-corpus
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Dataset Card for MIRACL Corpus
## Dataset Description
* **Homepage:** http://miracl.ai
* **Repository:** https://github.com/project-miracl/miracl
* **Paper:** https://arxiv.org/abs/2210.09984
MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
This dataset contains the collection data of the 16 "known languages". The remaining 2 "surprise languages" will not be released until later.
The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Dataset Structure
Each retrieval unit contains three fields: `docid`, `title`, and `text`. Consider an example from the English corpus:
```
{
"docid": "39#0",
"title": "Albedo",
"text": "Albedo (meaning 'whiteness') is the measure of the diffuse reflection of solar radiation out of the total solar radiation received by an astronomical body (e.g. a planet like Earth). It is dimensionless and measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation)."
}
```
The `docid` has the schema `X#Y`, where all passages with the same `X` come from the same Wikipedia article, whereas `Y` denotes the passage within that article, numbered sequentially. The text field contains the text of the passage. The title field contains the name of the article the passage comes from.
The collection can be loaded using:
```
import datasets

lang = 'ar'  # or any of the 16 languages
miracl_corpus = datasets.load_dataset('miracl/miracl-corpus', lang)['train']
for doc in miracl_corpus:
docid = doc['docid']
title = doc['title']
text = doc['text']
```
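The `docid` schema described above (`X#Y`, article ID plus sequential passage number) can be split back apart with a small helper. This is a sketch, not part of the official tooling:

```python
def parse_docid(docid: str) -> tuple[str, int]:
    """Split a MIRACL docid of the form 'X#Y' into (article_id, passage_index)."""
    article_id, passage_no = docid.rsplit("#", 1)
    return article_id, int(passage_no)

# Using the English corpus example above: article 39, first passage.
article, passage = parse_docid("39#0")
print(article, passage)  # -> 39 0
```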
## Dataset Statistics and Links
The following table contains the number of passages and Wikipedia articles in the collection of each language, along with the links to the datasets and raw Wikipedia dumps.
| Language | # of Passages | # of Articles | Links | Raw Wiki Dump |
|:----------------|--------------:|--------------:|:------|:------|
| Arabic (ar) | 2,061,414 | 656,982 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ar) | [🌏](https://archive.org/download/arwiki-20190201/arwiki-20190201-pages-articles-multistream.xml.bz2)
| Bengali (bn) | 297,265 | 63,762 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-bn) | [🌏](https://archive.org/download/bnwiki-20190201/bnwiki-20190201-pages-articles-multistream.xml.bz2)
| English (en) | 32,893,221 | 5,758,285 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-en) | [🌏](https://archive.org/download/enwiki-20190201/enwiki-20190201-pages-articles-multistream.xml.bz2)
| Spanish (es) | 10,373,953 | 1,669,181 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-es) | [🌏](https://archive.org/download/eswiki-20220301/eswiki-20220301-pages-articles-multistream.xml.bz2)
| Persian (fa) | 2,207,172 | 857,827 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fa) | [🌏](https://archive.org/download/fawiki-20220301/fawiki-20220301-pages-articles-multistream.xml.bz2)
| Finnish (fi) | 1,883,509 | 447,815 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fi) | [🌏](https://archive.org/download/fiwiki-20190201/fiwiki-20190201-pages-articles-multistream.xml.bz2)
| French (fr) | 14,636,953 | 2,325,608 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fr) | [🌏](https://archive.org/download/frwiki-20220301/frwiki-20220301-pages-articles-multistream.xml.bz2)
| Hindi (hi) | 506,264 | 148,107 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-hi) | [🌏](https://archive.org/download/hiwiki-20220301/hiwiki-20220301-pages-articles-multistream.xml.bz2)
| Indonesian (id) | 1,446,315 | 446,330 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-id) | [🌏](https://archive.org/download/idwiki-20190201/idwiki-20190201-pages-articles-multistream.xml.bz2)
| Japanese (ja) | 6,953,614 | 1,133,444 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ja) | [🌏](https://archive.org/download/jawiki-20190201/jawiki-20190201-pages-articles-multistream.xml.bz2)
| Korean (ko) | 1,486,752 | 437,373 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ko) | [🌏](https://archive.org/download/kowiki-20190201/kowiki-20190201-pages-articles-multistream.xml.bz2)
| Russian (ru) | 9,543,918 | 1,476,045 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ru) | [🌏](https://archive.org/download/ruwiki-20190201/ruwiki-20190201-pages-articles-multistream.xml.bz2)
| Swahili (sw) | 131,924 | 47,793 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-sw) | [🌏](https://archive.org/download/swwiki-20190201/swwiki-20190201-pages-articles-multistream.xml.bz2)
| Telugu (te) | 518,079 | 66,353 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-te) | [🌏](https://archive.org/download/tewiki-20190201/tewiki-20190201-pages-articles-multistream.xml.bz2)
| Thai (th) | 542,166 | 128,179 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-th) | [🌏](https://archive.org/download/thwiki-20190101/thwiki-20190101-pages-articles-multistream.xml.bz2)
| Chinese (zh) | 4,934,368 | 1,246,389 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-zh) | [🌏](https://archive.org/download/zhwiki-20220301/zhwiki-20220301-pages-articles-multistream.xml.bz2)
|
nlphuji/flickr_1k_test_image_text_retrieval | 2023-01-14T19:54:08.000Z | [
"region:us"
] | nlphuji | null | null | null | 0 | 950 | # Flickr30k (1K test set)
Original paper: [From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions](https://aclanthology.org/Q14-1006)
Homepage: https://shannon.cs.illinois.edu/DenotationGraph/
1K test set split from: http://cs.stanford.edu/people/karpathy/deepimagesent/caption_datasets.zip
Bibtex:
```
@article{young2014image,
title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions},
author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia},
journal={Transactions of the Association for Computational Linguistics},
volume={2},
pages={67--78},
year={2014},
publisher={MIT Press}
}
``` |
wmt18 | 2023-04-05T13:44:00.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:extended|europarl_bilingual",
"source_datasets:extended|news_commentary",
"source_datasets:extended|opus_paracrawl",
"source_datasets:extended|setimes",
"source_datasets:extended|un_multi",
"language:cs",
"language:de",
"language:en",
"language:et",
"language:fi",
"language:kk",
"language:ru",
"language:tr",
"language:zh",
"license:unknown",
"region:us"
] | null | null | @InProceedings{bojar-EtAl:2018:WMT1,
author = {Bojar, Ond\v{r}ej and Federmann, Christian and Fishel, Mark
and Graham, Yvette and Haddow, Barry and Huck, Matthias and
Koehn, Philipp and Monz, Christof},
title = {Findings of the 2018 Conference on Machine Translation (WMT18)},
booktitle = {Proceedings of the Third Conference on Machine Translation,
Volume 2: Shared Task Papers},
month = {October},
year = {2018},
address = {Belgium, Brussels},
publisher = {Association for Computational Linguistics},
pages = {272--307},
url = {http://www.aclweb.org/anthology/W18-6401}
} | null | 3 | 943 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- cs
- de
- en
- et
- fi
- kk
- ru
- tr
- zh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10M<n<100M
source_datasets:
- extended|europarl_bilingual
- extended|news_commentary
- extended|opus_paracrawl
- extended|setimes
- extended|un_multi
task_categories:
- translation
task_ids: []
pretty_name: WMT18
paperswithcode_id: wmt-2018
dataset_info:
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 1461016186
num_examples: 11046024
- name: validation
num_bytes: 674430
num_examples: 3005
- name: test
num_bytes: 696229
num_examples: 2983
download_size: 2030359086
dataset_size: 1462386845
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 8187552108
num_examples: 42271874
- name: validation
num_bytes: 729519
num_examples: 3004
- name: test
num_bytes: 757649
num_examples: 2998
download_size: 3808612335
dataset_size: 8189039276
- config_name: et-en
features:
- name: translation
dtype:
translation:
languages:
- et
- en
splits:
- name: train
num_bytes: 647992667
num_examples: 2175873
- name: validation
num_bytes: 459398
num_examples: 2000
- name: test
num_bytes: 489394
num_examples: 2000
download_size: 524534404
dataset_size: 648941459
- config_name: fi-en
features:
- name: translation
dtype:
translation:
languages:
- fi
- en
splits:
- name: train
num_bytes: 857171881
num_examples: 3280600
- name: validation
num_bytes: 1388828
num_examples: 6004
- name: test
num_bytes: 691841
num_examples: 3000
download_size: 491874780
dataset_size: 859252550
- config_name: kk-en
features:
- name: translation
dtype:
translation:
languages:
- kk
- en
splits:
- name: train
- name: validation
- name: test
download_size: 0
dataset_size: 0
- config_name: ru-en
features:
- name: translation
dtype:
translation:
languages:
- ru
- en
splits:
- name: train
num_bytes: 13665367647
num_examples: 36858512
- name: validation
num_bytes: 1040195
num_examples: 3001
- name: test
num_bytes: 1085596
num_examples: 3000
download_size: 4195144356
dataset_size: 13667493438
- config_name: tr-en
features:
- name: translation
dtype:
translation:
languages:
- tr
- en
splits:
- name: train
num_bytes: 60416617
num_examples: 205756
- name: validation
num_bytes: 752773
num_examples: 3007
- name: test
num_bytes: 770313
num_examples: 3000
download_size: 62263061
dataset_size: 61939703
- config_name: zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 5536169801
num_examples: 25160346
- name: validation
num_bytes: 540347
num_examples: 2001
- name: test
num_bytes: 1107522
num_examples: 3981
download_size: 2259428767
dataset_size: 5537817670
---
# Dataset Card for "wmt18"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.statmt.org/wmt18/translation-task.html](http://www.statmt.org/wmt18/translation-task.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.03 GB
- **Size of the generated dataset:** 1.46 GB
- **Total amount of disk used:** 3.49 GB
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Warning:</b> There are issues with the Common Crawl corpus data (<a href="https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz">training-parallel-commoncrawl.tgz</a>):</p>
<ul>
<li>Non-English files contain many English sentences.</li>
<li>Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.</li>
</ul>
<p>We have contacted the WMT organizers.</p>
</div>
Translation dataset based on the data from statmt.org.
Versions exist for different years using a combination of data
sources. The base `wmt` allows you to create a custom dataset by choosing
your own data/language pair. This can be done as follows:
```python
from datasets import inspect_dataset, load_dataset_builder
inspect_dataset("wmt18", "path/to/scripts")
builder = load_dataset_builder(
"path/to/scripts/wmt_utils.py",
language_pair=("fr", "de"),
subsets={
datasets.Split.TRAIN: ["commoncrawl_frde"],
datasets.Split.VALIDATION: ["euelections_dev2019"],
},
)
# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()
# Streamable version
ds = builder.as_streaming_dataset()
```
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### cs-en
- **Size of downloaded dataset files:** 2.03 GB
- **Size of the generated dataset:** 1.46 GB
- **Total amount of disk used:** 3.49 GB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### cs-en
- `translation`: a multilingual `string` variable, with possible languages including `cs`, `en`.
### Data Splits
|name | train |validation|test|
|-----|-------:|---------:|---:|
|cs-en|11046024| 3005|2983|
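Each row's `translation` field is a dictionary keyed by language code, as described above. The row below is a hypothetical illustration of that layout, not an actual dataset entry:

```python
# Hypothetical cs-en row mimicking the `translation` field layout.
example = {"translation": {"cs": "Ahoj světe", "en": "Hello world"}}

# Source and target sides are accessed by language code.
src = example["translation"]["cs"]
tgt = example["translation"]["en"]
print(len(example["translation"]))  # two languages per pair
```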
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{bojar-EtAl:2018:WMT1,
  author    = {Bojar, Ond\v{r}ej and Federmann, Christian and Fishel, Mark
and Graham, Yvette and Haddow, Barry and Huck, Matthias and
Koehn, Philipp and Monz, Christof},
title = {Findings of the 2018 Conference on Machine Translation (WMT18)},
booktitle = {Proceedings of the Third Conference on Machine Translation,
Volume 2: Shared Task Papers},
month = {October},
year = {2018},
address = {Belgium, Brussels},
publisher = {Association for Computational Linguistics},
pages = {272--307},
url = {http://www.aclweb.org/anthology/W18-6401}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
SetFit/sst2 | 2021-12-25T06:16:15.000Z | [
"region:us"
] | SetFit | null | null | null | 3 | 943 | # Stanford Sentiment Treebank - Binary
[Stanford Sentiment Treebank](http://nlp.stanford.edu/sentiment/) with 2 labels: negative, positive
Splits are from:
[https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data](https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data)
Training data is on sentence level, not on phrase level!
|
news_commentary | 2022-11-03T16:47:41.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:ja",
"language:nl",
"language:pt",
"language:ru",
"language:zh",
"license:unknown",
"region:us"
] | null | A parallel corpus of News Commentaries provided by WMT for training SMT. The source is taken from CASMACAT: http://www.casmacat.eu/corpus/news-commentary.html
12 languages, 63 bitexts
total number of files: 61,928
total number of tokens: 49.66M
total number of sentence fragments: 1.93M | @InProceedings{TIEDEMANN12.463,
  author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 21 | 939 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- cs
- de
- en
- es
- fr
- it
- ja
- nl
- pt
- ru
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: NewsCommentary
dataset_info:
- config_name: ar-cs
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- cs
splits:
- name: train
num_bytes: 51546460
num_examples: 52128
download_size: 16242918
dataset_size: 51546460
- config_name: ar-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- de
splits:
- name: train
num_bytes: 69681419
num_examples: 68916
download_size: 21446768
dataset_size: 69681419
- config_name: cs-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- de
splits:
- name: train
num_bytes: 57470799
num_examples: 172706
download_size: 21623462
dataset_size: 57470799
- config_name: ar-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 80655273
num_examples: 83187
download_size: 24714354
dataset_size: 80655273
- config_name: cs-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 54487874
num_examples: 177278
download_size: 20636368
dataset_size: 54487874
- config_name: de-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 73085451
num_examples: 223153
download_size: 26694093
dataset_size: 73085451
- config_name: ar-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 79255985
num_examples: 78074
download_size: 24027435
dataset_size: 79255985
- config_name: cs-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- es
splits:
- name: train
num_bytes: 56794825
num_examples: 170489
download_size: 20994380
dataset_size: 56794825
- config_name: de-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 74708740
num_examples: 209839
download_size: 26653320
dataset_size: 74708740
- config_name: en-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 78600789
num_examples: 238872
download_size: 28106064
dataset_size: 78600789
- config_name: ar-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 71035061
num_examples: 69157
download_size: 21465481
dataset_size: 71035061
- config_name: cs-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- fr
splits:
- name: train
num_bytes: 50364837
num_examples: 148578
download_size: 18483528
dataset_size: 50364837
- config_name: de-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 67083899
num_examples: 185442
download_size: 23779967
dataset_size: 67083899
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 70340014
num_examples: 209479
download_size: 24982452
dataset_size: 70340014
- config_name: es-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 71025933
num_examples: 195241
download_size: 24693126
dataset_size: 71025933
- config_name: ar-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- it
splits:
- name: train
num_bytes: 17413450
num_examples: 17227
download_size: 5186438
dataset_size: 17413450
- config_name: cs-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- it
splits:
- name: train
num_bytes: 10441845
num_examples: 30547
download_size: 3813656
dataset_size: 10441845
- config_name: de-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- it
splits:
- name: train
num_bytes: 13993454
num_examples: 38961
download_size: 4933419
dataset_size: 13993454
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 14213972
num_examples: 40009
download_size: 4960768
dataset_size: 14213972
- config_name: es-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 15139636
num_examples: 41497
download_size: 5215173
dataset_size: 15139636
- config_name: fr-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- it
splits:
- name: train
num_bytes: 14216079
num_examples: 38485
download_size: 4867267
dataset_size: 14216079
- config_name: ar-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- ja
splits:
- name: train
num_bytes: 661992
num_examples: 569
download_size: 206664
dataset_size: 661992
- config_name: cs-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- ja
splits:
- name: train
num_bytes: 487902
num_examples: 622
download_size: 184374
dataset_size: 487902
- config_name: de-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ja
splits:
- name: train
num_bytes: 465575
num_examples: 582
download_size: 171371
dataset_size: 465575
- config_name: en-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 485484
num_examples: 637
download_size: 178451
dataset_size: 485484
- config_name: es-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ja
splits:
- name: train
num_bytes: 484463
num_examples: 602
download_size: 175281
dataset_size: 484463
- config_name: fr-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ja
splits:
- name: train
num_bytes: 418188
num_examples: 519
download_size: 151400
dataset_size: 418188
- config_name: ar-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- nl
splits:
- name: train
num_bytes: 9054134
num_examples: 9047
download_size: 2765542
dataset_size: 9054134
- config_name: cs-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- nl
splits:
- name: train
num_bytes: 5860976
num_examples: 17358
download_size: 2174494
dataset_size: 5860976
- config_name: de-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: train
num_bytes: 7645565
num_examples: 21439
download_size: 2757414
dataset_size: 7645565
- config_name: en-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 7316599
num_examples: 19399
download_size: 2575916
dataset_size: 7316599
- config_name: es-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- nl
splits:
- name: train
num_bytes: 7560123
num_examples: 21012
download_size: 2674557
dataset_size: 7560123
- config_name: fr-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 7603503
num_examples: 20898
download_size: 2659946
dataset_size: 7603503
- config_name: it-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 5380912
num_examples: 15428
download_size: 1899094
dataset_size: 5380912
- config_name: ar-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- pt
splits:
- name: train
num_bytes: 11340074
num_examples: 11433
download_size: 3504173
dataset_size: 11340074
- config_name: cs-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- pt
splits:
- name: train
num_bytes: 6183725
num_examples: 18356
download_size: 2310039
dataset_size: 6183725
- config_name: de-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- pt
splits:
- name: train
num_bytes: 7699083
num_examples: 21884
download_size: 2794173
dataset_size: 7699083
- config_name: en-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 9238819
num_examples: 25929
download_size: 3310748
dataset_size: 9238819
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 9195685
num_examples: 25551
download_size: 3278814
dataset_size: 9195685
- config_name: fr-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: train
num_bytes: 9261169
num_examples: 25642
download_size: 3254925
dataset_size: 9261169
- config_name: it-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: train
num_bytes: 3988570
num_examples: 11407
download_size: 1397344
dataset_size: 3988570
- config_name: nl-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- pt
splits:
- name: train
num_bytes: 3612339
num_examples: 10598
download_size: 1290715
dataset_size: 3612339
- config_name: ar-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 105804303
num_examples: 84455
download_size: 28643600
dataset_size: 105804303
- config_name: cs-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- ru
splits:
- name: train
num_bytes: 71185695
num_examples: 161133
download_size: 21917168
dataset_size: 71185695
- config_name: de-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 81812014
num_examples: 175905
download_size: 24610973
dataset_size: 81812014
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 83282480
num_examples: 190104
download_size: 24849511
dataset_size: 83282480
- config_name: es-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 84345850
num_examples: 180217
download_size: 24883942
dataset_size: 84345850
- config_name: fr-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 75967253
num_examples: 160740
download_size: 22385777
dataset_size: 75967253
- config_name: it-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ru
splits:
- name: train
num_bytes: 12915073
num_examples: 27267
download_size: 3781318
dataset_size: 12915073
- config_name: ja-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ja
- ru
splits:
- name: train
num_bytes: 596166
num_examples: 586
download_size: 184791
dataset_size: 596166
- config_name: nl-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- ru
splits:
- name: train
num_bytes: 8933805
num_examples: 19112
download_size: 2662250
dataset_size: 8933805
- config_name: pt-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- pt
- ru
splits:
- name: train
num_bytes: 8645475
num_examples: 18458
download_size: 2584012
dataset_size: 8645475
- config_name: ar-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 65483204
num_examples: 66021
download_size: 21625859
dataset_size: 65483204
- config_name: cs-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- zh
splits:
- name: train
num_bytes: 29971192
num_examples: 45424
download_size: 12495392
dataset_size: 29971192
- config_name: de-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- zh
splits:
- name: train
num_bytes: 39044704
num_examples: 59020
download_size: 15773631
dataset_size: 39044704
- config_name: en-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44596087
num_examples: 69206
download_size: 18101984
dataset_size: 44596087
- config_name: es-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 43940013
num_examples: 65424
download_size: 17424938
dataset_size: 43940013
- config_name: fr-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 40144071
num_examples: 59060
download_size: 15817862
dataset_size: 40144071
- config_name: it-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- zh
splits:
- name: train
num_bytes: 9676756
num_examples: 14652
download_size: 3799012
dataset_size: 9676756
- config_name: ja-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ja
- zh
splits:
- name: train
num_bytes: 462685
num_examples: 570
download_size: 181924
dataset_size: 462685
- config_name: nl-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- zh
splits:
- name: train
num_bytes: 5509070
num_examples: 8433
download_size: 2218937
dataset_size: 5509070
- config_name: pt-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- pt
- zh
splits:
- name: train
num_bytes: 7152774
num_examples: 10873
download_size: 2889296
dataset_size: 7152774
- config_name: ru-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 43112824
num_examples: 47687
download_size: 14225498
dataset_size: 43112824
---
# Dataset Card for NewsCommentary
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/News-Commentary.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
SetFit/enron_spam | 2022-01-16T18:12:43.000Z | [
"region:us"
] | SetFit | null | null | null | 7 | 939 | This is a version of the [Enron Spam Email Dataset](https://github.com/MWiechmann/enron_spam_data), containing emails (subject + message) and a label indicating whether each email is spam or ham. |
Matthijs/snacks | 2022-04-12T14:26:59.000Z | [
"task_categories:image-classification",
"license:cc-by-4.0",
"region:us"
] | Matthijs | null | @article{OpenImages2,
title={OpenImages: A public dataset for large-scale multi-label and multi-class image classification.},
author={Krasin, Ivan and Duerig, Tom and Alldrin, Neil and Ferrari, Vittorio and Abu-El-Haija, Sami and Kuznetsova, Alina and Rom, Hassan and Uijlings, Jasper and Popov, Stefan and Kamali, Shahab and Malloci, Matteo and Pont-Tuset, Jordi and Veit, Andreas and Belongie, Serge and Gomes, Victor and Gupta, Abhinav and Sun, Chen and Chechik, Gal and Cai, David and Feng, Zheyun and Narayanan, Dhyanesh and Murphy, Kevin},
journal={Dataset available from https://storage.googleapis.com/openimages/web/index.html},
year={2017}
} | null | 6 | 938 | ---
pretty_name: Snacks
task_categories:
- image-classification
- computer-vision
license: cc-by-4.0
---
# Dataset Card for Snacks
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Number of images in the train/validation/test splits:
```nohighlight
train 4838
val 955
test 952
total 6745
```
Total images in each category:
```nohighlight
apple 350
banana 350
cake 349
candy 349
carrot 349
cookie 349
doughnut 350
grape 350
hot dog 350
ice cream 350
juice 350
muffin 348
orange 349
pineapple 340
popcorn 260
pretzel 204
salad 350
strawberry 348
waffle 350
watermelon 350
```
To save space in the download, the images were resized so that their smallest side is 256 pixels. All EXIF information was removed.
### Data Splits
Train, Test, Validation
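As a quick sanity check, the split sizes listed above can be totalled and turned into proportions. This is a minimal sketch: the counts are taken directly from the tables in this card, and the commented-out `load_dataset` call is only a hint (the exact split names on the Hub may differ from the `train`/`val`/`test` labels used here).

```python
# Split sizes as documented in this card.
splits = {"train": 4838, "val": 955, "test": 952}

total = sum(splits.values())
assert total == 6745  # matches the "total" row above

for name, count in splits.items():
    print(f"{name}: {count} images ({count / total:.1%})")

# To load the actual images (requires the `datasets` library and network
# access; split names on the Hub may differ from those used above):
# from datasets import load_dataset
# ds = load_dataset("Matthijs/snacks")
```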
## Licensing Information
Just like the images from Google Open Images, the Snacks dataset is licensed under Creative Commons terms.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
The **credits.csv** file contains the original URL, author information and license for each image.
|
Multimodal-Fatima/VizWiz | 2023-03-07T01:26:12.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 1 | 935 | Entry not found |
oscar-corpus/OSCAR-2201 | 2023-05-30T07:48:15.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:sq",
"language:am",
"language:ar",
"language:an",
"language:hy",
"language:as",
"language:ast",
"language:av",
"language:az",
"language:bn",
"language:ba",
"language:eu",
"language:be",
"language:bh",
"language:bpy",
"language:bs",
"language:br",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:ckb",
"language:ce",
"language:zh",
"language:cv",
"language:kw",
"language:hr",
"language:cs",
"language:da",
"language:diq",
"language:dv",
"language:nl",
"language:mhr",
"language:arz",
"language:en",
"language:eo",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:gl",
"language:ka",
"language:de",
"language:gom",
"language:el",
"language:gn",
"language:gu",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:io",
"language:ilo",
"language:id",
"language:ia",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:xal",
"language:kn",
"language:krc",
"language:kk",
"language:km",
"language:kv",
"language:ko",
"language:ku",
"language:ky",
"language:lo",
"language:la",
"language:lv",
"language:lez",
"language:li",
"language:lt",
"language:jbo",
"language:lmo",
"language:nds",
"language:dsb",
"language:lb",
"language:mk",
"language:mai",
"language:mg",
"language:ms",
"language:ml",
"language:mt",
"language:mr",
"language:mzn",
"language:min",
"language:xmf",
"language:mn",
"language:nah",
"language:ne",
"language:new",
"language:no",
"language:nn",
"language:oc",
"language:or",
"language:os",
"language:ps",
"language:fa",
"language:pms",
"language:pl",
"language:pt",
"language:pa",
"language:qu",
"language:ro",
"language:bxr",
"language:ru",
"language:sah",
"language:sa",
"language:gd",
"language:sr",
"language:sh",
"language:scn",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:azb",
"language:es",
"language:su",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:tt",
"language:te",
"language:th",
"language:bo",
"language:als",
"language:tr",
"language:tk",
"language:uk",
"language:eml",
"language:hsb",
"language:ur",
"language:ug",
"language:uz",
"language:vi",
"language:vo",
"language:wa",
"language:war",
"language:cy",
"language:fy",
"language:mrj",
"language:pnb",
"language:wuu",
"language:yi",
"language:yo",
"language:mul",
"license:cc0-1.0",
"arxiv:2010.14571",
"arxiv:2201.06642",
"arxiv:2103.12028",
"region:us"
] | oscar-corpus | The Open Super-large Crawled Aggregated coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture.\ |
@ARTICLE{2022arXiv220106642A,
author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = jan,
eid = {arXiv:2201.06642},
pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
eprint = {2201.06642},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@article{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | null | 72 | 934 | ---
pretty_name: OSCAR
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- sq
- am
- ar
- an
- hy
- as
- ast
- av
- az
- bn
- ba
- eu
- be
- bh
- bpy
- bs
- br
- bg
- my
- ca
- ceb
- ckb
- ce
- zh
- cv
- kw
- hr
- cs
- da
- diq
- dv
- nl
- mhr
- arz
- en
- eo
- et
- tl
- fi
- fr
- gl
- ka
- de
- gom
- el
- gn
- gu
- he
- hi
- hu
- is
- io
- ilo
- id
- ia
- ga
- it
- ja
- jv
- xal
- kn
- krc
- kk
- km
- kv
- ko
- ku
- ky
- lo
- la
- lv
- lez
- li
- lt
- jbo
- lmo
- nds
- dsb
- lb
- mk
- mai
- mg
- ms
- ml
- mt
- mr
- mzn
- min
- xmf
- mn
- nah
- ne
- new
- no
- nn
- oc
- or
- os
- ps
- fa
- pms
- pl
- pt
- pa
- qu
- ro
- bxr
- ru
- sah
- sa
- gd
- sr
- sh
- scn
- sd
- si
- sk
- sl
- so
- azb
- es
- su
- sw
- sv
- tg
- ta
- tt
- te
- th
- bo
- als
- tr
- tk
- uk
- eml
- hsb
- ur
- ug
- uz
- vi
- vo
- wa
- war
- cy
- fy
- mrj
- pnb
- wuu
- yi
- yo
- mul
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
---
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [github.com/oscar-corpus/corpus](https://github.com/oscar-corpus/corpus)
- **Paper:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://oscar-corpus.com/publication/2022/arxiv/towards/)
- **Point of Contact:** [Contact](https://oscar-corpus.com/#contact)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [ungoliant](https://github.com/oscar-corpus/ungoliant) architecture. Data is distributed by language in both original and deduplicated form.
**We are aware of the virus warnings issue. See discussion [here](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/discussions/12) for more info!**
### Usage
```py
from datasets import load_dataset
dataset = load_dataset("oscar-corpus/OSCAR-2201",
use_auth_token=True, # required
language="ar",
streaming=True, # optional
split="train") # optional, but the dataset only has a train split
for d in dataset:
print(d) # prints documents
```
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines, and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
OSCAR 22.01 may have quality issues on low-size subcorpora, as has been the case in previous releases.
Note that since language identification is performed on whole documents, a given language subcorpus is expected to contain some lines in other languages.
For example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
|Language code|Language|Issues|
|-------------|--------|------|
| | | |
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Data Fields
* `id`: a `int64` feature.
* `content`: `string` Newline-separated content
* `warc_headers`: WARC Headers
* `warc_headers.content-length`: `int64` Content length (in bytes) **before** cleaning
* `warc_headers.content-type`: `string` MIME type
* `warc_headers.warc-block-digest`:`string` Algorithm name and calculated value of a digest applied to the full block of the record
* `warc_headers.warc-date`: `string` Crawl date (YYYY-MM-DDThh:mm:ssZ)
* `warc_headers.warc-identified-content-language`: `string` Comma-separated list of language identifications done by CommonCrawl (uses CLD3)
* `warc_headers.warc-record-id`: `string` Record ID
* `warc_headers.warc-refers-to`: `string` Record-ID of a single record for which the present record holds additional content
* `warc_headers.warc-target-uri`: `string` URI from where the content has been fetched
* `warc_headers.warc-type`: `string` Type of the WARC Record
* `metadata`: Metadata
* `metadata.identification.label`: `string` Language identification of the document
* `metadata.identification.prob`: `float` Confidence of the identification
* `metadata.annotation`: `[string]` Annotations of the document. `null` if none present. (Is `None` if using `datasets`)
* `metadata.sentence_identifications`: `[string]` List of line identifications. `null`/`None` can be present for lines that failed the identification step.
* `meta.offset`: `int64` line offset where the related text begins. Should be used together with `meta.nb_sentences` when reading the source files directly rather than retrieving related data through iterators.
* `text`: `string` content
See the [WARC Format standard](https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#warc-type-mandatory) for more details on the `warc_headers` fields, and our [website](https://oscar-corpus.com/post/oscar-v22-01/) for more details about the format in general.
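As an illustration of how these nested fields fit together, here is a minimal, self-contained sketch of a filter over the documented schema. The sample record is hand-built to match the field list above (it is not a real crawled document); real records come from `load_dataset` as shown in the Usage section.

```python
# Minimal sketch: filter OSCAR 22.01 records using the metadata fields
# documented above. The sample record below is hand-built to match the
# documented schema; it is not a real crawled document.
def keep_record(record, lang="ar", min_prob=0.8):
    """Keep documents confidently identified as `lang` and carrying no annotations."""
    ident = record["metadata"]["identification"]
    annotations = record["metadata"]["annotation"]  # None when no annotations are present
    return (
        ident["label"] == lang
        and ident["prob"] >= min_prob
        and annotations is None
    )

sample = {
    "id": 0,
    "content": "مثال",  # newline-separated document content
    "metadata": {
        "identification": {"label": "ar", "prob": 0.97},
        "annotation": None,
        "sentence_identifications": [{"label": "ar", "prob": 0.95}],
    },
}

print(keep_record(sample))  # True: confidently Arabic, no annotations
```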
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
</details>
## Table
| lang | size | docs | words |
|:----------------------------|:----------|:------------|:----------------|
| _Multilingual_ | 12.1 GB | 1,210,685 | 936,187,711 |
| Afrikaans | 47.0 MB | 12,393 | 6,227,310 |
| Albanian | 3.0 GB | 437,287 | 326,325,149 |
| Alemannic / Swiss German | 363.6 kB | 139 | 37,381 |
| Amharic | 461.0 MB | 37,513 | 30,481,153 |
| Arabic | 84.2 GB | 8,718,929 | 6,103,711,887 |
| Aragonese | 10.6 kB | 12 | 51 |
| Armenian | 4.7 GB | 379,267 | 268,031,270 |
| Assamese | 221.2 MB | 17,084 | 11,109,557 |
| Asturian | 73.6 kB | 77 | 3,919 |
| Avaric | 18.6 kB | 14 | 582 |
| Azerbaijani | 3.5 GB | 491,847 | 291,927,692 |
| Bangla | 15.1 GB | 1,171,501 | 751,877,226 |
| Bashkir | 95.5 MB | 11,198 | 5,418,474 |
| Basque | 1.1 GB | 233,658 | 97,092,942 |
| Belarusian | 1.8 GB | 180,046 | 107,227,860 |
| Bihari languages | 24.2 kB | 27 | 569 |
| Bishnupriya | 2.0 MB | 271 | 98,419 |
| Bosnian | 10.3 kB | 10 | 422 |
| Breton | 33.7 MB | 16,119 | 3,111,619 |
| Bulgarian | 35.1 GB | 2,887,115 | 2,405,981,285 |
| Burmese | 1.9 GB | 158,733 | 44,835,970 |
| Catalan | 13.9 GB | 2,627,307 | 1,508,919,864 |
| Cebuano | 44.6 MB | 5,742 | 5,253,785 |
| Central Kurdish | 716.4 MB | 84,950 | 43,913,025 |
| Chechen | 14.0 MB | 4,086 | 798,766 |
| Chinese | 900.9 GB | 56,524,518 | 23,149,203,886 |
| Chuvash | 41.8 MB | 4,750 | 2,465,782 |
| Cornish | 1.4 kB | 2 | 55 |
| Croatian | 11.2 MB | 11,462 | 505,369 |
| Czech | 58.6 GB | 10,381,916 | 5,452,724,456 |
| Danish | 12.6 GB | 2,265,479 | 1,454,439,292 |
| Dimli (individual language) | 706 Bytes | 1 | 19 |
| Divehi | 217.2 MB | 24,067 | 10,112,205 |
| Dutch | 114.0 GB | 20,206,532 | 12,329,127,151 |
| Eastern Mari | 11.3 MB | 1,612 | 641,525 |
| Egyptian Arabic | 2.8 MB | 1,256 | 176,096 |
| English | 3.2 TB | 431,992,659 | 377,376,402,775 |
| Esperanto | 558.3 MB | 111,932 | 58,416,628 |
| Estonian | 9.2 GB | 1,362,524 | 820,975,443 |
| Filipino | 646.5 MB | 70,394 | 81,881,278 |
| Finnish | 37.8 GB | 4,948,961 | 2,900,615,928 |
| French | 382.2 GB | 52,037,098 | 41,713,990,658 |
| Galician | 255.2 MB | 88,803 | 27,051,212 |
| Georgian | 7.1 GB | 488,588 | 281,430,479 |
| German | 496.7 GB | 70,075,424 | 46,826,676,844 |
| Goan Konkani | 787.2 kB | 46 | 38,831 |
| Greek | 78.3 GB | 6,738,546 | 5,031,242,803 |
| Guarani | 9.0 kB | 10 | 374 |
| Gujarati | 4.8 GB | 136,467 | 301,170,777 |
| Hebrew | 30.3 GB | 3,132,396 | 2,249,377,984 |
| Hindi | 23.3 GB | 1,529,907 | 1,534,799,198 |
| Hungarian | 53.9 GB | 6,866,062 | 4,598,787,907 |
| Icelandic | 2.0 GB | 396,183 | 210,365,124 |
| Ido | 77.3 kB | 105 | 2,690 |
| Iloko | 97.9 kB | 75 | 8,592 |
| Indonesian | 17.4 GB | 2,244,622 | 1,984,195,207 |
| Interlingua | 40.2 kB | 6 | 10,125 |
| Irish | 45.6 MB | 12,233 | 4,877,850 |
| Italian | 229.3 GB | 28,502,092 | 24,294,684,830 |
| Japanese | 258.7 GB | 36,328,931 | 5,592,948,356 |
| Javanese | 152.7 kB | 70 | 10,441 |
| Kalmyk | 9.3 kB | 9 | 250 |
| Kannada | 2.6 GB | 150,850 | 108,450,571 |
| Karachay-Balkar | 119.6 kB | 91 | 4,089 |
| Kazakh | 2.9 GB | 261,085 | 157,267,307 |
| Khmer | 1.9 GB | 121,910 | 30,564,131 |
| Komi | 119.9 kB | 127 | 3,335 |
| Korean | 51.8 GB | 5,881,481 | 3,854,968,649 |
| Kurdish | 150.3 MB | 29,906 | 17,390,759 |
| Kyrgyz | 518.6 MB | 62,244 | 28,028,986 |
| Lao | 337.1 MB | 28,914 | 6,682,982 |
| Latin | 4.1 MB | 4,397 | 187,446 |
| Latvian | 8.2 GB | 1,032,987 | 707,361,898 |
| Lezghian | 375.5 kB | 124 | 19,250 |
| Limburgish | 1.4 kB | 2 | 41 |
| Lithuanian | 20.0 GB | 2,303,070 | 1,712,802,056 |
| Lojban | 1.9 MB | 570 | 260,542 |
| Lombard | 2.6 kB | 2 | 225 |
| Low German | 9.0 MB | 1,938 | 1,012,561 |
| Lower Sorbian | 707 Bytes | 1 | 17 |
| Luxembourgish | 15.8 MB | 5,108 | 1,545,946 |
| Macedonian | 3.6 GB | 341,775 | 244,058,579 |
| Maithili | 21.6 kB | 23 | 483 |
| Malagasy | 57.3 MB | 3,028 | 7,279,056 |
| Malay | 5.3 MB | 5,228 | 217,818 |
| Malayalam | 4.1 GB | 250,972 | 137,831,247 |
| Maltese | 2.5 MB | 2,208 | 118,190 |
| Marathi | 3.3 GB | 250,376 | 160,179,233 |
| Mazanderani | 128.2 kB | 76 | 7,337 |
| Minangkabau | 6.0 MB | 585 | 614,613 |
| Mingrelian | 7.6 MB | 2,550 | 253,333 |
| Mongolian | 2.8 GB | 237,719 | 176,405,432 |
| Nahuatl languages | 8.7 kB | 12 | 179 |
| Nepali | 3.7 GB | 391,947 | 177,885,116 |
| Newari | 5.7 MB | 1,134 | 273,837 |
| Norwegian | 2.8 GB | 973,188 | 279,182,902 |
| Norwegian Nynorsk | 6.8 MB | 5,835 | 459,183 |
| Occitan | 2.1 MB | 373 | 31,061 |
| Odia | 487.9 MB | 52,942 | 23,755,902 |
| Ossetic | 13.9 MB | 3,560 | 800,430 |
| Pashto | 490.3 MB | 50,312 | 46,293,249 |
| Persian | 77.4 GB | 7,665,871 | 6,430,164,396 |
| Piedmontese | 1.7 MB | 698 | 188,270 |
| Polish | 139.0 GB | 19,301,137 | 12,584,498,906 |
| Portuguese | 170.3 GB | 23,735,707 | 18,441,864,893 |
| Punjabi | 1.1 GB | 68,094 | 70,068,604 |
| Quechua | 744 Bytes | 1 | 14 |
| Romanian | 49.2 GB | 4,624,764 | 5,261,803,995 |
| Russia Buriat | 32.9 kB | 39 | 785 |
| Russian | 1.1 TB | 76,060,844 | 62,811,122,663 |
| Sakha | 65.6 MB | 6,284 | 3,473,813 |
| Sanskrit | 136.0 MB | 4,472 | 5,671,369 |
| Scottish Gaelic | 137.7 kB | 136 | 7,769 |
| Serbian | 6.9 GB | 577,472 | 482,932,670 |
| Serbian (Latin) | 931.8 kB | 738 | 92,875 |
| Sicilian | 1.5 kB | 2 | 50 |
| Sindhi | 117.1 MB | 15,516 | 10,685,611 |
| Sinhala | 2.0 GB | 108,593 | 113,179,741 |
| Slovak | 16.5 GB | 2,409,555 | 1,619,121,944 |
| Slovenian | 1.2 GB | 351,894 | 118,400,246 |
| Somali | 2.1 kB | 3 | 109 |
| South Azerbaijani | 14.1 MB | 5,381 | 693,746 |
| Spanish | 381.9 GB | 51,386,247 | 42,829,835,316 |
| Sundanese | 5.0 MB | 263 | 547,145 |
| Swahili | 1.3 MB | 462 | 123,050 |
| Swedish | 48.0 GB | 7,541,278 | 5,078,331,128 |
| Tajik | 870.9 MB | 46,366 | 56,627,727 |
| Tamil | 11.4 GB | 556,772 | 452,343,748 |
| Tatar | 915.3 MB | 76,398 | 51,875,265 |
| Telugu | 3.4 GB | 249,756 | 137,752,065 |
| Thai | 66.1 GB | 5,030,254 | 1,626,779,846 |
| Tibetan | 234.5 MB | 18,683 | 2,286,269 |
| Turkish | 75.1 GB | 10,826,031 | 6,421,221,358 |
| Turkmen | 4.4 MB | 2,485 | 276,632 |
| Ukrainian | 48.8 GB | 4,558,214 | 2,879,585,992 |
| Emiliano-Romagnolo | 901 Bytes | 1 | 53 |
| Upper Sorbian | 132.8 kB | 110 | 8,825 |
| Urdu | 3.4 GB | 336,994 | 332,816,354 |
| Uyghur | 201.9 MB | 18,556 | 11,240,889 |
| Uzbek | 19.9 MB | 9,526 | 1,370,842 |
| Vietnamese | 98.9 GB | 9,587,233 | 12,283,185,482 |
| Volapük | 825.9 kB | 661 | 57,039 |
| Walloon | 105.7 kB | 138 | 4,386 |
| Waray | 7.6 MB | 933 | 830,872 |
| Welsh | 409.3 MB | 90,378 | 49,488,495 |
| Western Frisian | 75.3 MB | 21,946 | 6,357,929 |
| Western Mari | 743.5 kB | 155 | 43,916 |
| Western Panjabi | 46.7 MB | 6,790 | 4,060,419 |
| Wu Chinese | 137.2 kB | 88 | 3,056 |
| Yiddish | 232.5 MB | 23,418 | 15,809,780 |
| Yoruba | 24.7 kB | 26 | 1,042 |
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText).
The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post on our [website](https://oscar-corpus.com).
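The shard-level parallelism described above can be illustrated with a small sketch. The real pipeline is written in Rust with rayon; everything below (the toy shards and the trivial `process_shard` logic) is purely illustrative, not Ungoliant code:

```python
from concurrent.futures import ThreadPoolExecutor

def process_shard(shard):
    """Classify every record in one shard. Shards are independent,
    so they can be processed concurrently (toy logic only)."""
    return [(record, "en" if record.isascii() else "other") for record in shard]

# Toy stand-ins for Common Crawl shards, each holding a few records.
shards = [["hello world", "caffè"], ["bonjour"]]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_shard, shards))

print(results)
```

The key design point mirrored here is that work is partitioned at the shard and record level, so no coordination is needed between workers.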
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the **November/December 2021** snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.
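The WET layout (gzip-compressed records, each with a header block separated from the plain-text body by a blank line) can be sketched with the standard library alone. The record format below is simplified and the sample content is invented; real WET files carry many more header fields:

```python
import gzip

# A tiny, hand-built stand-in for a WET stream (not real crawl data).
sample = (
    "WARC/1.0\r\n"
    "WARC-Type: conversion\r\n"
    "Content-Length: 11\r\n"
    "\r\n"
    "hello world\r\n"
    "\r\n"
    "WARC/1.0\r\n"
    "WARC-Type: conversion\r\n"
    "Content-Length: 7\r\n"
    "\r\n"
    "bonjour\r\n"
).encode()

compressed = gzip.compress(sample)

def iter_records(raw):
    """Yield (headers, body) pairs from a simplified WET-style stream."""
    text = gzip.decompress(raw).decode()
    for chunk in text.split("WARC/1.0\r\n"):
        if not chunk.strip():
            continue
        head, _, body = chunk.partition("\r\n\r\n")
        headers = dict(line.split(": ", 1) for line in head.splitlines())
        yield headers, body.strip()

records = list(iter_records(compressed))
print(len(records))  # 2
```

In the real pipeline each record's body is what gets language-classified and filtered; the headers carry the per-document metadata that OSCAR 22.01 preserves.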
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Julien Abadji](https://ujj.space), [Pedro Ortiz Suarez](https://portizs.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@ARTICLE{2022arXiv220106642A,
author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = jan,
eid = {arXiv:2201.06642},
pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
eprint = {2201.06642},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@ARTICLE{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Beno{\^i}t",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^i}t Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox), [@Uinelj](https://github.com/Uinelj) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
mteb/mtop_intent | 2022-09-27T19:10:23.000Z | [
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:th",
"region:us"
] | mteb | null | null | null | 2 | 932 | ---
language:
- de
- en
- es
- fr
- hi
- th
--- |
mattmdjaga/human_parsing_dataset | 2023-09-11T09:07:44.000Z | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"size_categories:10K<n<100K",
"region:us"
] | mattmdjaga | null | null | null | 10 | 932 | ---
size_categories:
- 10K<n<100K
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 5892290030.116
num_examples: 17706
download_size: 5893438158
dataset_size: 5892290030.116
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Human parsing data (ATR)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains 17,706 image and mask pairs. It is a copy of the
[Deep Human Parsing](https://github.com/lemondan/HumanParsing-Dataset) ATR dataset. The mask labels are:
"0": "Background",
"1": "Hat",
"2": "Hair",
"3": "Sunglasses",
"4": "Upper-clothes",
"5": "Skirt",
"6": "Pants",
"7": "Dress",
"8": "Belt",
"9": "Left-shoe",
"10": "Right-shoe",
"11": "Face",
"12": "Left-leg",
"13": "Right-leg",
"14": "Left-arm",
"15": "Right-arm",
"16": "Bag",
"17": "Scarf"
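The label list above can be turned into the `id2label`/`label2id` mappings that segmentation frameworks commonly expect. The mapping is copied verbatim from the list; the toy mask and usage are illustrative only:

```python
id2label = {
    0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses",
    4: "Upper-clothes", 5: "Skirt", 6: "Pants", 7: "Dress",
    8: "Belt", 9: "Left-shoe", 10: "Right-shoe", 11: "Face",
    12: "Left-leg", 13: "Right-leg", 14: "Left-arm", 15: "Right-arm",
    16: "Bag", 17: "Scarf",
}
label2id = {name: idx for idx, name in id2label.items()}

# Decode the class ids present in a (toy) mask.
toy_mask = [[0, 2, 2], [4, 4, 0]]
present = sorted({id2label[v] for row in toy_mask for v in row})
print(present)  # ['Background', 'Hair', 'Upper-clothes']
```

In practice the same mapping would be applied to the pixel values of the dataset's `mask` images.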
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
```bibtex
@ARTICLE{ATR, author={Xiaodan Liang and Si Liu and Xiaohui Shen and Jianchao Yang and Luoqi Liu and Jian Dong and Liang Lin and Shuicheng Yan}, journal={Pattern Analysis and Machine Intelligence, IEEE Transactions on}, title={Deep Human Parsing with Active Template Regression}, year={2015}, volume={37}, number={12}, pages={2402-2414}, doi={10.1109/TPAMI.2015.2408360}, ISSN={0162-8828}, month={Dec}}
@InProceedings{CO-CNN, author={Xiaodan Liang and Chunyan Xu and Xiaohui Shen and Jianchao Yang and Si Liu and Jinhui Tang and Liang Lin and Shuicheng Yan}, journal ={Pattern Analysis and Machine Intelligence, IEEE Transactions on}, title={ICCV}, year={2015}, }
``` |
quac | 2023-01-25T14:43:01.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:mit",
"arxiv:1808.07036",
"region:us"
] | null | Question Answering in Context is a dataset for modeling, understanding,
and participating in information seeking dialog. Data instances consist
of an interactive dialog between two crowd workers: (1) a student who
poses a sequence of freeform questions to learn as much as possible
about a hidden Wikipedia text, and (2) a teacher who answers the questions
by providing short excerpts (spans) from the text. QuAC introduces
challenges not found in existing machine comprehension datasets: its
questions are often more open-ended, unanswerable, or only meaningful
within the dialog context. | @inproceedings{choi-etal-2018-quac,
title = "QUAC: Question answering in context",
abstract = "We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of reference models, including a recently state-of-the-art reading comprehension architecture extended to model dialog context. Our best model underperforms humans by 20 F1, suggesting that there is significant room for future work on this data. Dataset, baseline, and leaderboard available at http://quac.ai.",
author = "Eunsol Choi and He He and Mohit Iyyer and Mark Yatskar and Yih, {Wen Tau} and Yejin Choi and Percy Liang and Luke Zettlemoyer",
year = "2018",
language = "English (US)",
series = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018",
publisher = "Association for Computational Linguistics",
pages = "2174--2184",
editor = "Ellen Riloff and David Chiang and Julia Hockenmaier and Jun'ichi Tsujii",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018",
note = "2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 ; Conference date: 31-10-2018 Through 04-11-2018",
} | null | 13 | 930 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
- extractive-qa
paperswithcode_id: quac
pretty_name: Question Answering in Context
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: wikipedia_page_title
dtype: string
- name: background
dtype: string
- name: section_title
dtype: string
- name: context
dtype: string
- name: turn_ids
sequence: string
- name: questions
sequence: string
- name: followups
sequence:
class_label:
names:
'0': y
'1': n
'2': m
- name: yesnos
sequence:
class_label:
names:
'0': y
'1': n
'2': x
- name: answers
sequence:
- name: texts
sequence: string
- name: answer_starts
sequence: int32
- name: orig_answers
struct:
- name: texts
sequence: string
- name: answer_starts
sequence: int32
config_name: plain_text
splits:
- name: train
num_bytes: 58174754
num_examples: 11567
- name: validation
num_bytes: 7375938
num_examples: 1000
download_size: 77043986
dataset_size: 65550692
---
# Dataset Card for Question Answering in Context
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [QuAC](https://quac.ai/)
- **Paper:** [QuAC: Question Answering in Context](https://arxiv.org/abs/1808.07036)
- **Leaderboard:** [QuAC's leaderboard](https://quac.ai/)
- **Point of Contact:** [Google group](https://groups.google.com/forum/#!forum/quac_ai)
### Dataset Summary
Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.
### Supported Tasks and Leaderboards
The core problem involves predicting a text span to answer a question about a Wikipedia section (extractive question answering). Since QuAC questions include a dialog component, each instance includes a “dialog history” of questions and answers asked in the dialog prior to the given question, along with some additional metadata.
Authors provided [an official evaluation script](https://s3.amazonaws.com/my89public/quac/scorer.py) for evaluation.
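The official scorer computes a word-overlap F1 between the predicted span and the gold span(s). A simplified sketch of that metric is below; the real script additionally handles `CANNOTANSWER`, answer normalization, and multiple references per question:

```python
from collections import Counter

def span_f1(prediction: str, gold: str) -> float:
    """Word-overlap F1 between two answer spans (simplified)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(span_f1("the Birmingham Black Barons", "Birmingham Black Barons"))
```

A prediction that over-extends the gold span loses precision but keeps full recall, which is why span-level F1 is preferred over exact match for dialog QA.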
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A validation examples looks like this (one entry per dialogue):
```
{
'dialogue_id': 'C_6abd2040a75d47168a9e4cca9ca3fed5_0',
'wikipedia_page_title': 'Satchel Paige',
'background': 'Leroy Robert "Satchel" Paige (July 7, 1906 - June 8, 1982) was an American Negro league baseball and Major League Baseball (MLB) pitcher who became a legend in his own lifetime by being known as perhaps the best pitcher in baseball history, by his longevity in the game, and by attracting record crowds wherever he pitched. Paige was a right-handed pitcher, and at age 42 in 1948, he was the oldest major league rookie while playing for the Cleveland Indians. He played with the St. Louis Browns until age 47, and represented them in the All-Star Game in 1952 and 1953.',
'section_title': 'Chattanooga and Birmingham: 1926-29',
'context': 'A former friend from the Mobile slums, Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month, of which Paige would collect $50 with the rest going to his mother. He also agreed to pay Lula Paige a $200 advance, and she agreed to the contract. The local newspapers--the Chattanooga News and Chattanooga Times--recognized from the beginning that Paige was special. In April 1926, shortly after his arrival, he recorded nine strikeouts over six innings against the Atlanta Black Crackers. Part way through the 1927 season, Paige\'s contract was sold to the Birmingham Black Barons of the major Negro National League (NNL). According to Paige\'s first memoir, his contract was for $450 per month, but in his second he said it was for $275. Pitching for the Black Barons, Paige threw hard but was wild and awkward. In his first big game in late June 1927, against the St. Louis Stars, Paige incited a brawl when his fastball hit the hand of St. Louis catcher Mitchell Murray. Murray then charged the mound and Paige raced for the dugout, but Murray flung his bat and struck Paige above the hip. The police were summoned, and the headline of the Birmingham Reporter proclaimed a "Near Riot." Paige improved and matured as a pitcher with help from his teammates, Sam Streeter and Harry Salmon, and his manager, Bill Gatewood. He finished the 1927 season 7-1 with 69 strikeouts and 26 walks in 89 1/3 innings. Over the next two seasons, Paige went 12-5 and 10-9 while recording 176 strikeouts in 1929. (Several sources credit his 1929 strikeout total as the all-time single-season record for the Negro leagues, though there is variation among the sources about the exact number of strikeouts.) On April 29 of that season he recorded 17 strikeouts in a game against the Cuban Stars, which exceeded what was then the major league record of 16 held by Noodles Hahn and Rube Waddell. 
Six days later he struck out 18 Nashville Elite Giants, a number that was tied in the white majors by Bob Feller in 1938. Due to his increased earning potential, Barons owner R. T. Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd, with both Jackson and Paige taking a cut. CANNOTANSWER',
'turn_ids': ['C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#0', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#1', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#2', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#3', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#4', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#5', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#6', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#7'],
'questions': ['what did he do in Chattanooga', 'how did he discover him', 'what position did he play', 'how did they help him', 'when did he go to Birmingham', 'how did he feel about this', 'how did he do with this team', 'What made him leave the team'],
'followups': [0, 2, 0, 1, 0, 1, 0, 1],
'yesnos': [2, 2, 2, 2, 2, 2, 2, 2]
'answers': {
'answer_starts': [
[480, 39, 0, 67, 39],
[2300, 2300, 2300],
[848, 1023, 848, 848, 1298],
[2300, 2300, 2300, 2300, 2300],
[600, 600, 600, 634, 600],
[2300, 2300, 2300],
[939, 1431, 848, 848, 1514],
[2106, 2106, 2165]
],
'texts': [
['April 1926, shortly after his arrival, he recorded nine strikeouts over six innings against the Atlanta Black Crackers.', 'Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige', 'A former friend from the Mobile slums, Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League.', 'manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month,', 'Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month,'],
['CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER'],
['Pitching for the Black Barons,', 'fastball', 'Pitching for', 'Pitching', 'Paige improved and matured as a pitcher with help from his teammates,'], ['CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER'],
["Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", "Paige's contract was sold to the Birmingham Black Barons of the major Negro National League (NNL", "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons"], ['CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER'],
['game in late June 1927, against the St. Louis Stars, Paige incited a brawl when his fastball hit the hand of St. Louis catcher Mitchell Murray.', 'He finished the 1927 season 7-1 with 69 strikeouts and 26 walks in 89 1/3 innings.', 'Pitching for the Black Barons, Paige threw hard but was wild and awkward.', 'Pitching for the Black Barons, Paige threw hard but was wild and awkward.', 'Over the next two seasons, Paige went 12-5 and 10-9 while recording 176 strikeouts in 1929. ('],
['Due to his increased earning potential, Barons owner R. T. Jackson would "rent" Paige out to other ball clubs', 'Due to his increased earning potential, Barons owner R. T. Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd,', 'Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd, with both Jackson and Paige taking a cut.']
]
},
'orig_answers': {
'answer_starts': [39, 2300, 1298, 2300, 600, 2300, 1514, 2165],
'texts': ['Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month,', 'CANNOTANSWER', 'Paige improved and matured as a pitcher with help from his teammates,', 'CANNOTANSWER', "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", 'CANNOTANSWER', 'Over the next two seasons, Paige went 12-5 and 10-9 while recording 176 strikeouts in 1929. (', 'Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd, with both Jackson and Paige taking a cut.']
},
}
```
### Data Fields
- `dialogue_id`: ID of the dialogue.
- `wikipedia_page_title`: title of the Wikipedia page.
- `background`: first paragraph of the main Wikipedia article.
- `section_title`: Wikipedia section title.
- `context`: Wikipedia section text.
- `turn_ids`: list of dialogue turn identifiers. One list of IDs per dialogue.
- `questions`: list of questions in the dialogue. One list of questions per dialogue.
- `followups`: list of followup actions in the dialogue. One list of followups per dialogue. `y`: follow up, `m`: maybe follow up, `n`: don't follow up.
- `yesnos`: list of yes/no in the dialogue. One list of yes/nos per dialogue. `y`: yes, `n`: no, `x`: neither.
- `answers`: dictionary of answers to the questions (validation step of data collection)
- `answer_starts`: list of list of starting offsets. For training, list of single element lists (one answer per question).
- `texts`: list of list of span texts answering questions. For training, list of single element lists (one answer per question).
- `orig_answers`: dictionary of original answers (the ones provided by the teacher in the dialogue)
- `answer_starts`: list of starting offsets
- `texts`: list of span texts answering questions.
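The parallel lists above line up index-by-index; as a minimal sketch (using a small hand-written record rather than the real loader), each question can be paired with its reference answer spans like this:

```python
# Minimal sketch of how the parallel lists in a QuAC-style record line up.
# The record below is hand-written for illustration, not loaded from the dataset.
record = {
    "questions": ["what did he do in Chattanooga", "how did he discover him"],
    "answers": {
        "answer_starts": [[480], [39]],
        "texts": [
            ["Paige improved and matured as a pitcher"],
            ["Alex Herman, was the player/manager"],
        ],
    },
}

# Zip each question with its (possibly multiple) reference answers.
qa_pairs = [
    (question, list(zip(texts, starts)))
    for question, texts, starts in zip(
        record["questions"],
        record["answers"]["texts"],
        record["answers"]["answer_starts"],
    )
]

for question, refs in qa_pairs:
    for text, start in refs:
        print(f"{question!r} -> {text!r} (offset {start})")
```

In the validation and test splits each inner list holds up to five reference answers per question; in the training split it holds exactly one.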
### Data Splits
QuAC contains 98,407 QA pairs from 13,594 dialogs. The dialogs were conducted on 8,854 unique sections from 3,611 unique Wikipedia articles, and every dialog contains between four and twelve questions.
The dataset comes with a train/dev split such that there is no overlap in sections across splits. Furthermore, the dev and test sets only include one
dialog per section, in contrast to the training set which can have multiple dialogs per section. Dev and test instances come with five reference answers instead of just one as in the training set; we obtain the extra references to improve the reliability of our evaluations, as questions can have multiple valid answer spans. The test set is not publicly available; instead, researchers must submit their models to the [leaderboard](http://quac.ai), which will run the model on our hidden test set.
The training set contains 83,568 questions (11,567 dialogues), while 7,354 (1,000) and 7,353 (1,002) separate questions are reserved for the dev and test set respectively.
## Dataset Creation
### Curation Rationale
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Source Data
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Initial Data Collection and Normalization
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Who are the source language producers?
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Annotations
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Annotation process
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Who are the annotators?
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Personal and Sensitive Information
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Discussion of Biases
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Other Known Limitations
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
## Additional Information
### Dataset Curators
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Licensing Information
The dataset is distributed under the MIT license.
### Citation Information
```
@inproceedings{choi-etal-2018-quac,
title = "{Q}u{AC}: Question Answering in Context",
author = "Choi, Eunsol and
He, He and
Iyyer, Mohit and
Yatskar, Mark and
Yih, Wen-tau and
Choi, Yejin and
Liang, Percy and
Zettlemoyer, Luke",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1241",
doi = "10.18653/v1/D18-1241",
pages = "2174--2184",
abstract = "We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of reference models, including a recently state-of-the-art reading comprehension architecture extended to model dialog context. Our best model underperforms humans by 20 F1, suggesting that there is significant room for future work on this data. Dataset, baseline, and leaderboard available at \url{http://quac.ai}.",
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
SetFit/20_newsgroups | 2022-02-03T08:27:00.000Z | [
"region:us"
] | SetFit | null | null | null | 5 | 930 | This is a version of the [20 newsgroups dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#the-20-newsgroups-text-dataset) that is provided in Scikit-learn. From the Scikit-learn docs:
> The 20 newsgroups dataset comprises around 18000 newsgroups posts on 20 topics split in two subsets: one for training (or development) and the other one for testing (or for performance evaluation). The split between the train and test set is based upon messages posted before and after a specific date.
We followed the [recommended practice](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#filtering-text-for-more-realistic-training) to remove headers, signature blocks, and quotations from each news article. |
HuggingFaceH4/oasst1_en | 2023-06-06T13:54:52.000Z | [
"license:apache-2.0",
"region:us"
] | HuggingFaceH4 | null | null | null | 24 | 928 | ---
license: apache-2.0
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_ift
num_bytes: 30802170.224582057
num_examples: 19111
- name: test_ift
num_bytes: 3423358.775417942
num_examples: 2124
download_size: 18666186
dataset_size: 34225529.0
---
# Dataset Card for `oasst1_en`
This dataset is a processed version of [OpenAssistant's `oasst1` dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) to:
* Filter all conversations for English.
* Group all conversation trees such that each row in the dataset corresponds to a single conversation.
See the `create_dataset.py` script in this repo for the processing details.
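The grouping step can be pictured with a small sketch — field names here are illustrative, not the actual `create_dataset.py` logic — that walks flat `message_id`/`parent_id` rows from each leaf back to the root, yielding one conversation per row:

```python
# Sketch of grouping a flat message table into linear conversations.
# Field names are illustrative; see create_dataset.py for the real processing.
messages = [
    {"message_id": "a", "parent_id": None, "role": "user", "content": "Hi!"},
    {"message_id": "b", "parent_id": "a", "role": "assistant", "content": "Hello."},
    {"message_id": "c", "parent_id": "b", "role": "user", "content": "Thanks."},
]

by_id = {m["message_id"]: m for m in messages}
parents = {m["parent_id"] for m in messages}

def thread_from_leaf(leaf_id):
    """Walk parent pointers from a leaf back to the root, then reverse."""
    chain = []
    node = by_id.get(leaf_id)
    while node is not None:
        chain.append({"role": node["role"], "content": node["content"]})
        node = by_id.get(node["parent_id"])
    return list(reversed(chain))

# Leaves are messages that no other message points to.
conversations = [
    thread_from_leaf(m["message_id"]) for m in messages
    if m["message_id"] not in parents
]
```

Each resulting conversation is a list of `{"role", "content"}` turns, matching the `messages` feature described in the dataset info above.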
## Splits
| Split | Description | Size |
| :--- | :--- | :--- |
| `train_ift` | The full training split | 19034 |
| `test_ift` | The full test split | 2115 | |
ccdv/govreport-summarization | 2022-10-24T20:32:47.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"conditional-text-generation",
"arxiv:2104.02112",
"region:us"
] | ccdv | GovReport dataset for summarization.
From paper: "Efficient Attentions for Long Document Summarization" by L. Huang et al.
See: https://arxiv.org/pdf/2104.02112.pdf
See: https://github.com/luyang-huang96/LongDocSum | @misc{huang2021efficient,
title={Efficient Attentions for Long Document Summarization},
author={Luyang Huang and Shuyang Cao and Nikolaus Parulian and Heng Ji and Lu Wang},
year={2021},
eprint={2104.02112},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 9 | 926 | ---
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
---
# GovReport dataset for summarization
Dataset for summarization of long documents.\
Adapted from this [repo](https://github.com/luyang-huang96/LongDocSum) and this [paper](https://arxiv.org/pdf/2104.02112.pdf)\
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/govreport-summarization": ("report", "summary")
```
### Data Fields
- `id`: paper id
- `report`: a string containing the body of the report
- `summary`: a string containing the summary of the report
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. \
Token counts with a RoBERTa tokenizer.
| Dataset Split | Number of Instances | Avg. tokens |
| ------------- | --------------------|:----------------------|
| Train | 17,517 | < 9,000 / < 500 |
| Validation | 973 | < 9,000 / < 500 |
| Test | 973 | < 9,000 / < 500 |
# Cite original article
```
@misc{huang2021efficient,
title={Efficient Attentions for Long Document Summarization},
author={Luyang Huang and Shuyang Cao and Nikolaus Parulian and Heng Ji and Lu Wang},
year={2021},
eprint={2104.02112},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mteb/biorxiv-clustering-s2s | 2022-09-27T19:15:35.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 919 | ---
language:
- en
--- |
nelorth/oxford-flowers | 2022-12-11T02:38:31.000Z | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"source_datasets:https://www.robots.ox.ac.uk/~vgg/data/flowers",
"license:unknown",
"flowers",
"oxford",
"region:us"
] | nelorth | null | null | null | 6 | 916 | ---
pretty_name: Oxford Flowers Dataset
source_datasets: https://www.robots.ox.ac.uk/~vgg/data/flowers
tags:
- flowers
- oxford
task_categories:
- image-classification
- unconditional-image-generation
license:
- unknown
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '10'
'2': '100'
'3': '101'
'4': '102'
'5': '11'
'6': '12'
'7': '13'
'8': '14'
'9': '15'
'10': '16'
'11': '17'
'12': '18'
'13': '19'
'14': '2'
'15': '20'
'16': '21'
'17': '22'
'18': '23'
'19': '24'
'20': '25'
'21': '26'
'22': '27'
'23': '28'
'24': '29'
'25': '3'
'26': '30'
'27': '31'
'28': '32'
'29': '33'
'30': '34'
'31': '35'
'32': '36'
'33': '37'
'34': '38'
'35': '39'
'36': '4'
'37': '40'
'38': '41'
'39': '42'
'40': '43'
'41': '44'
'42': '45'
'43': '46'
'44': '47'
'45': '48'
'46': '49'
'47': '5'
'48': '50'
'49': '51'
'50': '52'
'51': '53'
'52': '54'
'53': '55'
'54': '56'
'55': '57'
'56': '58'
'57': '59'
'58': '6'
'59': '60'
'60': '61'
'61': '62'
'62': '63'
'63': '64'
'64': '65'
'65': '66'
'66': '67'
'67': '68'
'68': '69'
'69': '7'
'70': '70'
'71': '71'
'72': '72'
'73': '73'
'74': '74'
'75': '75'
'76': '76'
'77': '77'
'78': '78'
'79': '79'
'80': '8'
'81': '80'
'82': '81'
'83': '82'
'84': '83'
'85': '84'
'86': '85'
'87': '86'
'88': '87'
'89': '88'
'90': '89'
'91': '9'
'92': '90'
'93': '91'
'94': '92'
'95': '93'
'96': '94'
'97': '95'
'98': '96'
'99': '97'
'100': '98'
'101': '99'
splits:
- name: train
num_bytes: 308119477.446
num_examples: 7169
- name: test
num_bytes: 43247670.14
num_examples: 1020
download_size: 346597973
dataset_size: 351367147.58599997
---
# Dataset Card for "oxford-flowers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggan/flowers-102-categories | 2022-04-04T17:21:42.000Z | [
"region:us"
] | huggan | null | null | null | 4 | 908 | Entry not found |
wyzelabs/RuleRecommendation | 2023-09-15T19:26:50.000Z | [
"license:cc-by-nc-nd-4.0",
"IoT",
"Smart Home",
"Rule Recommendation",
"Recommendation Systems",
"region:us"
] | wyzelabs | null | null | null | 8 | 907 | ---
license: cc-by-nc-nd-4.0
extra_gated_heading: >-
Wyze Rule Recommendation Challenge Participation and Dataset Access Terms and
Conditions
extra_gated_prompt: >-
Please read the <a href="https://drive.google.com/uc?id=1eM3RQYeQUZeiIo8cqTgC3ixM17Vhd6QL" target="_blank">Wyze Rule Recommendation Challenge Participation and Dataset Access Terms and Conditions</a>
carefully. In order to gain access to the data and take part in the Wyze Rule
Recommendation challenge, you must first read and consent to these terms and
conditions.
extra_gated_fields:
Name: text
Affiliation: text
Email: text
I have read and agree to the Wyze Rule Recommendation Challenge Participation and Dataset Access Terms and Conditions: checkbox
tags:
- IoT
- Smart Home
- Rule Recommendation
- Recommendation Systems
pretty_name: Wyze Rule Recommendation Dataset
---
# Wyze Rule Recommendation Dataset
<img src="https://drive.google.com/uc?id=1TrgrQk8mWcwseDhP5htvJya8bThGUw1Y" alt="WRR" width="100%"/>
<!---
## Dataset Description
- **Paper:TBA**
- **Leaderboard:TBA**
- **Point of Contact:**
--->
## Dataset Summary
The Wyze Rule dataset is a new large-scale dataset designed specifically for smart home rule recommendation research. It contains over 1 million rules generated by 300,000 users from Wyze Labs, offering an extensive collection of real-world automation rules tailored to users' unique smart home setups.
The goal of the Wyze Rule dataset is to advance research and development of personalized rule recommendation systems for smart home automation. As smart devices proliferate in homes, automating their interactions becomes increasingly complex. Rules specify how a user's devices can be connected to work together automatically, like a motion sensor triggering a camera to record. But with users owning different devices, manually configuring these rules is difficult. This dataset enables creating intelligent algorithms that automatically recommend customized rules tailored to each user's specific smart home setup. By training machine learning models on the diverse real-world data of over 1 million rules from 300,000 Wyze users, researchers can build personalized recommendation systems. These would simplify and enhance automation for end users by suggesting rules that connect their devices in useful ways, while respecting their privacy. The Wyze Rule dataset provides the large-scale and varied data needed to make such personalized, private rule recommendation a reality.
The key features of this dataset are:
- Over 1 million automation rules governing how users' smart devices interact
- Rules are highly personalized based on each user's specific devices and needs
- 16 distinct device types like cameras, sensors, lights etc.
- There are 44 different trigger states and 46 different actions across various devices.
- 1,641 unique trigger-action combinations (trigger_device + trigger_state + action + action_device) capturing diverse automation logic
- Non-IID distribution among users makes it suitable for federated learning
- Allows development of personalized rule recommendation systems while preserving user privacy
- Enables benchmarking different algorithms on large-scale real-world data
Overall, the Wyze Rule dataset bridges the gap between rule recommendation research and practical applications, facilitating the creation of intelligent home automation systems. Its scale, diversity, and focus on individual users' needs make it a valuable resource for advancing personalized recommendation techniques.
## Dataset Structure
The Wyze Rule dataset contains two main CSV files - one for the rules and one for the devices owned by each user.
Each rule has attributes like user ID, trigger device, trigger state, action device, and action.
For example, a rule could be: user 123, contact sensor, "open", light bulb, "turn on".
This captures the trigger condition and the action to take. The device file maps user IDs to the specific devices owned by each user.
This is key because automating different device setups requires different valid rules.
With 16 device types and 1,641 trigger-action combinations, the rules reflect each user's customized needs.
Each user can have multiple instances of a device type, like several motion sensors.
The non-IID distribution of rules among 300,000 users with varying device combinations makes this dataset uniquely suitable for developing personalized federated learning algorithms for rule recommendation.
By separating rules into triggers and actions, the data structure provides flexibility lacking in user-item matrices that treat rules as single items.
Overall, the real-world granularity enables personalized automation.
### Data Fields
The two main files of this dataset, rules and devices, have the following fields:
1. Rule Dataset: This dataset contains data related to the rules that govern the behavior of Wyze smart home devices. Each row represents a single rule and contains various attributes describing the rule. The attributes of this file are as follows:
+ `user_id` (int): A unique integer identifier for the user associated with the rule. This identifier has been anonymized and does not contain any information related to the Wyze users.
+ `trigger_device` (str): The model of the device that triggers the rule when a specific condition is met. It may be a Wyze smart home device such as a sensor or a camera.
+ `trigger_device_id` (int): A unique integer identifier for the trigger device.
+ `trigger_state` (str): The state or condition that needs to be met on the trigger device for the rule to be activated. It may represent values such as "on," "off," "motion detected," or "sensor open."
+ `trigger_state_id` (int): A unique integer identifier for the trigger state.
+ `action` (str): The action to be executed on the action device when the rule is triggered. It may include values like "power on," "power off," "start recording," or "change brightness."
+ `action_id` (int): A unique integer identifier for the action.
+ `action_device` (str): The model of the device that performs an action when the rule is triggered. It is a Wyze smart home device such as a light or a camera.
+ `action_device_id` (int): A unique integer identifier for the action device.
+ `rule` (str): The combination of 4 ids as follows: `trigger_device_id`\_\_`trigger_state_id`\_\_`action_id`\_\_`action_device_id`
2. Device Dataset: This file contains data related to the devices owned by users. Each row represents a single device and contains information about the device model and its association with a specific user. A number of devices in this dataset are not used in any rules by users and hence are not present in the rule dataset. The attributes of this dataset are as follows:
+ `user_id` (int): A unique integer identifier for the user associated with the device.
+ `device_id` (int): A unique integer identifier for the device.
+ `device_model` (str): The model or type of the device owned by the user. It represents various Wyze smart home devices such as a camera, a sensor, or a switch
There are a total of 16 different device types included in this dataset as follows:
1. `Camera`
2. `ClimateSensor`
3. `Cloud`
4. `ContactSensor`
5. `Irrigation`
6. `LeakSensor`
7. `Light`
8. `LightStrip`
9. `Lock`
10. `MeshLight`
11. `MotionSensor`
12. `OutdoorPlug`
13. `Plug`
14. `RobotVacuum`
15. `Switch`
16. `Thermostat`
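For illustration, the `rule` key described above — four integer ids joined by double underscores — can be composed and decomposed with a few lines of Python (a sketch, not official Wyze tooling):

```python
# Sketch: build and parse the `rule` key, which concatenates four integer ids
# (trigger device, trigger state, action, action device) with "__" separators.
def make_rule(trigger_device_id, trigger_state_id, action_id, action_device_id):
    return "__".join(
        str(i)
        for i in (trigger_device_id, trigger_state_id, action_id, action_device_id)
    )

def parse_rule(rule):
    trigger_device_id, trigger_state_id, action_id, action_device_id = map(
        int, rule.split("__")
    )
    return {
        "trigger_device_id": trigger_device_id,
        "trigger_state_id": trigger_state_id,
        "action_id": action_id,
        "action_device_id": action_device_id,
    }

rule = make_rule(3, 12, 7, 5)  # example ids, chosen arbitrarily
```

Splitting rules into these four components, rather than treating each rule as an opaque item, is what gives the dataset its flexibility for recommendation research.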
### Data Splits
We provide two public splits: `train` and `test`. The `train` split contains all the available rules set by the users in the dataset, as well as their device lists. In the `test` split, one rule has been omitted at random for each user. The goal of a recommendation system is to recover that omitted rule with high probability. The ground truth for this split will be released after the Wyze Rule Recommendation challenge has finished.
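Under this leave-one-out setup, a natural metric is hit rate at k: whether the held-out rule appears among a model's top-k recommendations. A minimal sketch (the challenge's official metric may differ):

```python
# Sketch of hit@k evaluation for leave-one-out rule recommendation.
def hit_at_k(ranked_rules, held_out_rule, k=5):
    """Return 1.0 if the held-out rule is in the top-k recommendations, else 0.0."""
    return float(held_out_rule in ranked_rules[:k])

# Hypothetical ranked recommendations for one user, as `rule` key strings.
ranked = ["3__12__7__5", "1__4__2__9", "8__0__7__5"]
score = hit_at_k(ranked, "8__0__7__5", k=3)
```

Averaging this score over all test users gives an overall hit rate for a recommender.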
### Personal and Sensitive Information
Protecting user privacy was a top priority when creating the Wyze Rule dataset.
Any personally identifiable information or sensitive data that could reveal users' identities has been meticulously obscured.
The user IDs have been anonymized into random numeric values, removing any links to actual Wyze users.
The rules simply capture abstract triggers and actions for automation using generic device types.
By only retaining high-level functionality while erasing all personal attributes, the Wyze Rule dataset enables developing personalized recommendation algorithms without compromising user privacy.
Researchers can leverage this rich real-world data to advance the field of automation systems significantly while ensuring ethical data practices.
The dataset creators' commitment to protecting users' privacy will help propel innovation responsibly.
## Considerations for Using the Data
This data is mainly released for the [Wyze Rule Recommendation Challenge](https://huggingface.co/spaces/competitions/wyze-rule-recommendation).
### Licensing Information
This dataset is licensed under CC BY-NC-ND 4.0, which prohibits commercial use, distribution, modification, and reproduction of the data without permission from the copyright holder.
### Citation Information
TBA
|
mteb/reddit-clustering | 2022-09-27T19:13:31.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 894 | ---
language:
- en
--- |
dongyoung4091/hh-generated_flan_t5_rx_xl_all | 2023-09-03T02:17:32.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 894 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: response
dtype: string
- name: prompt
dtype: string
- name: model_A
dtype: float64
- name: model_B
dtype: float64
- name: external_rm1
dtype: float64
- name: external_rm2
dtype: float64
- name: RM_enough-detail
dtype: float64
- name: RM_fail-to-consider-context
dtype: float64
- name: RM_readability
dtype: float64
- name: zeroshot_helpfulness
dtype: float64
- name: zeroshot_specificity
dtype: float64
- name: zeroshot_intent
dtype: float64
- name: zeroshot_factuality
dtype: float64
- name: zeroshot_easy-to-understand
dtype: float64
- name: zeroshot_relevance
dtype: float64
- name: zeroshot_readability
dtype: float64
- name: zeroshot_enough-detail
dtype: float64
- name: 'zeroshot_biased:'
dtype: float64
- name: zeroshot_fail-to-consider-individual-preferences
dtype: float64
- name: zeroshot_repetetive
dtype: float64
- name: zeroshot_fail-to-consider-context
dtype: float64
- name: zeroshot_too-long
dtype: float64
splits:
- name: train
num_bytes: 7769957
num_examples: 25600
download_size: 3659087
dataset_size: 7769957
---
# Dataset Card for "hh-generated_flan_t5_rx_xl_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xquad_r | 2023-06-01T14:59:54.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|squad",
"source_datasets:extended|xquad",
"language:ar",
"language:de",
"language:el",
"language:en",
"language:es",
"language:hi",
"language:ru",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2004.05484",
"region:us"
] | null | XQuAD-R is a retrieval version of the XQuAD dataset (a cross-lingual extractive QA dataset). Like XQuAD, XQUAD-R is an 11-way parallel dataset, where each question appears in 11 different languages and has 11 parallel correct answers across the languages. | @article{roy2020lareqa,
title={LAReQA: Language-agnostic answer retrieval from a multilingual pool},
author={Roy, Uma and Constant, Noah and Al-Rfou, Rami and Barua, Aditya and Phillips, Aaron and Yang, Yinfei},
journal={arXiv preprint arXiv:2004.05484},
year={2020}
} | null | 2 | 893 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ar
- de
- el
- en
- es
- hi
- ru
- th
- tr
- vi
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|squad
- extended|xquad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: xquad-r
pretty_name: LAReQA
dataset_info:
- config_name: ar
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1722799
num_examples: 1190
download_size: 17863417
dataset_size: 1722799
- config_name: de
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1283301
num_examples: 1190
download_size: 17863417
dataset_size: 1283301
- config_name: zh
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 984241
num_examples: 1190
download_size: 17863417
dataset_size: 984241
- config_name: vi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1477239
num_examples: 1190
download_size: 17863417
dataset_size: 1477239
- config_name: en
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1116123
num_examples: 1190
download_size: 17863417
dataset_size: 1116123
- config_name: es
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1273499
num_examples: 1190
download_size: 17863417
dataset_size: 1273499
- config_name: hi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2682975
num_examples: 1190
download_size: 17863417
dataset_size: 2682975
- config_name: el
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2206690
num_examples: 1190
download_size: 17863417
dataset_size: 2206690
- config_name: th
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2854959
num_examples: 1190
download_size: 17863417
dataset_size: 2854959
- config_name: tr
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1210763
num_examples: 1190
download_size: 17863417
dataset_size: 1210763
- config_name: ru
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2136990
num_examples: 1190
download_size: 17863417
dataset_size: 2136990
config_names:
- ar
- de
- el
- en
- es
- hi
- ru
- th
- tr
- vi
- zh
---
# Dataset Card for XQuAD-R
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LAReQA](https://github.com/google-research-datasets/lareqa)
- **Repository:** [XQuAD-R](https://github.com/google-research-datasets/lareqa)
- **Paper:** [LAReQA: Language-agnostic answer retrieval from a multilingual pool](https://arxiv.org/pdf/2004.05484.pdf)
- **Point of Contact:** [Noah Constant](mailto:nconstant@google.com)
### Dataset Summary
XQuAD-R is a retrieval version of the XQuAD dataset (a cross-lingual extractive
QA dataset). Like XQuAD, XQUAD-R is an 11-way parallel dataset, where each
question appears in 11 different languages and has 11 parallel correct answers
across the languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset can be found with the following languages:
* Arabic: `xquad-r/ar.json`
* German: `xquad-r/de.json`
* Greek: `xquad-r/el.json`
* English: `xquad-r/en.json`
* Spanish: `xquad-r/es.json`
* Hindi: `xquad-r/hi.json`
* Russian: `xquad-r/ru.json`
* Thai: `xquad-r/th.json`
* Turkish: `xquad-r/tr.json`
* Vietnamese: `xquad-r/vi.json`
* Chinese: `xquad-r/zh.json`
## Dataset Structure
[More Information Needed]
### Data Instances
An example from `en` config:
```
{'id': '56beb4343aeaaa14008c925b',
'context': "The Panthers defense gave up just 308 points, ranking sixth in the league, while also leading the NFL in interceptions with 24 and boasting four Pro Bowl selections. Pro Bowl defensive tackle Kawann Short led the team in sacks with 11, while also forcing three fumbles and recovering two. Fellow lineman Mario Addison added 6½ sacks. The Panthers line also featured veteran defensive end Jared Allen, a 5-time pro bowler who was the NFL's active career sack leader with 136, along with defensive end Kony Ealy, who had 5 sacks in just 9 starts. Behind them, two of the Panthers three starting linebackers were also selected to play in the Pro Bowl: Thomas Davis and Luke Kuechly. Davis compiled 5½ sacks, four forced fumbles, and four interceptions, while Kuechly led the team in tackles (118) forced two fumbles, and intercepted four passes of his own. Carolina's secondary featured Pro Bowl safety Kurt Coleman, who led the team with a career high seven interceptions, while also racking up 88 tackles and Pro Bowl cornerback Josh Norman, who developed into a shutdown corner during the season and had four interceptions, two of which were returned for touchdowns.",
'question': 'How many points did the Panthers defense surrender?',
'answers': {'text': ['308'], 'answer_start': [34]}}
```
### Data Fields
- `id` (`str`): Unique ID for the context-question pair.
- `context` (`str`): Context for the question.
- `question` (`str`): Question.
- `answers` (`dict`): Answers with the following keys:
- `text` (`list` of `str`): Texts of the answers.
- `answer_start` (`list` of `int`): Start positions for every answer text.
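As in SQuAD-style data, each `answer_start` is a character offset into `context`. A minimal sketch (using the `en` example above, with the context abbreviated) of how the offsets recover the answer texts:

```python
# Sketch only: the context is abbreviated from the `en` example above;
# each answer_start is a character offset into the full context string.
example = {
    "context": "The Panthers defense gave up just 308 points, ranking sixth in the league",
    "answers": {"text": ["308"], "answer_start": [34]},
}

for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    span = example["context"][start:start + len(text)]
    assert span == text  # the offset indexes directly into `context`
```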
### Data Splits
The number of questions and candidate sentences for each language in XQuAD-R is shown in the table below:
| Language | Questions | Candidates |
|----------|-----------|------------|
| ar | 1190 | 1222 |
| de | 1190 | 1276 |
| el | 1190 | 1234 |
| en | 1190 | 1180 |
| es | 1190 | 1215 |
| hi | 1190 | 1244 |
| ru | 1190 | 1219 |
| th | 1190 | 852 |
| tr | 1190 | 1167 |
| vi | 1190 | 1209 |
| zh | 1190 | 1196 |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
The dataset was initially created by Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips and Yinfei Yang, during work done at Google Research.
### Licensing Information
XQuAD-R is distributed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
### Citation Information
```
@article{roy2020lareqa,
title={LAReQA: Language-agnostic answer retrieval from a multilingual pool},
author={Roy, Uma and Constant, Noah and Al-Rfou, Rami and Barua, Aditya and Phillips, Aaron and Yang, Yinfei},
journal={arXiv preprint arXiv:2004.05484},
year={2020}
}
```
### Contributions
Thanks to [@manandey](https://github.com/manandey) for adding this dataset. |
Muennighoff/xP3x-sample | 2023-09-18T13:51:06.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ch",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:ku",
"language:kw",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nb",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:pl",
"language:pt",
"language:qu",
"language:rn",
"language:ro",
"language:ru",
"language:sh",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vo",
"language:yi",
"language:zh",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:bem",
"language:bho",
"language:bjn",
"language:bo",
"language:bug",
"language:ceb",
"language:cjk",
"language:ckb",
"language:crh",
"language:dik",
"language:dyu",
"language:dz",
"language:ee",
"language:fj",
"language:fon",
"language:fur",
"language:fuv",
"language:gaz",
"language:gu",
"language:ht",
"language:ha",
"language:hne",
"language:ig",
"language:ilo",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:kbp",
"language:kea",
"language:khk",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lmo",
"language:ltg",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:mar",
"language:min",
"language:mni",
"language:mos",
"language:npi",
"language:nso",
"language:nus",
"language:ny",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:prs",
"language:quy",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:sc",
"language:ss",
"language:su",
"language:swh",
"language:szl",
"language:taq",
"language:tg",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:tzm",
"language:umb",
"language:uzn",
"language:vec",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zsm",
"language:zu",
"license:apache-2.0",
"region:us"
] | Muennighoff | A multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense reasoning capabilities. | @misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 888 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- af
- ar
- az
- be
- bg
- bn
- br
- bs
- ca
- ch
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fy
- ga
- gd
- gl
- gn
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jv
- ka
- kk
- km
- ko
- ku
- kw
- la
- lb
- lt
- lv
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nb
- nl
- nn
- 'no'
- oc
- pl
- pt
- qu
- rn
- ro
- ru
- sh
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vo
- yi
- zh
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
programming_language:
- Java
- Python
- Jupyter-Notebook
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3x
size_categories:
- 100M<n<1B
task_categories:
- other
---
Can be loaded via e.g.:
```python
from datasets import load_dataset
d = load_dataset("Muennighoff/xP3x-sample", "apps")
```
1,000 rows from random languages and splits of xP3x for each of the multilingual datasets represented in [xP3x](https://huggingface.co/datasets/Muennighoff/xP3x). |
eugenesiow/Div2k | 2022-10-21T04:01:10.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | eugenesiow | DIV2K dataset: DIVerse 2K resolution high quality images as used for the challenges @ NTIRE (CVPR 2017 and
CVPR 2018) and @ PIRM (ECCV 2018) | @InProceedings{Agustsson_2017_CVPR_Workshops,
author = {Agustsson, Eirikur and Timofte, Radu},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
url = "http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf",
month = {July},
year = {2017}
} | null | 2 | 885 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: Div2k
tags:
- other-image-super-resolution
---
# Dataset Card for Div2k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://data.vision.ee.ethz.ch/cvl/DIV2K/
- **Repository**: https://huggingface.co/datasets/eugenesiow/Div2k
- **Paper**: http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
DIV2K is a dataset of RGB images (2K resolution high quality images) with a large diversity of contents.
The DIV2K dataset is divided into:
- train data: starting from 800 high-definition, high-resolution images, the corresponding low-resolution images are obtained, and both high- and low-resolution images are provided for the ×2, ×3, and ×4 downscaling factors
- validation data: 100 high-definition, high-resolution images are used for generating the corresponding low-resolution images; the low-resolution images are provided from the beginning of the challenge and are meant for the participants to get online feedback from the validation server; the high-resolution images will be released when the final phase of the challenge starts.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for training and evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows (image `0801` belongs to the validation partition).
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/DIV2K_valid_HR/0801.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/DIV2K_valid_LR_bicubic/X2/0801x2.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
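The `lr`/`hr` paths follow a simple naming convention visible in the example above: the HR file `0801.png` pairs with the LR file `0801x2.png` inside the `X2` folder. A hedged sketch of that mapping (`lr_name_for` is a hypothetical helper for illustration, not part of the dataset loader):

```python
import os

def lr_name_for(hr_path: str, scale: int) -> str:
    # "0801.png" with scale 2 -> "0801x2.png"; the X{scale} directory
    # prefix is left to the caller.
    stem, ext = os.path.splitext(os.path.basename(hr_path))
    return f"{stem}x{scale}{ext}"

print(lr_name_for("DIV2K_valid_HR/0801.png", 2))  # 0801x2.png
```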
### Data Splits
| name |train |validation|
|-------|-----:|---:|
|bicubic_x2|800|100|
|bicubic_x3|800|100|
|bicubic_x4|800|100|
|bicubic_x8|800|100|
|unknown_x2|800|100|
|unknown_x3|800|100|
|unknown_x4|800|100|
|realistic_mild_x4|800|100|
|realistic_difficult_x4|800|100|
|realistic_wild_x4|800|100|
## Dataset Creation
### Curation Rationale
Please refer to the [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) section.
### Source Data
#### Initial Data Collection and Normalization
**Resolution and quality**: All the images are 2K resolution, that is, they have 2K pixels on at least one of the
axes (vertical or horizontal). All the images were processed using the same tools. For simplicity, since the most
common magnification factors in the recent SR literature are ×2, ×3 and ×4, we cropped the images to a multiple of
12 pixels on both axes. Most of the crawled images were originally above 20M pixels.
The images are of high quality both aesthetically and in terms of small amounts of noise and other corruptions
(like blur and color shifts).
**Diversity**: The authors collected images from dozens of sites, with a preference for sites with freely
shared high-quality photography (such as https://www.pexels.com/). Images from Flickr, Instagram, or other
legally binding or copyright-restricted sources were not used. Keywords were used only sparingly, to assure the diversity
of the dataset. DIV2K covers a large diversity of contents, ranging from people, handmade objects and environments
(cities, villages), to flora and fauna, and natural sceneries including underwater and dim-light conditions.
**Partitions**: After collecting the DIV2K 1000 images, the authors computed image entropy, bits-per-pixel (bpp) PNG
compression rates and CORNIA scores (see Section 7.6 of the paper), and applied bicubic downscaling ×3 and then upscaling ×3 with
bicubic interpolation (the Matlab imresize function), and the ANR [47] and A+ [48] methods with default settings.
The authors randomly generated partitions of 800 train, 100 validation and 100 test images until they achieved a good
balance, first in visual contents and then in the average entropy, average bpp, average number of pixels per
image (ppi) and average CORNIA quality scores, and also in the relative differences between the average PSNR scores of
the bicubic, ANR and A+ methods.
Only the 800 train and 100 validation images are included in this dataset.
#### Who are the source language producers?
The authors manually crawled 1000 color RGB images from Internet paying special attention to the image quality,
to the diversity of sources (sites and cameras), to the image contents and to the copyrights.
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
All the images are collected from the Internet, and the copyright belongs to the original owners. If any of the images
belongs to you and you would like it removed, please kindly inform the authors, and they will remove it from the dataset
immediately.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Author**: [Radu Timofte](http://people.ee.ethz.ch/~timofter/)
### Licensing Information
Please notice that this dataset is made available for academic research purpose only. All the images are
collected from the Internet, and the copyright belongs to the original owners. If any of the images belongs to
you and you would like it removed, please kindly inform the authors, and they will remove it from the dataset
immediately.
### Citation Information
```bibtex
@InProceedings{Agustsson_2017_CVPR_Workshops,
author = {Agustsson, Eirikur and Timofte, Radu},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
url = "http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf",
month = {July},
year = {2017}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
|
visheratin/laion-coco-nllb | 2023-09-20T04:00:48.000Z | [
"task_categories:image-to-text",
"task_categories:translation",
"size_categories:100K<n<1M",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:af",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ar",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:be",
"language:bem",
"language:bn",
"language:bho",
"language:bjn",
"language:bo",
"language:bs",
"language:bug",
"language:bg",
"language:ca",
"language:ceb",
"language:cs",
"language:cjk",
"language:ckb",
"language:crh",
"language:cy",
"language:da",
"language:de",
"language:dik",
"language:dyu",
"language:dz",
"language:el",
"language:en",
"language:eo",
"language:et",
"language:eu",
"language:ee",
"language:fo",
"language:fj",
"language:fi",
"language:fon",
"language:fr",
"language:fur",
"language:fuv",
"language:gaz",
"language:gd",
"language:ga",
"language:gl",
"language:gn",
"language:gu",
"language:ht",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hu",
"language:hy",
"language:ig",
"language:ilo",
"language:id",
"language:is",
"language:it",
"language:jv",
"language:ja",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:ka",
"language:kk",
"language:kbp",
"language:kea",
"language:khk",
"language:km",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:ko",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lt",
"language:lmo",
"language:ltg",
"language:lb",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:ml",
"language:mar",
"language:min",
"language:mk",
"language:mt",
"language:mni",
"language:mos",
"language:mi",
"language:my",
"language:nl",
"language:nn",
"language:nb",
"language:npi",
"language:nso",
"language:nus",
"language:ny",
"language:oc",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:pl",
"language:pt",
"language:prs",
"language:quy",
"language:ro",
"language:rn",
"language:ru",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:es",
"language:sc",
"language:sr",
"language:ss",
"language:su",
"language:sv",
"language:swh",
"language:szl",
"language:ta",
"language:taq",
"language:tt",
"language:te",
"language:tg",
"language:tl",
"language:th",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tk",
"language:tum",
"language:tr",
"language:tw",
"language:tzm",
"language:ug",
"language:uk",
"language:umb",
"language:ur",
"language:uzn",
"language:vec",
"language:vi",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zh",
"language:zsm",
"language:zu",
"license:cc-by-nc-4.0",
"arxiv:2309.01859",
"doi:10.57967/hf/1006",
"region:us"
] | visheratin | null | null | null | 12 | 885 | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
license: cc-by-nc-4.0
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
- translation
pretty_name: LAION-COCO translated to 200 languages
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: eng_caption
dtype: string
- name: captions
sequence:
sequence: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 289987047
num_examples: 15937
- name: train
num_bytes: 3659435447
num_examples: 200687
download_size: 2512641787
dataset_size: 3949422494
language_details: ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng,
ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl,
bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn,
bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn,
dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn,
est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn,
fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn,
ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn,
kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn,
kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn,
kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn,
lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn,
mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn,
nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya,
pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn,
ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr,
sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn,
spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn,
szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn,
twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn,
vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans,
zho_Hant, zul_Latn
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# LAION COCO translated into 200 languages
This dataset contains samples from the [LAION-COCO](https://huggingface.co/datasets/laion/laion-coco) dataset with captions translated into 200 languages using
the largest [NLLB-200 model](https://huggingface.co/facebook/nllb-200-3.3B) (3.3B parameters).
## Fields description
1. `id` - unique ID of the image.
2. `url` - original URL of the image from the LAION-COCO dataset.
3. `eng_caption` - original English caption from the LAION-COCO dataset.
4. `captions` - a list of captions translated into the languages from the Flores 200 dataset. Every item in the list is a list where the first element is a FLORES-200 language code (e.g. `fra_Latn`), and the second one is a caption in this language. The list of all language codes for the Flores 200 dataset can be found [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
5. `score` - aesthetic score generated using [LAION aesthetic predictor](https://github.com/christophschuhmann/improved-aesthetic-predictor/). The images in the dataset have the score of 4.5+.
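Since `captions` is a nested list rather than a mapping, a small helper is handy for pulling out one language. A sketch with made-up captions (`caption_for` is a hypothetical helper; real rows carry all 200 languages):

```python
def caption_for(captions, lang_code):
    # `captions` is a list of [language_code, caption] pairs.
    for code, text in captions:
        if code == lang_code:
            return text
    return None  # language not present

sample_captions = [
    ["fra_Latn", "Un chat assis sur une chaise"],
    ["deu_Latn", "Eine Katze sitzt auf einem Stuhl"],
]
print(caption_for(sample_captions, "deu_Latn"))
```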
## Images
The dataset was filtered to contain only working image URLs. However, the availability may change in the future. Because of that, all images from this dataset are available at [https://nllb-data.com/](https://nllb-data.com/).
To get the image, use the following format:
```
https://nllb-data.com/{id}.jpg
```
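A one-line sketch of that URL scheme (the `image_id` value here is hypothetical; real ids come from the `id` field):

```python
def image_url(image_id: str) -> str:
    # Mirrors the https://nllb-data.com/{id}.jpg pattern above.
    return f"https://nllb-data.com/{image_id}.jpg"

print(image_url("abc123"))  # https://nllb-data.com/abc123.jpg
```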
## Paper
The dataset was used to train the models in the paper: "[NLLB-CLIP - train performant multilingual image retrieval model on a budget](https://arxiv.org/abs/2309.01859)". |
WizardLM/WizardLM_evol_instruct_70k | 2023-08-24T03:59:32.000Z | [
"arxiv:2308.09583",
"arxiv:2304.12244",
"arxiv:2306.08568",
"region:us"
] | WizardLM | null | null | null | 106 | 882 | This is the training data of WizardLM.
## News
- 🔥 🔥 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | || |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font> |
mariosasko/test_multi_dir_dataset | 2022-02-25T17:58:58.000Z | [
"region:us"
] | mariosasko | null | null | null | 0 | 879 | Entry not found |
tasksource/oasst1_pairwise_rlhf_reward | 2023-07-04T17:47:46.000Z | [
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",
"language:nl",
"language:hu",
"language:eu",
"language:zh",
"language:eo",
"language:ja",
"language:ca",
"language:cs",
"language:bg",
"language:fi",
"language:pt",
"language:tr",
"language:ro",
"language:ar",
"language:uk",
"language:gl",
"language:fr",
"language:ko",
"region:us"
] | tasksource | null | null | null | 19 | 877 | ---
dataset_info:
features:
- name: lang
dtype: string
- name: parent_id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 40736437
num_examples: 17966
- name: validation
num_bytes: 2152443
num_examples: 952
download_size: 22371458
dataset_size: 42888880
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
---
# Dataset Card for "oasst1_pairwise_rlhf_reward"
[OASST1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) preprocessed for reward modeling:
```python
import pandas as pd
from datasets import load_dataset, concatenate_datasets, Dataset, DatasetDict
import numpy as np

dataset = load_dataset("OpenAssistant/oasst1")
df = concatenate_datasets(list(dataset.values())).to_pandas()

# Lookup tables: message id -> text, role, and parent message id.
m2t = df.set_index("message_id")["text"].to_dict()
m2r = df.set_index("message_id")["role"].to_dict()
m2p = df.set_index("message_id")["parent_id"].to_dict()

# Unroll each message's ancestry into a "role: text" dialogue history.
m2history = dict()
for k in m2p:
    history = [k]
    while history[-1] in m2p:
        history += [m2p[history[-1]]]
    m2history[k] = "\n".join(f"{m2r[m]}: {m2t[m]}" for m in history[::-1] if m)

d = dict()
for split in "train", "validation":
    df = dataset[split].to_pandas()
    df["prompt"] = df.parent_id.map(lambda x: m2history.get(x, ""))
    df = df[~df["rank"].isna()]  # keep only ranked replies

    def agg(x):
        # Keep the first and last reply of each group.
        x = list(x)
        return [x[0], x[-1]]

    df = df.groupby(["prompt", "parent_id", "lang"])[["text", "rank"]].agg(agg).reset_index()
    df = df[df["rank"].map(lambda x: len(set(x)) > 1)]  # drop tied ranks
    df["chosen"] = df.apply(lambda x: x["text"][np.argmin(x["rank"])], axis=1)
    df["rejected"] = df.apply(lambda x: x["text"][np.argmax(x["rank"])], axis=1)
    d[split] = Dataset.from_pandas(
        df[["lang", "parent_id", "prompt", "chosen", "rejected"]], preserve_index=False
    )

DatasetDict(d).push_to_hub("tasksource/oasst1_pairwise_rlhf_reward")
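# The chosen/rejected pairs built above are typically consumed by a pairwise
# (Bradley-Terry style) reward-modeling loss. Illustrative sketch only, not
# part of the original preprocessing; the scores below are hypothetical.
import math

def pairwise_reward_loss(chosen_scores, rejected_scores):
    # Mean over pairs of -log sigmoid(r_chosen - r_rejected).
    losses = [math.log1p(math.exp(-(c - r)))
              for c, r in zip(chosen_scores, rejected_scores)]
    return sum(losses) / len(losses)

# The loss shrinks when chosen answers are scored above rejected ones.
assert pairwise_reward_loss([2.0, 1.5], [0.5, 1.0]) < pairwise_reward_loss([0.5, 1.0], [2.0, 1.5])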
``` |
empathetic_dialogues | 2023-04-05T10:05:17.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"task_ids:dialogue-generation",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:1811.00207",
"region:us"
] | null | PyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset | @inproceedings{rashkin2019towards,
title = {Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset},
author = {Hannah Rashkin and Eric Michael Smith and Margaret Li and Y-Lan Boureau},
booktitle = {ACL},
year = {2019},
} | null | 52 | 874 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: EmpatheticDialogues
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conversational
- question-answering
task_ids:
- dialogue-generation
- open-domain-qa
paperswithcode_id: empatheticdialogues
dataset_info:
features:
- name: conv_id
dtype: string
- name: utterance_idx
dtype: int32
- name: context
dtype: string
- name: prompt
dtype: string
- name: speaker_idx
dtype: int32
- name: utterance
dtype: string
- name: selfeval
dtype: string
- name: tags
dtype: string
splits:
- name: test
num_bytes: 3011332
num_examples: 10943
- name: train
num_bytes: 19040509
num_examples: 76673
- name: validation
num_bytes: 3077481
num_examples: 12030
download_size: 28022709
dataset_size: 25129322
---
# Dataset Card for "empathetic_dialogues"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/facebookresearch/EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues)
- **Repository:** https://github.com/facebookresearch/EmpatheticDialogues
- **Paper:** [Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset](https://arxiv.org/abs/1811.00207)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 28.02 MB
- **Size of the generated dataset:** 25.13 MB
- **Total amount of disk used:** 53.15 MB
### Dataset Summary
PyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 28.02 MB
- **Size of the generated dataset:** 25.13 MB
- **Total amount of disk used:** 53.15 MB
An example of 'train' looks as follows.
```
{
"context": "sentimental",
"conv_id": "hit:0_conv:1",
"prompt": "I remember going to the fireworks with my best friend. There was a lot of people_comma_ but it only felt like us in the world.",
"selfeval": "5|5|5_2|2|5",
"speaker_idx": 1,
"tags": "",
"utterance": "I remember going to see the fireworks with my best friend. It was the first time we ever spent time alone together. Although there was a lot of people_comma_ we felt like the only people in the world.",
"utterance_idx": 1
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `conv_id`: a `string` feature.
- `utterance_idx`: a `int32` feature.
- `context`: a `string` feature.
- `prompt`: a `string` feature.
- `speaker_idx`: a `int32` feature.
- `utterance`: a `string` feature.
- `selfeval`: a `string` feature.
- `tags`: a `string` feature.
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|76673| 12030|10943|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons [Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{rashkin-etal-2019-towards,
title = "Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset",
author = "Rashkin, Hannah and
Smith, Eric Michael and
Li, Margaret and
Boureau, Y-Lan",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1534",
doi = "10.18653/v1/P19-1534",
pages = "5370--5381",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
facebook/multilingual_librispeech | 2023-02-13T11:33:31.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:nl",
"language:fr",
"language:it",
"language:es",
"language:pt",
"language:pl",
"license:cc-by-4.0",
"arxiv:2012.03411",
"region:us"
] | facebook | This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94)
to make it easier to stream.
MLS dataset is a large multilingual corpus suitable for speech research.
The dataset is derived from read audiobooks from LibriVox and consists of 8 languages:
English, German, Dutch, Spanish, French, Italian, Portuguese, Polish. | @article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
} | null | 27 | 865 | ---
pretty_name: MultiLingual LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- de
- nl
- fr
- it
- es
- pt
- pl
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: multilingual-librispeech
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
---
# Dataset Card for MultiLingual LibriSpeech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
- **Repository:** [Needs More Information]
- **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
- **Leaderboard:** [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=facebook%2Fmultilingual_librispeech&only_verified=0&task=automatic-speech-recognition&config=-unspecified-&split=-unspecified-&metric=wer)
### Dataset Summary
This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.
The MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
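As a rough illustration of the WER metric mentioned above, here is a from-scratch sketch (in practice a dedicated library such as `jiwer` or the `evaluate` package is typically used):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))     # 0.0
print(word_error_rate("the cat sat", "a cat sat down"))  # 2 edits / 3 words
```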
### Languages
The dataset is derived from read audiobooks from LibriVox and consists of 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish.
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
print(next(iter(mls)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly from your own dataset, whether local or streamed.
Local:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on MultiLingual Librispeech with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file`, and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'file': '10900_6473_000030.flac',
'audio': {'path': '10900_6473_000030.flac',
'array': array([-1.52587891e-04, 6.10351562e-05, 0.00000000e+00, ...,
4.27246094e-04, 5.49316406e-04, 4.57763672e-04]),
'sampling_rate': 16000},
'text': 'więc czego chcecie odemnie spytałem wysłuchawszy tego zadziwiającego opowiadania broń nas stary człowieku broń zakrzyknęli równocześnie obaj posłowie\n',
'speaker_id': 10900,
'chapter_id': 6473,
'id': '10900_6473_000030'}
```
### Data Fields
- file: A filename .flac format.
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
### Data Splits
| | Train | Train.9h | Train.1h | Dev | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| german | 469942 | 2194 | 241 | 3469 | 3394 |
| dutch | 374287 | 2153 | 234 | 3095 | 3075 |
| french | 258213 | 2167 | 241 | 2416 | 2426 |
| spanish | 220701 | 2110 | 233 | 2408 | 2385 |
| italian | 59623 | 2173 | 240 | 1248 | 1262 |
| portuguese | 37533 | 2116 | 236 | 826 | 871 |
| polish | 25043 | 2173 | 238 | 512 | 520 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten)
and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
lj_speech | 2022-11-03T16:16:34.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unlicense",
"region:us"
] | null | This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading
passages from 7 non-fiction books in English. A transcription is provided for each clip. Clips vary in length
from 1 to 10 seconds and have a total length of approximately 24 hours.
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .wav format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @misc{ljspeech17,
author = {Keith Ito and Linda Johnson},
title = {The LJ Speech Dataset},
howpublished = {\\url{https://keithito.com/LJ-Speech-Dataset/}},
year = 2017
} | null | 8 | 863 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unlicense
multilinguality:
- monolingual
paperswithcode_id: ljspeech
pretty_name: LJ Speech
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
train-eval-index:
- config: main
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
col_mapping:
file: path
text: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: file
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
config_name: main
splits:
- name: train
num_bytes: 4667022
num_examples: 13100
download_size: 2748572632
dataset_size: 4667022
---
# Dataset Card for lj_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The LJ Speech Dataset](https://keithito.com/LJ-Speech-Dataset/)
- **Repository:** [N/A]
- **Paper:** [N/A]
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech)
- **Point of Contact:** [Keith Ito](mailto:kito@kito.us)
### Dataset Summary
This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books in English. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Automatic Speech Recognition (ASR) or Text-to-Speech (TTS).
- `other:automatic-speech-recognition`: An ASR model is presented with an audio file and asked to transcribe the audio file to written text.
The most common ASR evaluation metric is the word error rate (WER).
- `other:text-to-speech`: A TTS model is given a written text in natural language and asked to generate a speech audio file.
A reasonable evaluation metric is the mean opinion score (MOS) of audio quality.
The dataset has an active leaderboard which can be found at https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech
### Languages
The transcriptions and audio are in English.
## Dataset Structure
### Data Instances
A data point comprises the path to the audio file, called `file`, and its transcription, called `text`.
A normalized version of the text is also provided.
```
{
'id': 'LJ002-0026',
'file': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
'audio': {'path': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 22050},
  'text': 'in the three years between 1813 and 1816,',
'normalized_text': 'in the three years between eighteen thirteen and eighteen sixteen,',
}
```
Each audio file is a single-channel 16-bit PCM WAV with a sample rate of 22050 Hz.
### Data Fields
- id: unique id of the data sample.
- file: a path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- normalized_text: the transcription with numbers, ordinals, and monetary units expanded into full words.
### Data Splits
The dataset is not pre-split. Some statistics:
- Total Clips: 13,100
- Total Words: 225,715
- Total Characters: 1,308,678
- Total Duration: 23:55:17
- Mean Clip Duration: 6.57 sec
- Min Clip Duration: 1.11 sec
- Max Clip Duration: 10.10 sec
- Mean Words per Clip: 17.23
- Distinct Words: 13,821
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
This dataset consists of excerpts from the following works:
- Morris, William, et al. Arts and Crafts Essays. 1893.
- Griffiths, Arthur. The Chronicles of Newgate, Vol. 2. 1884.
- Roosevelt, Franklin D. The Fireside Chats of Franklin Delano Roosevelt. 1933-42.
- Harland, Marion. Marion Harland's Cookery for Beginners. 1893.
- Rolt-Wheeler, Francis. The Science - History of the Universe, Vol. 5: Biology. 1910.
- Banks, Edgar J. The Seven Wonders of the Ancient World. 1916.
- President's Commission on the Assassination of President Kennedy. Report of the President's Commission on the Assassination of President Kennedy. 1964.
Some details about normalization:
- The normalized transcription has the numbers, ordinals, and monetary units expanded into full words (UTF-8)
- 19 of the transcriptions contain non-ASCII characters (for example, LJ016-0257 contains "raison d'être").
- The following abbreviations appear in the text. They may be expanded as follows:
| Abbreviation | Expansion |
|--------------|-----------|
| Mr. | Mister |
| Mrs. | Misess (*) |
| Dr. | Doctor |
| No. | Number |
| St. | Saint |
| Co. | Company |
| Jr. | Junior |
| Maj. | Major |
| Gen. | General |
| Drs. | Doctors |
| Rev. | Reverend |
| Lt. | Lieutenant |
| Hon. | Honorable |
| Sgt. | Sergeant |
| Capt. | Captain |
| Esq. | Esquire |
| Ltd. | Limited |
| Col. | Colonel |
| Ft. | Fort |
(*) there's no standard expansion for "Mrs."
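A minimal sketch of how these expansions could be applied during text normalization (the mapping mirrors the table above; the `expand_abbreviations` helper itself is illustrative and not part of the dataset tooling):

```python
import re

ABBREVIATIONS = {
    "Mr.": "Mister", "Mrs.": "Misess", "Dr.": "Doctor", "No.": "Number",
    "St.": "Saint", "Co.": "Company", "Jr.": "Junior", "Maj.": "Major",
    "Gen.": "General", "Drs.": "Doctors", "Rev.": "Reverend",
    "Lt.": "Lieutenant", "Hon.": "Honorable", "Sgt.": "Sergeant",
    "Capt.": "Captain", "Esq.": "Esquire", "Ltd.": "Limited",
    "Col.": "Colonel", "Ft.": "Fort",
}
# Try longest keys first so "Drs." is not matched as "Dr." plus "s.".
_PATTERN = re.compile("|".join(
    re.escape(k) for k in sorted(ABBREVIATIONS, key=len, reverse=True)))

def expand_abbreviations(text: str) -> str:
    return _PATTERN.sub(lambda m: ABBREVIATIONS[m.group(0)], text)

print(expand_abbreviations("Dr. Smith met Col. Jones at Ft. Knox."))
# Doctor Smith met Colonel Jones at Fort Knox.
```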
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
- The audio clips range in length from approximately 1 second to 10 seconds. They were segmented automatically based on silences in the recording. Clip boundaries generally align with sentence or clause boundaries, but not always.
- The text was matched to the audio manually, and a QA pass was done to ensure that the text accurately matched the words spoken in the audio.
#### Who are the annotators?
Recordings by Linda Johnson from LibriVox. Alignment and annotation by Keith Ito.
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
- The original LibriVox recordings were distributed as 128 kbps MP3 files. As a result, they may contain artifacts introduced by the MP3 encoding.
## Additional Information
### Dataset Curators
The dataset was initially created by Keith Ito and Linda Johnson.
### Licensing Information
Public Domain ([LibriVox](https://librivox.org/pages/public-domain/))
### Citation Information
```
@misc{ljspeech17,
author = {Keith Ito and Linda Johnson},
title = {The LJ Speech Dataset},
howpublished = {\url{https://keithito.com/LJ-Speech-Dataset/}},
year = 2017
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset. |
Multimodal-Fatima/StanfordCars_train | 2023-06-12T06:26:48.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 862 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': am general hummer suv 2000
'1': acura rl sedan 2012
'2': acura tl sedan 2012
'3': acura tl type-s 2008
'4': acura tsx sedan 2012
'5': acura integra type r 2001
'6': acura zdx hatchback 2012
'7': aston martin v8 vantage convertible 2012
'8': aston martin v8 vantage coupe 2012
'9': aston martin virage convertible 2012
'10': aston martin virage coupe 2012
'11': audi rs 4 convertible 2008
'12': audi a5 coupe 2012
'13': audi tts coupe 2012
'14': audi r8 coupe 2012
'15': audi v8 sedan 1994
'16': audi 100 sedan 1994
'17': audi 100 wagon 1994
'18': audi tt hatchback 2011
'19': audi s6 sedan 2011
'20': audi s5 convertible 2012
'21': audi s5 coupe 2012
'22': audi s4 sedan 2012
'23': audi s4 sedan 2007
'24': audi tt rs coupe 2012
'25': bmw activehybrid 5 sedan 2012
'26': bmw 1 series convertible 2012
'27': bmw 1 series coupe 2012
'28': bmw 3 series sedan 2012
'29': bmw 3 series wagon 2012
'30': bmw 6 series convertible 2007
'31': bmw x5 suv 2007
'32': bmw x6 suv 2012
'33': bmw m3 coupe 2012
'34': bmw m5 sedan 2010
'35': bmw m6 convertible 2010
'36': bmw x3 suv 2012
'37': bmw z4 convertible 2012
'38': bentley continental supersports conv. convertible 2012
'39': bentley arnage sedan 2009
'40': bentley mulsanne sedan 2011
'41': bentley continental gt coupe 2012
'42': bentley continental gt coupe 2007
'43': bentley continental flying spur sedan 2007
'44': bugatti veyron 16.4 convertible 2009
'45': bugatti veyron 16.4 coupe 2009
'46': buick regal gs 2012
'47': buick rainier suv 2007
'48': buick verano sedan 2012
'49': buick enclave suv 2012
'50': cadillac cts-v sedan 2012
'51': cadillac srx suv 2012
'52': cadillac escalade ext crew cab 2007
'53': chevrolet silverado 1500 hybrid crew cab 2012
'54': chevrolet corvette convertible 2012
'55': chevrolet corvette zr1 2012
'56': chevrolet corvette ron fellows edition z06 2007
'57': chevrolet traverse suv 2012
'58': chevrolet camaro convertible 2012
'59': chevrolet hhr ss 2010
'60': chevrolet impala sedan 2007
'61': chevrolet tahoe hybrid suv 2012
'62': chevrolet sonic sedan 2012
'63': chevrolet express cargo van 2007
'64': chevrolet avalanche crew cab 2012
'65': chevrolet cobalt ss 2010
'66': chevrolet malibu hybrid sedan 2010
'67': chevrolet trailblazer ss 2009
'68': chevrolet silverado 2500hd regular cab 2012
'69': chevrolet silverado 1500 classic extended cab 2007
'70': chevrolet express van 2007
'71': chevrolet monte carlo coupe 2007
'72': chevrolet malibu sedan 2007
'73': chevrolet silverado 1500 extended cab 2012
'74': chevrolet silverado 1500 regular cab 2012
'75': chrysler aspen suv 2009
'76': chrysler sebring convertible 2010
'77': chrysler town and country minivan 2012
'78': chrysler 300 srt-8 2010
'79': chrysler crossfire convertible 2008
'80': chrysler pt cruiser convertible 2008
'81': daewoo nubira wagon 2002
'82': dodge caliber wagon 2012
'83': dodge caliber wagon 2007
'84': dodge caravan minivan 1997
'85': dodge ram pickup 3500 crew cab 2010
'86': dodge ram pickup 3500 quad cab 2009
'87': dodge sprinter cargo van 2009
'88': dodge journey suv 2012
'89': dodge dakota crew cab 2010
'90': dodge dakota club cab 2007
'91': dodge magnum wagon 2008
'92': dodge challenger srt8 2011
'93': dodge durango suv 2012
'94': dodge durango suv 2007
'95': dodge charger sedan 2012
'96': dodge charger srt-8 2009
'97': eagle talon hatchback 1998
'98': fiat 500 abarth 2012
'99': fiat 500 convertible 2012
'100': ferrari ff coupe 2012
'101': ferrari california convertible 2012
'102': ferrari 458 italia convertible 2012
'103': ferrari 458 italia coupe 2012
'104': fisker karma sedan 2012
'105': ford f-450 super duty crew cab 2012
'106': ford mustang convertible 2007
'107': ford freestar minivan 2007
'108': ford expedition el suv 2009
'109': ford edge suv 2012
'110': ford ranger supercab 2011
'111': ford gt coupe 2006
'112': ford f-150 regular cab 2012
'113': ford f-150 regular cab 2007
'114': ford focus sedan 2007
'115': ford e-series wagon van 2012
'116': ford fiesta sedan 2012
'117': gmc terrain suv 2012
'118': gmc savana van 2012
'119': gmc yukon hybrid suv 2012
'120': gmc acadia suv 2012
'121': gmc canyon extended cab 2012
'122': geo metro convertible 1993
'123': hummer h3t crew cab 2010
'124': hummer h2 sut crew cab 2009
'125': honda odyssey minivan 2012
'126': honda odyssey minivan 2007
'127': honda accord coupe 2012
'128': honda accord sedan 2012
'129': hyundai veloster hatchback 2012
'130': hyundai santa fe suv 2012
'131': hyundai tucson suv 2012
'132': hyundai veracruz suv 2012
'133': hyundai sonata hybrid sedan 2012
'134': hyundai elantra sedan 2007
'135': hyundai accent sedan 2012
'136': hyundai genesis sedan 2012
'137': hyundai sonata sedan 2012
'138': hyundai elantra touring hatchback 2012
'139': hyundai azera sedan 2012
'140': infiniti g coupe ipl 2012
'141': infiniti qx56 suv 2011
'142': isuzu ascender suv 2008
'143': jaguar xk xkr 2012
'144': jeep patriot suv 2012
'145': jeep wrangler suv 2012
'146': jeep liberty suv 2012
'147': jeep grand cherokee suv 2012
'148': jeep compass suv 2012
'149': lamborghini reventon coupe 2008
'150': lamborghini aventador coupe 2012
'151': lamborghini gallardo lp 570-4 superleggera 2012
'152': lamborghini diablo coupe 2001
'153': land rover range rover suv 2012
'154': land rover lr2 suv 2012
'155': lincoln town car sedan 2011
'156': mini cooper roadster convertible 2012
'157': maybach landaulet convertible 2012
'158': mazda tribute suv 2011
'159': mclaren mp4-12c coupe 2012
'160': mercedes-benz 300-class convertible 1993
'161': mercedes-benz c-class sedan 2012
'162': mercedes-benz sl-class coupe 2009
'163': mercedes-benz e-class sedan 2012
'164': mercedes-benz s-class sedan 2012
'165': mercedes-benz sprinter van 2012
'166': mitsubishi lancer sedan 2012
'167': nissan leaf hatchback 2012
'168': nissan nv passenger van 2012
'169': nissan juke hatchback 2012
'170': nissan 240sx coupe 1998
'171': plymouth neon coupe 1999
'172': porsche panamera sedan 2012
'173': ram c/v cargo van minivan 2012
'174': rolls-royce phantom drophead coupe convertible 2012
'175': rolls-royce ghost sedan 2012
'176': rolls-royce phantom sedan 2012
'177': scion xd hatchback 2012
'178': spyker c8 convertible 2009
'179': spyker c8 coupe 2009
'180': suzuki aerio sedan 2007
'181': suzuki kizashi sedan 2012
'182': suzuki sx4 hatchback 2012
'183': suzuki sx4 sedan 2012
'184': tesla model s sedan 2012
'185': toyota sequoia suv 2012
'186': toyota camry sedan 2012
'187': toyota corolla sedan 2012
'188': toyota 4runner suv 2012
'189': volkswagen golf hatchback 2012
'190': volkswagen golf hatchback 1991
'191': volkswagen beetle hatchback 2012
'192': volvo c30 hatchback 2012
'193': volvo 240 sedan 1993
'194': volvo xc90 suv 2007
'195': smart fortwo convertible 2012
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_stanfordcars
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
splits:
- name: train
num_bytes: 1016273762.0
num_examples: 8144
download_size: 991440998
dataset_size: 1016273762.0
---
# Dataset Card for "StanfordCars_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
natural_questions | 2023-04-05T13:35:01.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | null | The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions should read an entire page to find the answer, cause
NQ to be a more realistic and challenging task than prior QA datasets. | @article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association of Computational Linguistics}
} | null | 21 | 861 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: natural-questions
pretty_name: Natural Questions
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: title
dtype: string
- name: url
dtype: string
- name: html
dtype: string
- name: tokens
sequence:
- name: token
dtype: string
- name: is_html
dtype: bool
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: start_token
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: end_byte
dtype: int64
- name: short_answers
sequence:
- name: start_token
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: end_byte
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
- name: long_answer_candidates
sequence:
- name: start_token
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: end_byte
dtype: int64
- name: top_level
dtype: bool
splits:
- name: train
num_bytes: 97445142568
num_examples: 307373
- name: validation
num_bytes: 2353975312
num_examples: 7830
download_size: 45069199013
dataset_size: 99799117880
---
# Dataset Card for Natural Questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset)
- **Repository:** [https://github.com/google-research-datasets/natural-questions](https://github.com/google-research-datasets/natural-questions)
- **Paper:** [https://research.google/pubs/pub47761/](https://research.google/pubs/pub47761/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 45.07 GB
- **Size of the generated dataset:** 99.80 GB
- **Total amount of disk used:** 144.87 GB
### Dataset Summary
The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions should read an entire page to find the answer, cause
NQ to be a more realistic and challenging task than prior QA datasets.
### Supported Tasks and Leaderboards
[https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions)
### Languages
en
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 45.07 GB
- **Size of the generated dataset:** 99.80 GB
- **Total amount of disk used:** 144.87 GB
An example of 'train' looks as follows. This is a toy example.
```
{
"id": "797803103760793766",
"document": {
"title": "Google",
"url": "http://www.wikipedia.org/Google",
"html": "<html><body><h1>Google Inc.</h1><p>Google was founded in 1998 By:<ul><li>Larry</li><li>Sergey</li></ul></p></body></html>",
"tokens":[
{"token": "<h1>", "start_byte": 12, "end_byte": 16, "is_html": True},
{"token": "Google", "start_byte": 16, "end_byte": 22, "is_html": False},
{"token": "inc", "start_byte": 23, "end_byte": 26, "is_html": False},
{"token": ".", "start_byte": 26, "end_byte": 27, "is_html": False},
{"token": "</h1>", "start_byte": 27, "end_byte": 32, "is_html": True},
{"token": "<p>", "start_byte": 32, "end_byte": 35, "is_html": True},
{"token": "Google", "start_byte": 35, "end_byte": 41, "is_html": False},
{"token": "was", "start_byte": 42, "end_byte": 45, "is_html": False},
{"token": "founded", "start_byte": 46, "end_byte": 53, "is_html": False},
{"token": "in", "start_byte": 54, "end_byte": 56, "is_html": False},
{"token": "1998", "start_byte": 57, "end_byte": 61, "is_html": False},
{"token": "by", "start_byte": 62, "end_byte": 64, "is_html": False},
{"token": ":", "start_byte": 64, "end_byte": 65, "is_html": False},
{"token": "<ul>", "start_byte": 65, "end_byte": 69, "is_html": True},
{"token": "<li>", "start_byte": 69, "end_byte": 73, "is_html": True},
{"token": "Larry", "start_byte": 73, "end_byte": 78, "is_html": False},
{"token": "</li>", "start_byte": 78, "end_byte": 83, "is_html": True},
{"token": "<li>", "start_byte": 83, "end_byte": 87, "is_html": True},
{"token": "Sergey", "start_byte": 87, "end_byte": 93, "is_html": False},
{"token": "</li>", "start_byte": 93, "end_byte": 98, "is_html": True},
{"token": "</ul>", "start_byte": 98, "end_byte": 103, "is_html": True},
{"token": "</p>", "start_byte": 103, "end_byte": 107, "is_html": True}
],
},
"question": {
"text": "who founded google",
"tokens": ["who", "founded", "google"]
},
"long_answer_candidates": [
{"start_byte": 32, "end_byte": 107, "start_token": 5, "end_token": 22, "top_level": True},
{"start_byte": 65, "end_byte": 103, "start_token": 13, "end_token": 21, "top_level": False},
{"start_byte": 69, "end_byte": 83, "start_token": 14, "end_token": 17, "top_level": False},
{"start_byte": 83, "end_byte": 98, "start_token": 17, "end_token": 20, "top_level": False}
],
"annotations": [{
"id": "6782080525527814293",
"long_answer": {"start_byte": 32, "end_byte": 107, "start_token": 5, "end_token": 22, "candidate_index": 0},
"short_answers": [
{"start_byte": 73, "end_byte": 78, "start_token": 15, "end_token": 16, "text": "Larry"},
{"start_byte": 87, "end_byte": 93, "start_token": 18, "end_token": 19, "text": "Sergey"}
],
"yes_no_answer": -1
}]
}
```
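The token offsets in the toy example can be turned back into text by joining the non-HTML tokens of a `[start_token, end_token)` range. A minimal sketch over the toy document above; the `span_text` helper is illustrative, not part of the `datasets` API:

```python
def span_text(tokens, start_token, end_token):
    """Join the non-HTML tokens of a [start_token, end_token) span into plain text."""
    words = [
        tok
        for tok, is_html in zip(
            tokens["token"][start_token:end_token],
            tokens["is_html"][start_token:end_token],
        )
        if not is_html
    ]
    return " ".join(words)

# Tokens of the toy document above.
toy_tokens = {
    "token": ["<h1>", "Google", "inc", ".", "</h1>", "<p>", "Google", "was",
              "founded", "in", "1998", "by", ":", "<ul>", "<li>", "Larry",
              "</li>", "<li>", "Sergey", "</li>", "</ul>", "</p>"],
    "is_html": [True, False, False, False, True, True, False, False, False,
                False, False, False, False, True, True, False, True, True,
                False, True, True, True],
}

print(span_text(toy_tokens, 15, 16))  # first short answer -> "Larry"
print(span_text(toy_tokens, 5, 22))   # long answer -> "Google was founded in 1998 by : Larry Sergey"
```

The `start_byte`/`end_byte` offsets instead index directly into `document.html`; for this ASCII example, `html[73:78]` likewise recovers `Larry`.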
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `document` a dictionary feature containing:
- `title`: a `string` feature.
- `url`: a `string` feature.
- `html`: a `string` feature.
- `tokens`: a dictionary feature containing:
- `token`: a `string` feature.
- `is_html`: a `bool` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `question`: a dictionary feature containing:
- `text`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `long_answer_candidates`: a dictionary feature containing:
- `start_token`: a `int64` feature.
- `end_token`: a `int64` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `top_level`: a `bool` feature.
- `annotations`: a dictionary feature containing:
- `id`: a `string` feature.
- `long_answer`: a dictionary feature containing:
- `start_token`: a `int64` feature.
- `end_token`: a `int64` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `candidate_index`: a `int64` feature.
- `short_answers`: a dictionary feature containing:
- `start_token`: a `int64` feature.
- `end_token`: a `int64` feature.
- `start_byte`: a `int64` feature.
- `end_byte`: a `int64` feature.
- `text`: a `string` feature.
- `yes_no_answer`: a classification label, with possible values including `NO` (0), `YES` (1).
### Data Splits
| name | train | validation |
|---------|-------:|-----------:|
| default | 307373 | 7830 |
| dev | N/A | 7830 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association of Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
bigcode/the-stack-smol-xl | 2023-02-10T17:22:38.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] | bigcode | null | null | null | 3 | 861 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, covering 87 programming languages with 10,000 random samples per language drawn from the original dataset.
## Languages
The dataset contains 87 programming languages:
````
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp','erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell','html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
````
## Dataset Structure
```python
# to load go:
from datasets import load_dataset
load_dataset("bigcode/the-stack-smol-xl", data_dir="data/go")
```
|
DFKI-SLT/cross_ner | 2023-01-19T09:17:38.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:en",
"cross domain",
"ai",
"news",
"music",
"literature",
"politics",
"science",
"arxiv:2012.04373",
"region:us"
] | DFKI-SLT | CrossNER is a fully-labeled collection of named entity recognition (NER) data spanning over five diverse domains
(Politics, Natural Science, Music, Literature, and Artificial Intelligence) with specialized entity categories for
different domains. Additionally, CrossNER also includes unlabeled domain-related corpora for the corresponding five
domains.
For details, see the paper:
[CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373) | @article{liu2020crossner,
title={CrossNER: Evaluating Cross-Domain Named Entity Recognition},
author={Zihan Liu and Yan Xu and Tiezheng Yu and Wenliang Dai and Ziwei Ji and Samuel Cahyawijaya and Andrea Madotto and Pascale Fung},
year={2020},
eprint={2012.04373},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 860 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: CrossNER is a cross-domain dataset for named entity recognition
size_categories:
- 10K<n<100K
source_datasets:
- extended|conll2003
tags:
- cross domain
- ai
- news
- music
- literature
- politics
- science
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
- config_name: ai
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 65080
num_examples: 100
- name: validation
num_bytes: 189453
num_examples: 350
- name: test
num_bytes: 225691
num_examples: 431
download_size: 289173
dataset_size: 480224
- config_name: literature
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 63181
num_examples: 100
- name: validation
num_bytes: 244076
num_examples: 400
- name: test
num_bytes: 270092
num_examples: 416
download_size: 334380
dataset_size: 577349
- config_name: music
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 65077
num_examples: 100
- name: validation
num_bytes: 259702
num_examples: 380
- name: test
num_bytes: 327195
num_examples: 465
download_size: 414065
dataset_size: 651974
- config_name: conll2003
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 3561081
num_examples: 14041
- name: validation
num_bytes: 891431
num_examples: 3250
- name: test
num_bytes: 811470
num_examples: 3453
download_size: 2694794
dataset_size: 5263982
- config_name: politics
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 143507
num_examples: 200
- name: validation
num_bytes: 422760
num_examples: 541
- name: test
num_bytes: 472690
num_examples: 651
download_size: 724168
dataset_size: 1038957
- config_name: science
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 121928
num_examples: 200
- name: validation
num_bytes: 276118
num_examples: 450
- name: test
num_bytes: 334181
num_examples: 543
download_size: 485191
dataset_size: 732227
---
# Dataset Card for CrossNER
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [CrossNER](https://github.com/zliucr/CrossNER)
- **Paper:** [CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373)
### Dataset Summary
CrossNER is a fully-labeled collection of named entity recognition (NER) data spanning five diverse domains
(Politics, Natural Science, Music, Literature, and Artificial Intelligence) with specialized entity categories for
different domains. Additionally, CrossNER also includes unlabeled domain-related corpora for the corresponding five
domains.
For details, see the paper:
[CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in CrossNER is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### conll2003
- **Size of downloaded dataset files:** 2.69 MB
- **Size of the generated dataset:** 5.26 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."],
"ner_tags": [49, 0, 41, 0, 0, 0, 41, 0, 0]
}
```
#### politics
- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 1.04 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Parties", "with", "mainly", "Eurosceptic", "views", "are", "the", "ruling", "United", "Russia", ",", "and", "opposition", "parties", "the", "Communist", "Party", "of", "the", "Russian", "Federation", "and", "Liberal", "Democratic", "Party", "of", "Russia", "."],
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 55, 56, 0, 0, 0, 0, 0, 55, 56, 56, 56, 56, 56, 0, 55, 56, 56, 56, 56, 0]
}
```
#### science
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 0.73 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["They", "may", "also", "use", "Adenosine", "triphosphate", ",", "Nitric", "oxide", ",", "and", "ROS", "for", "signaling", "in", "the", "same", "ways", "that", "animals", "do", "."],
"ner_tags": [0, 0, 0, 0, 15, 16, 0, 15, 16, 0, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
#### music
- **Size of downloaded dataset files:** 0.41 MB
- **Size of the generated dataset:** 0.65 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["In", "2003", ",", "the", "Stade", "de", "France", "was", "the", "primary", "site", "of", "the", "2003", "World", "Championships", "in", "Athletics", "."],
"ner_tags": [0, 0, 0, 0, 35, 36, 36, 0, 0, 0, 0, 0, 0, 29, 30, 30, 30, 30, 0]
}
```
#### literature
- **Size of downloaded dataset files:** 0.33 MB
- **Size of the generated dataset:** 0.58 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["In", "1351", ",", "during", "the", "reign", "of", "Emperor", "Toghon", "Temür", "of", "the", "Yuan", "dynasty", ",", "93rd-generation", "descendant", "Kong", "Huan", "(", "孔浣", ")", "'", "s", "2nd", "son", "Kong", "Shao", "(", "孔昭", ")", "moved", "from", "China", "to", "Korea", "during", "the", "Goryeo", ",", "and", "was", "received", "courteously", "by", "Princess", "Noguk", "(", "the", "Mongolian-born", "wife", "of", "the", "future", "king", "Gongmin", ")", "."],
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 51, 52, 52, 0, 0, 21, 22, 0, 0, 0, 77, 78, 0, 77, 0, 0, 0, 0, 0, 77, 78, 0, 77, 0, 0, 0, 21, 0, 21, 0, 0, 41, 0, 0, 0, 0, 0, 0, 51, 52, 0, 0, 41, 0, 0, 0, 0, 0, 51, 0, 0]
}
```
#### ai
- **Size of downloaded dataset files:** 0.29 MB
- **Size of the generated dataset:** 0.48 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Popular", "approaches", "of", "opinion-based", "recommender", "system", "utilize", "various", "techniques", "including", "text", "mining", ",", "information", "retrieval", ",", "sentiment", "analysis", "(", "see", "also", "Multimodal", "sentiment", "analysis", ")", "and", "deep", "learning", "X.Y.", "Feng", ",", "H.", "Zhang", ",", "Y.J.", "Ren", ",", "P.H.", "Shang", ",", "Y.", "Zhu", ",", "Y.C.", "Liang", ",", "R.C.", "Guan", ",", "D.", "Xu", ",", "(", "2019", ")", ",", ",", "21", "(", "5", ")", ":", "e12957", "."],
"ner_tags": [0, 0, 0, 59, 60, 60, 0, 0, 0, 0, 31, 32, 0, 71, 72, 0, 71, 72, 0, 0, 0, 71, 72, 72, 0, 0, 31, 32, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-academicjournal": 1, "I-academicjournal": 2, "B-album": 3, "I-album": 4, "B-algorithm": 5, "I-algorithm": 6, "B-astronomicalobject": 7, "I-astronomicalobject": 8, "B-award": 9, "I-award": 10, "B-band": 11, "I-band": 12, "B-book": 13, "I-book": 14, "B-chemicalcompound": 15, "I-chemicalcompound": 16, "B-chemicalelement": 17, "I-chemicalelement": 18, "B-conference": 19, "I-conference": 20, "B-country": 21, "I-country": 22, "B-discipline": 23, "I-discipline": 24, "B-election": 25, "I-election": 26, "B-enzyme": 27, "I-enzyme": 28, "B-event": 29, "I-event": 30, "B-field": 31, "I-field": 32, "B-literarygenre": 33, "I-literarygenre": 34, "B-location": 35, "I-location": 36, "B-magazine": 37, "I-magazine": 38, "B-metrics": 39, "I-metrics": 40, "B-misc": 41, "I-misc": 42, "B-musicalartist": 43, "I-musicalartist": 44, "B-musicalinstrument": 45, "I-musicalinstrument": 46, "B-musicgenre": 47, "I-musicgenre": 48, "B-organisation": 49, "I-organisation": 50, "B-person": 51, "I-person": 52, "B-poem": 53, "I-poem": 54, "B-politicalparty": 55, "I-politicalparty": 56, "B-politician": 57, "I-politician": 58, "B-product": 59, "I-product": 60, "B-programlang": 61, "I-programlang": 62, "B-protein": 63, "I-protein": 64, "B-researcher": 65, "I-researcher": 66, "B-scientist": 67, "I-scientist": 68, "B-song": 69, "I-song": 70, "B-task": 71, "I-task": 72, "B-theory": 73, "I-theory": 74, "B-university": 75, "I-university": 76, "B-writer": 77, "I-writer": 78}
```
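As a quick illustration, the integer `ner_tags` can be decoded back into label strings by inverting the mapping above. The sketch below uses only a small excerpt of the full dictionary and the conll2003 example shown earlier:

```python
# Decode integer ner_tags into BIO label strings using the label mapping above.
# Only an excerpt of the full 79-entry dictionary is shown here.
label2id = {"O": 0, "B-misc": 41, "I-misc": 42, "B-organisation": 49, "I-organisation": 50}
id2label = {v: k for k, v in label2id.items()}

# The conll2003 'train' example from above
tokens = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."]
ner_tags = [49, 0, 41, 0, 0, 0, 41, 0, 0]

decoded = [id2label[t] for t in ner_tags]
print(list(zip(tokens, decoded)))
```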
### Data Splits
| | Train | Dev | Test |
|--------------|--------|-------|-------|
| conll2003 | 14,987 | 3,466 | 3,684 |
| politics | 200 | 541 | 651 |
| science | 200 | 450 | 543 |
| music | 100 | 380 | 456 |
| literature | 100 | 400 | 416 |
| ai | 100 | 350 | 431 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{liu2020crossner,
title={CrossNER: Evaluating Cross-Domain Named Entity Recognition},
author={Zihan Liu and Yan Xu and Tiezheng Yu and Wenliang Dai and Ziwei Ji and Samuel Cahyawijaya and Andrea Madotto and Pascale Fung},
year={2020},
eprint={2012.04373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
d0rj/curation-corpus | 2023-06-13T13:25:32.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"news",
"summarization",
"region:us"
] | d0rj | null | null | null | 0 | 859 | ---
dataset_info:
features:
- name: title
dtype: string
- name: summary
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: article_content
dtype: string
splits:
- name: train
num_bytes: 127948910
num_examples: 30455
download_size: 76620775
dataset_size: 127948910
license: cc-by-4.0
task_categories:
- summarization
multilinguality:
- monolingual
language:
- en
source_datasets:
- original
tags:
- news
- summarization
pretty_name: Curation Corpus for Abstractive Text Summarisation
paperswithcode_id: curation-corpus
size_categories:
- 10K<n<100K
---
# curation-corpus
## Dataset Description
- **Homepage:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
- **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
## Source
Data from [this official repo](https://github.com/CurationCorp/curation-corpus) with downloaded news articles content.
## Citation
```
@misc{curationcorpusbase:2020,
title={Curation Corpus Base},
author={Curation},
year={2020}
}
``` |
ambig_qa | 2022-11-03T16:31:34.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|natural_questions",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2004.10645",
"region:us"
] | null | AmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with
14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.
We provide two distributions of our new dataset AmbigNQ: a full version with all annotation metadata and a light version with only inputs and outputs. | @inproceedings{ min2020ambigqa,
title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },
author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },
booktitle={ EMNLP },
year={2020}
} | null | 2 | 854 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|natural_questions
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: ambigqa
pretty_name: 'AmbigQA: Answering Ambiguous Open-domain Questions'
dataset_info:
- config_name: light
features:
- name: id
dtype: string
- name: question
dtype: string
- name: annotations
sequence:
- name: type
dtype: string
- name: answer
sequence: string
- name: qaPairs
sequence:
- name: question
dtype: string
- name: answer
sequence: string
splits:
- name: train
num_bytes: 2739732
num_examples: 10036
- name: validation
num_bytes: 805808
num_examples: 2002
download_size: 19700900
dataset_size: 3545540
- config_name: full
features:
- name: id
dtype: string
- name: question
dtype: string
- name: annotations
sequence:
- name: type
dtype: string
- name: answer
sequence: string
- name: qaPairs
sequence:
- name: question
dtype: string
- name: answer
sequence: string
- name: viewed_doc_titles
sequence: string
- name: used_queries
sequence:
- name: query
dtype: string
- name: results
sequence:
- name: title
dtype: string
- name: snippet
dtype: string
- name: nq_answer
sequence: string
- name: nq_doc_title
dtype: string
splits:
- name: train
num_bytes: 43538733
num_examples: 10036
- name: validation
num_bytes: 15383368
num_examples: 2002
download_size: 19700900
dataset_size: 58922101
---
# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- [**Homepage:**](https://nlp.cs.washington.edu/ambigqa/)
- [**Repository:**](https://github.com/shmsw25/AmbigQA)
- [**Paper:**](https://arxiv.org/pdf/2004.10645.pdf)
### Dataset Summary
AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous, with diverse and sometimes subtle types of ambiguity, many of which are only apparent after examining evidence provided by a very large text corpus. In total, AmbigNQ contains
14,042 annotations on NQ-open questions covering these diverse types of ambiguity.
We provide two distributions of our new dataset AmbigNQ: a `full` version with all annotation metadata and a `light` version with only inputs and outputs.
### Supported Tasks and Leaderboards
`question-answering`
### Languages
English
## Dataset Structure
### Data Instances
An example from the data set looks as follows:
```
{'annotations': {'answer': [[]],
'qaPairs': [{'answer': [['April 19, 1987'], ['December 17, 1989']],
'question': ['When did the Simpsons first air on television as an animated short on the Tracey Ullman Show?',
'When did the Simpsons first air as a half-hour prime time show?']}],
'type': ['multipleQAs']},
'id': '-4469503464110108318',
'nq_answer': ['December 17 , 1989'],
'nq_doc_title': 'The Simpsons',
'question': 'When did the simpsons first air on television?',
'used_queries': {'query': ['When did the simpsons first air on television?'],
'results': [{'snippet': ['The <b>Simpsons</b> is an American animated <b>television</b> sitcom starring the animated \nSimpson family, ... Since its <b>debut</b> on December 17, 1989, the show <b>has</b> \nbroadcast 673 episodes and its 30th season started ... The <b>Simpsons first</b> season \n<b>was</b> the Fox network's <b>first TV</b> series to rank among a season's top 30 highest-\nrated shows.',
'The <b>Simpsons</b> is an American animated sitcom created by Matt Groening for the \nFox ... Since its <b>debut</b> on December 17, 1989, 674 episodes of The <b>Simpsons</b> \nhave been broadcast. ... When producer James L. Brooks <b>was</b> working on the \n<b>television</b> variety show The Tracey Ullman Show, he decided to include small \nanimated ...',
'... in shorts from The Tracey Ullman Show as their <b>television debut</b> in 1987. The \n<b>Simpsons</b> shorts are a series of animated shorts that <b>aired</b> as a recurring \nsegment on Fox variety <b>television</b> series The Tracey ... The final short to <b>air was</b> "\n<b>TV Simpsons</b>", originally airing on May 14, 1989. The <b>Simpsons</b> later debuted on\n ...',
'The <b>first</b> season of the American animated <b>television</b> series The <b>Simpsons</b> \noriginally <b>aired</b> on the Fox network between December 17, 1989, and May 13, \n1990, beginning with the Christmas special "<b>Simpsons</b> Roasting on an Open Fire\n". The executive producers for the <b>first</b> production season <b>were</b> Matt Groening, ...',
'The <b>Simpsons</b> is an American animated <b>television</b> sitcom created by Matt \nGroening for the Fox ... Since its <b>debut</b> on December 17, 1989, The <b>Simpsons</b> \n<b>has</b> broadcast 674 episodes. The show holds several American <b>television</b> \nlongevity ...',
'The opening sequence of the American animated <b>television</b> series The <b>Simpsons</b> \nis among the most popular opening sequences in <b>television</b> and is accompanied \nby one of <b>television's</b> most recognizable theme songs. The <b>first</b> episode to use \nthis intro <b>was</b> the series' second episode "Bart the ... <b>was</b> the <b>first</b> episode of The \n<b>Simpsons</b> to <b>air</b> in 720p high-definition <b>television</b>, ...',
'"<b>Simpsons</b> Roasting on an Open Fire", titled onscreen as "The <b>Simpsons</b> \nChristmas Special", is the premiere episode of the American animated <b>TV</b> series \nThe <b>Simpsons</b>, ... The show <b>was</b> originally intended to <b>debut</b> earlier in 1989 with "\nSome Enchanted Evening", but due to animation problems with that episode, the \nshow ...',
'"Stark Raving Dad" is the <b>first</b> episode of the third season of the American \nanimated <b>television</b> series The <b>Simpsons</b>. It <b>first aired</b> on the Fox network in the \nUnited States on September 19, 1991. ... The <b>Simpsons was</b> the second highest \nrated show on Fox the week it <b>aired</b>, behind Married... with Children. "Stark \nRaving Dad," ...',
'The <b>Simpsons</b>' twentieth season <b>aired</b> on Fox from September 28, 2008 to May \n17, 2009. With this season, the show tied Gunsmoke as the longest-running \nAmerican primetime <b>television</b> series in terms of total number ... It <b>was</b> the <b>first</b>-\never episode of the show to <b>air</b> in Europe before being seen in the United States.',
'The animated <b>TV</b> show The <b>Simpsons</b> is an American English language \nanimated sitcom which ... The <b>Simpsons was</b> dubbed for the <b>first</b> time in Punjabi \nand <b>aired</b> on Geo <b>TV</b> in Pakistan. The name of the localised Punjabi version is \nTedi Sim ...'],
'title': ['History of The Simpsons',
'The Simpsons',
'The Simpsons shorts',
'The Simpsons (season 1)',
'List of The Simpsons episodes',
'The Simpsons opening sequence',
'Simpsons Roasting on an Open Fire',
'Stark Raving Dad',
'The Simpsons (season 20)',
'Non-English versions of The Simpsons']}]},
'viewed_doc_titles': ['The Simpsons']}
```
### Data Fields
Full
```
{'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'annotations': Sequence(feature={'type': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'qaPairs': Sequence(feature={'question': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, length=-1, id=None)}, length=-1, id=None),
'viewed_doc_titles': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'used_queries': Sequence(feature={'query': Value(dtype='string', id=None), 'results': Sequence(feature={'title': Value(dtype='string', id=None), 'snippet': Value(dtype='string', id=None)}, length=-1, id=None)}, length=-1, id=None),
'nq_answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'nq_doc_title': Value(dtype='string', id=None)}
```
In the original data format, `annotations` has different keys depending on the `type` field (`singleAnswer` or `multipleQAs`). This implementation instead uses an empty list `[]` for the unavailable keys.
Please refer to [Dataset Contents](https://github.com/shmsw25/AmbigQA#dataset-contents) for more details.
```
for example in train_light_dataset:
    for i, t in enumerate(example['annotations']['type']):
        if t == 'singleAnswer':
            # use example['annotations']['answer'][i]
            # example['annotations']['qaPairs'][i] is []
            print(example['annotations']['answer'][i])
        else:
            # use example['annotations']['qaPairs'][i]
            # example['annotations']['answer'][i] is []
            print(example['annotations']['qaPairs'][i])
The light version only has the `id`, `question`, and `annotations` fields.
### Data Splits
- train: 10036
- validation: 2002
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
- Wikipedia
- NQ-open:
```
@article{ kwiatkowski2019natural,
title={ Natural questions: a benchmark for question answering research},
author={ Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and others },
journal={ Transactions of the Association for Computational Linguistics },
year={ 2019 }
}
```
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
### Citation Information
```
@inproceedings{ min2020ambigqa,
title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },
author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },
booktitle={ EMNLP },
year={2020}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
ybelkada/football-dataset | 2023-01-17T11:47:41.000Z | [
"region:us"
] | ybelkada | null | null | null | 0 | 852 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2073622.0
num_examples: 6
download_size: 2074835
dataset_size: 2073622.0
---
# Dataset Card for "football-dataset"
Dummy dataset of 6 football players with a caption that can be used to fine-tune any Image Captioning model. |
THUDM/ImageRewardDB | 2023-06-21T06:36:29.000Z | [
"task_categories:text-to-image",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"arxiv:2304.05977",
"region:us"
] | THUDM | ImageRewardDB is a comprehensive text-to-image comparison dataset, focusing on text-to-image human preference. It consists of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB. To build the ImageRewadDB, we design a pipeline tailored for it, establishing criteria for quantitative assessment and annotator training, optimizing labeling experience, and ensuring quality validation. \ | @misc{xu2023imagereward,
title={ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation},
author={Jiazheng Xu and Xiao Liu and Yuchen Wu and Yuxuan Tong and Qinkai Li and Ming Ding and Jie Tang and Yuxiao Dong},
year={2023},
eprint={2304.05977},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | null | 16 | 850 | ---
license: apache-2.0
task_categories:
- text-to-image
language:
- en
pretty_name: ImageReward Dataset
size_categories:
- 100K<n<1M
---
# ImageRewardDB
## Dataset Description
- **Homepage: https://huggingface.co/datasets/wuyuchen/ImageRewardDB**
- **Repository: https://github.com/THUDM/ImageReward**
- **Paper: https://arxiv.org/abs/2304.05977**
### Dataset Summary
ImageRewardDB is a comprehensive text-to-image comparison dataset, focusing on text-to-image human preference.
It consists of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB.
To build the ImageRewadDB, we design a pipeline tailored for it, establishing criteria for quantitative assessment and
annotator training, optimizing labeling experience, and ensuring quality validation. And ImageRewardDB is now publicly available at
[🤗 Hugging Face Dataset](https://huggingface.co/datasets/wuyuchen/ImageRewardDB).
Notice: all images in ImageRewardDB are collected from DiffusionDB; in addition, we grouped together the images corresponding to the same prompt.
### Languages
The text in the dataset is all in English.
### Four Subsets
Considering that the ImageRewardDB contains a large number of images, we provide four subsets in different scales to support different needs.
For all subsets, the validation and test splits remain the same. The validation split (1.10GB) contains 412 prompts and 2.6K images (7.32K pairs), and
the test split (1.16GB) contains 466 prompts and 2.7K images (7.23K pairs). The information on the train split at different scales is as follows:
|Subset|Num of Pairs|Num of Images|Num of Prompts|Size|
|:--|--:|--:|--:|--:|
|ImageRewardDB 1K|17.6K|6.2K|1K|2.7GB|
|ImageRewardDB 2K|35.5K|12.5K|2K|5.5GB|
|ImageRewardDB 4K|71.0K|25.1K|4K|10.8GB|
|ImageRewardDB 8K|141.1K|49.9K|8K|20.9GB|
## Dataset Structure
All the data in this repository is stored in a well-organized way. The 62.6K images in ImageRewardDB are split into several folders,
stored in corresponding directories under "./images" according to their split. Each folder contains around 500 prompts, their corresponding
images, and a JSON file. The JSON file links each image with its corresponding prompt and annotation.
The file structure is as follows:
```
# ImageRewardDB
./
├── images
│ ├── train
│ │ ├── train_1
│ │ │ ├── 0a1ed3a5-04f6-4a1b-aee6-d584e7c8ed9c.webp
│ │ │ ├── 0a58cfa8-ff61-4d31-9757-27322aec3aaf.webp
│ │ │ ├── [...]
│ │ │ └── train_1.json
│ │ ├── train_2
│ │ ├── train_3
│ │ ├── [...]
│ │ └── train_32
│ ├── validation
│ │ └── [...]
│ └── test
│ └── [...]
├── metadata-train.parquet
├── metadata-validation.parquet
└── metadata-test.parquet
```
The sub-folders have the name of {split_name}_{part_id}, and the JSON file has the same name as the sub-folder.
Each image is a lossless WebP file and has a unique name generated by [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
### Data Instances
For instance, below is the metadata of the image `0280642d-f69f-41d1-8598-5a44e296aa8b.webp` as recorded in train_1.json.
```json
{
"image_path": "images/train/train_1/0280642d-f69f-41d1-8598-5a44e296aa8b.webp",
"prompt_id": "000864-0061",
"prompt": "painting of a holy woman, decorated, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, 8 k ",
"classification": "People",
"image_amount_in_total": 9,
"rank": 5,
"overall_rating": 4,
"image_text_alignment_rating": 3,
"fidelity_rating": 4
}
```
### Data Fields
* image: The image object
* prompt_id: The id of the corresponding prompt
* prompt: The text of the corresponding prompt
* classification: The classification of the corresponding prompt
* image_amount_in_total: Total amount of images related to the prompt
* rank: The relative rank of the image in all related images
* overall_rating: The overall score of this image
* image_text_alignment_rating: The score of how well the generated image matches the given text
* fidelity_rating: The score of whether the output image is true to the shape and characteristics that the object should have
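Because ImageRewardDB is a comparison dataset, a common use is to expand each prompt's ranked images into pairwise preferences. The sketch below operates on hypothetical records that follow the schema above; the file names are made up for illustration:

```python
# Sketch: turning ImageRewardDB-style records into preference pairs.
# The records below are hypothetical examples following the schema above.
from collections import defaultdict
from itertools import combinations

records = [
    {"image_path": "images/train/train_1/aaa.webp", "prompt_id": "000864-0061", "rank": 1},
    {"image_path": "images/train/train_1/bbb.webp", "prompt_id": "000864-0061", "rank": 5},
]

# Group images by prompt, then emit (preferred, rejected) pairs.
by_prompt = defaultdict(list)
for r in records:
    by_prompt[r["prompt_id"]].append(r)

pairs = []
for group in by_prompt.values():
    # a lower rank value means the image is preferred
    for a, b in combinations(sorted(group, key=lambda r: r["rank"]), 2):
        pairs.append((a["image_path"], b["image_path"]))

print(pairs)
```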
### Data Splits
As we mentioned above, all scales of the subsets we provided have three splits of "train", "validation", and "test".
And all the subsets share the same validation and test splits.
### Dataset Metadata
We also include three metadata tables `metadata-train.parquet`, `metadata-validation.parquet`, and `metadata-test.parquet` to
help you access and comprehend ImageRewardDB without downloading the Zip files.
All the tables share the same schema, and each row refers to one image. The schema is shown below;
the JSON files mentioned above follow the same schema:
|Column|Type|Description|
|:---|:---|:---|
|`image_path`|`string`|The relative path of the image in the repository.|
|`prompt_id`|`string`|The id of the corresponding prompt.|
|`prompt`|`string`|The text of the corresponding prompt.|
|`classification`|`string`| The classification of the corresponding prompt.|
|`image_amount_in_total`|`int`| Total amount of images related to the prompt.|
|`rank`|`int`| The relative rank of the image in all related images.|
|`overall_rating`|`int`| The overall score of this image.
|`image_text_alignment_rating`|`int`|The score of how well the generated image matches the given text.|
|`fidelity_rating`|`int`|The score of whether the output image is true to the shape and characteristics that the object should have.|
Below is an example row from metadata-train.parquet.
|image_path|prompt_id|prompt|classification|image_amount_in_total|rank|overall_rating|image_text_alignment_rating|fidelity_rating|
|:---|:---|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---|:---|:---|:---|:---|:---|
|images/train/train_1/1b4b2d61-89c2-4091-a1c0-f547ad5065cb.webp|001324-0093|a magical forest that separates the good world from the dark world, ...|Outdoor Scenes|8|3|6|6|6|
## Loading ImageRewardDB
You can use the Hugging Face [Datasets](https://huggingface.co/docs/datasets/quickstart) library to easily load the ImageRewardDB.
As we mentioned before, we provide four subsets at the scales of 1k, 2k, 4k, and 8k. You can load them as follows:
```python
from datasets import load_dataset
# Load the 1K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "1k")
# Load the 2K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "2k")
# Load the 4K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "4k")
# Load the 8K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "8k")
```
## Additional Information
### Licensing Information
The ImageRewardDB dataset is available under the [Apache license 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```
@misc{xu2023imagereward,
title={ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation},
author={Jiazheng Xu and Xiao Liu and Yuchen Wu and Yuxuan Tong and Qinkai Li and Ming Ding and Jie Tang and Yuxiao Dong},
year={2023},
eprint={2304.05977},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
mlabonne/guanaco-llama2 | 2023-07-26T14:49:17.000Z | [
"region:us"
] | mlabonne | null | null | null | 7 | 849 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15409089
num_examples: 9846
- name: test
num_bytes: 815811
num_examples: 518
download_size: 9461517
dataset_size: 16224900
---
# Guanaco: Lazy Llama 2 Formatting
This is the excellent [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 model in a Google Colab.
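The reformatting itself is mechanical. A sketch of the kind of transformation applied, assuming the Guanaco source uses `### Human:` / `### Assistant:` turn markers and the single-turn Llama 2 template described in the article linked above:

```python
def guanaco_to_llama2(text: str) -> str:
    """Convert one single-turn '### Human: ... ### Assistant: ...' sample
    into the Llama 2 chat prompt format."""
    human_tag, assistant_tag = "### Human: ", "### Assistant: "
    body = text.split(human_tag, 1)[1]
    instruction, response = body.split(assistant_tag, 1)
    return f"<s>[INST] {instruction.strip()} [/INST] {response.strip()} </s>"

sample = "### Human: What is 2 + 2?### Assistant: 2 + 2 equals 4."
print(guanaco_to_llama2(sample))
# → <s>[INST] What is 2 + 2? [/INST] 2 + 2 equals 4. </s>
```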
|
theblackcat102/evol-codealpaca-v1 | 2023-09-07T11:42:00.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"code",
"region:us"
] | theblackcat102 | null | null | null | 65 | 848 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
tags:
- code
size_categories:
- 100K<n<1M
---
## Evolved codealpaca
Updates:
* 2023/08/26 - Filtered results now contain only pure-English instructions, and any response mentioning having been trained by OpenAI has been removed
Median sequence length: 471
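A statistic like the median sequence length above can be recomputed from the released rows. A sketch, assuming the `instruction`/`output` fields and whitespace tokenization (the exact tokenizer behind the reported number is not specified):

```python
from statistics import median

def sequence_length(row: dict) -> int:
    # Approximate length as whitespace-separated tokens of instruction + output.
    return len(row["instruction"].split()) + len(row["output"].split())

# Illustrative rows, not taken from the dataset.
rows = [
    {"instruction": "Write a function that adds two numbers.",
     "output": "def add(a, b):\n    return a + b"},
    {"instruction": "Explain big-O notation.",
     "output": "Big-O notation bounds the growth rate of a function."},
    {"instruction": "Reverse a string in Python.",
     "output": "s[::-1] reverses a string."},
]
print(median(sequence_length(r) for r in rows))
```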
We employed a methodology similar to that of [WizardCoder](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), with the exception that ours is open-source. We used the gpt-4-0314 and gpt-4-0613 models to augment each instruction and generate its answer, with the bulk of generation handled by gpt-4-0314.
The aim of this dataset is twofold: firstly, to facilitate the recreation of other wizardcoder models using newer pretrained models, such as LLaMA-2; and secondly, to serve as a testing ground for the [evol-dataset](https://github.com/theblackcat102/evol-dataset) package, as we strive to develop improved future augmentation strategies.
We used a total of [10 strategies](https://github.com/theblackcat102/evol-dataset/tree/main/evolinstruct/instructions) to augment the [HuggingFaceH4/CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K) dataset and create our own.
It's important to note that we introduced a new "language" augmentation strategy in this project, which enables the conversion of existing instructions into Chinese.
A Chinese code evol version is now available here : [theblackcat102/evol-code-zh](https://huggingface.co/datasets/theblackcat102/evol-code-zh)
## Comparison to existing dataset
Compared to [nickrosh/Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1), evol-codealpaca-v1 contains longer instructions and output conversations.

### Citation
If you use this dataset to fine-tune any LLMs, please cite WizardCoder:
```
@misc{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
year={2023},
}
``` |
squad_adversarial | 2022-11-18T21:47:43.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|squad",
"language:en",
"license:mit",
"region:us"
] | null | Here are two different adversaries, each of which uses a different procedure to pick the sentence it adds to the paragraph:
AddSent: Generates up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. Picks the one that most confuses the model.
AddOneSent: Similar to AddSent, but picks one of the candidate sentences at random. This adversary does not query the model in any way. | @inproceedings{jia-liang-2017-adversarial,
title = "Adversarial Examples for Evaluating Reading Comprehension Systems",
author = "Jia, Robin and
Liang, Percy",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D17-1215",
doi = "10.18653/v1/D17-1215",
pages = "2021--2031",
abstract = "Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In this adversarial setting, the accuracy of sixteen published models drops from an average of 75% F1 score to 36%; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to 7%. We hope our insights will motivate the development of new models that understand language more precisely.",
} | null | 5 | 847 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: null
pretty_name: '''Adversarial Examples for SQuAD'''
dataset_info:
- config_name: squad_adversarial
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: AddSent
num_bytes: 3803551
num_examples: 3560
- name: AddOneSent
num_bytes: 1864767
num_examples: 1787
download_size: 5994513
dataset_size: 5668318
- config_name: AddSent
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 3803551
num_examples: 3560
download_size: 5994513
dataset_size: 3803551
- config_name: AddOneSent
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1864767
num_examples: 1787
download_size: 5994513
dataset_size: 1864767
---
# Dataset Card for 'Adversarial Examples for SQuAD'
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- [**Homepage**](https://worksheets.codalab.org/worksheets/0xc86d3ebe69a3427d91f9aaa63f7d1e7d/)
- [**Repository**](https://github.com/robinjia/adversarial-squad/)
- [**Paper**](https://www.aclweb.org/anthology/D17-1215/)
### Dataset Summary
Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans.
### Supported Tasks and Leaderboards
`question-answering`, `adversarial attack`
### Languages
English
## Dataset Structure
Follows the standard SQuAD format.
### Data Instances
An example from the data set looks as follows:
```py
{'answers': {'answer_start': [334, 334, 334],
'text': ['February 7, 2016', 'February 7', 'February 7, 2016']},
'context': 'Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi\'s Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50. The Champ Bowl was played on August 18th,1991.',
'id': '56bea9923aeaaa14008c91bb-high-conf-turk2',
'question': 'What day was the Super Bowl played on?',
'title': 'Super_Bowl_50'}
```
`id` field is formed like: [original_squad_id]-[annotator_id]
### Data Fields
```py
{'id': Value(dtype='string', id=None), # id of example (same as SQuAD) OR SQuAD-id-[annotator_id] for adversarially modified examples
'title': Value(dtype='string', id=None), # title of document the context is from (same as SQuAD)
'context': Value(dtype='string', id=None), # the context (same as SQuAD) +adversarially added sentence
'question': Value(dtype='string', id=None), # the question (same as SQuAD)
'answers': Sequence(feature={'text': Value(dtype='string', id=None), # the answer (same as SQuAD)
'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) # the answer_start index (same as SQuAD)
}
```
### Data Splits
- AddSent: Generates up to five candidate adversarial sentences that don't answer the question but share many words with it, then picks the one that most confuses the model.
- AddOneSent: Similar to AddSent, but picks one of the candidate sentences at random. This adversary does not query the model in any way.
Number of Q&A pairs
- AddSent : 3560
- AddOneSent: 1787
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
SQuAD dev set (+with adversarial sentences added)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[MIT License](https://github.com/robinjia/adversarial-squad/blob/master/LICENSE)
### Citation Information
```
@inproceedings{jia-liang-2017-adversarial,
title = "Adversarial Examples for Evaluating Reading Comprehension Systems",
author = "Jia, Robin and
Liang, Percy",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D17-1215",
doi = "10.18653/v1/D17-1215",
pages = "2021--2031",
abstract = "Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In this adversarial setting, the accuracy of sixteen published models drops from an average of 75% F1 score to 36%; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to 7%. We hope our insights will motivate the development of new models that understand language more precisely.",
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
BeIR/fiqa | 2022-10-23T06:00:28.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 3 | 844 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
For example, the FiQA-2018 data hosted in this repository can be loaded with the Hugging Face Datasets library (the configuration names below follow the layout of the BeIR repositories; relevance judgments are published separately in the companion `BeIR/fiqa-qrels` repository):

```python
from datasets import load_dataset

# Each BEIR dataset repository exposes separate "corpus" and "queries" configurations.
corpus = load_dataset("BeIR/fiqa", "corpus")
queries = load_dataset("BeIR/fiqa", "queries")
```
### Supported Tasks and Leaderboards
The benchmark supports zero-shot retrieval evaluation with standard IR metrics such as nDCG@10 and Recall@100.
The current best performing models can be found on the [leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
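Reading the qrels file into the nested-dictionary shape used throughout BEIR takes only a few lines. A sketch assuming the tab-separated layout with a header row described above (parsed here from an in-memory string; in practice you would open the `.tsv` file):

```python
import csv
import io

def load_qrels(tsv_text: str) -> dict[str, dict[str, int]]:
    """Parse 'query-id<TAB>corpus-id<TAB>score' rows (first row is a header)."""
    qrels: dict[str, dict[str, int]] = {}
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

example = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"
print(load_qrels(example))
# → {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```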
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
qanastek/MASSIVE | 2022-12-23T21:28:08.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:lv",
"language:ml",
"language:mn",
"language:ms",
"language:my",
"language:nb",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:sq",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"arxiv:2204.08582",
"region:us"
] | qanastek | MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations
for the Natural Language Understanding tasks of intent prediction and slot annotation.
Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing
the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions. | @misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
} | null | 16 | 830 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- af
- am
- ar
- az
- bn
- cy
- da
- de
- el
- en
- es
- fa
- fi
- fr
- he
- hi
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- km
- kn
- ko
- lv
- ml
- mn
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sl
- sq
- sv
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- zh
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- named-entity-recognition
pretty_name: MASSIVE
language_bcp47:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
---
# MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE | 51 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train models for natural language understanding (NLU) tasks:
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The corpus consists of parallel sentences in 51 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
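Each locale identifier combines an ISO 639-1 language code and an ISO 3166 country code. A minimal helper (not part of the dataset tooling, just an illustration of the naming scheme) can split them:

```python
# Split a MASSIVE locale identifier into its ISO 639-1 language code
# and ISO 3166 country code, e.g. "fr-FR" -> ("fr", "FR").
def split_locale(locale: str) -> tuple:
    language, country = locale.split("-")
    return language, country

print(split_locale("zh-TW"))  # ('zh', 'TW')
```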
## Load the dataset with Hugging Face
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/MASSIVE", "en-US", split='train')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "1",
"locale": "fr-FR",
"partition": "train",
"scenario": 16,
"intent": 48,
"utt": "réveille-moi à neuf heures du matin le vendredi",
"annot_utt": "réveille-moi à [time : neuf heures du matin] le [date : vendredi]",
"tokens": [
"réveille-moi",
"à",
"neuf",
"heures",
"du",
"matin",
"le",
"vendredi"
],
"ner_tags": [0, 0, 71, 6, 6, 6, 0, 14],
"worker_id": "22",
"slot_method": {
"slot": ["time", "date"],
"method": ["translation", "translation"]
},
"judgments": {
"worker_id": ["11", "22", "0"],
"intent_score": [2, 1, 1],
"slots_score": [1, 1, 1],
"grammar_score": [3, 4, 4],
"spelling_score": [2, 2, 2],
"language_identification": ["target", "target", "target"]
}
}
```
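The `annot_utt` field above encodes slots inline as `[{label} : {entity}]`. A small parser sketch — not part of the official tooling, and assuming the documented spacing around the colon — can recover the plain utterance and the slot spans:

```python
import re

# Parse the `annot_utt` slot annotation format `[label : entity]`
# into (label, entity) pairs, and strip the markup to recover the
# plain utterance text.
SLOT_PATTERN = re.compile(r"\[(.+?) : (.+?)\]")

def parse_annotated(annot_utt: str):
    slots = SLOT_PATTERN.findall(annot_utt)
    plain = SLOT_PATTERN.sub(lambda m: m.group(2), annot_utt)
    return plain, slots

utt, slots = parse_annotated(
    "réveille-moi à [time : neuf heures du matin] le [date : vendredi]"
)
print(utt)    # réveille-moi à neuf heures du matin le vendredi
print(slots)  # [('time', 'neuf heures du matin'), ('date', 'vendredi')]
```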
### Data Fields (taken from the Alexa GitHub)
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection; the corresponding SLURP en-US utterance served as the basis for this localization.
`locale`: is the language and country code according to ISO 639-1 and ISO 3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance.
`intent`: is the specific intent of an utterance within a domain, formatted as `{scenario}_{intent}`.
`utt`: the raw utterance text without annotations.
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`.
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
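As an illustration only (this is not an official quality filter), the collected judgments can be aggregated to keep utterances that every annotator identified as the target language and whose average grammar score reaches at least "good enough" (3):

```python
# Sketch of a per-utterance quality filter over the `judgments` field:
# keep an utterance when all annotators marked the text as the target
# language and the mean grammar score is >= min_grammar.
def passes_quality_bar(judgments: dict, min_grammar: float = 3.0) -> bool:
    all_target = all(
        lang == "target" for lang in judgments["language_identification"]
    )
    scores = judgments["grammar_score"]
    mean_grammar = sum(scores) / len(scores)
    return all_target and mean_grammar >= min_grammar

# The judgments from the fr-FR data instance shown earlier.
example = {
    "worker_id": ["11", "22", "0"],
    "intent_score": [2, 1, 1],
    "slots_score": [1, 1, 1],
    "grammar_score": [3, 4, 4],
    "spelling_score": [2, 2, 2],
    "language_identification": ["target", "target", "target"],
}
print(passes_quality_bar(example))  # True
```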
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
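Every locale shares the same split sizes, so the released totals can be sanity-checked with simple arithmetic. Note that the per-language figure reported in the paper (19,521) is larger than these released splits, suggesting additional held-out data:

```python
# Sanity-check the per-locale and overall example counts from the
# data splits table (51 locales, identical split sizes each).
train, dev, test = 11514, 2033, 2974
per_locale = train + dev + test
print(per_locale)       # 16521 utterances per locale
print(per_locale * 51)  # 842571 labeled examples across all locales
```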
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Amazon Alexa.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following paper when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
|
fusing/instructpix2pix-1000-samples | 2023-02-23T07:08:49.000Z | [
"region:us"
] | fusing | null | null | null | 4 | 830 | ---
dataset_info:
features:
- name: input_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 416880759.0
num_examples: 1000
download_size: 416899514
dataset_size: 416880759.0
---
# Dataset Card for "instructpix2pix-1000-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The dataset was created using the code from [this repository](https://github.com/sayakpaul/instruct-pix2pix-dataset). |
jojo0217/korean_rlhf_dataset | 2023-09-25T08:36:04.000Z | [
"task_categories:text-generation",
"language:ko",
"license:apache-2.0",
"region:us"
] | jojo0217 | null | null | null | 7 | 829 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ko
---
This dataset was built for SFT training of a Korean LLM as part of a Sungkyunkwan University industry-academia cooperation project.
2023-09-25
Removed examples mentioning Open Assistant from the OpenAssistant data
-> because some responses identified themselves as Open Assistant.
Also removed artifacts in the Stanford-translated data where translation errors inserted text such as "no input" into the input field,
and removed examples where GPT translation errors produced tokens such as \<unk\>.
***
For more natural text, the Stanford Alpaca data and oig_chip2 were re-preprocessed using ChatGPT-3.5 Turbo 16k.
https://github.com/JoJo0217/rlhf_korean_dataset/tree/main
A detailed description is available at the repository above,
and the dataset is composed as follows.
***
Data composition
|Dataset|Count|URL|
|:---|---:|---:|
|koalpaca v1.1|21155|https://github.com/Beomi/KoAlpaca|
|stanford alpaca|51374|https://huggingface.co/datasets/tatsu-lab/alpaca|
|dolly|15009|https://huggingface.co/datasets/nlpai-lab/databricks-dolly-15k-ko|
|openassistant|9651|https://huggingface.co/datasets/nlpai-lab/openassistant-guanaco-ko|
|oig_chip2|10000|https://huggingface.co/datasets/0-hero/OIG-small-chip2|
|Total|107189||
|
lbox/lbox_open | 2022-11-09T06:41:26.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | lbox | null | null | null | 2 | 827 | ---
license: cc-by-nc-4.0
---
# Dataset Card for `lbox_open`
## Dataset Description
- **Homepage:** `https://lbox.kr`
- **Repository:** `https://github.com/lbox-kr/lbox_open`
- **Point of Contact:** [Wonseok Hwang](mailto:wonseok.hwang@lbox.kr)
### Dataset Summary
A Legal AI Benchmark Dataset from Korean Legal Cases.
### Languages
Korean
### How to use
```python
from datasets import load_dataset
# casename classification task
data_cn = load_dataset("lbox/lbox_open", "casename_classification")
data_cn_plus = load_dataset("lbox/lbox_open", "casename_classification_plus")
# statutes classification task
data_st = load_dataset("lbox/lbox_open", "statute_classification")
data_st_plus = load_dataset("lbox/lbox_open", "statute_classification_plus")
# Legal judgement prediction tasks
data_ljp_criminal = load_dataset("lbox/lbox_open", "ljp_criminal")
data_ljp_civil = load_dataset("lbox/lbox_open", "ljp_civil")
# case summarization task
data_summ = load_dataset("lbox/lbox_open", "summarization")
data_summ_plus = load_dataset("lbox/lbox_open", "summarization_plus")
# precedent corpus
data_corpus = load_dataset("lbox/lbox_open", "precedent_corpus")
```
For more information about the dataset, please visit <https://github.com/lbox-kr/lbox_open>.
## Licensing Information
Copyright 2022-present [LBox Co. Ltd.](https://lbox.kr/)
Licensed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
aharley/rvl_cdip | 2023-05-02T09:06:16.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|iit_cdip",
"language:en",
"license:other",
"arxiv:1502.07058",
"region:us"
] | aharley | The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. | @inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
} | null | 28 | 827 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|iit_cdip
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: rvl-cdip
pretty_name: RVL-CDIP
viewer: false
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: train
num_bytes: 38816373360
num_examples: 320000
- name: test
num_bytes: 4863300853
num_examples: 40000
- name: validation
num_bytes: 4868685208
num_examples: 40000
download_size: 38779484559
dataset_size: 48548359421
---
# Dataset Card for RVL-CDIP
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:aharley@cmu.edu)
### Dataset Summary
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).
### Languages
All the classes and documents use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
'label': 15
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"0": "letter",
"1": "form",
"2": "email",
"3": "handwritten",
"4": "advertisement",
"5": "scientific report",
"6": "scientific publication",
"7": "specification",
"8": "file folder",
"9": "news article",
"10": "budget",
"11": "invoice",
"12": "presentation",
"13": "questionnaire",
"14": "resume",
"15": "memo"
}
```
</details>
### Data Splits
| |train|test|validation|
|----------|----:|----:|---------:|
|# of examples|320000|40000|40000|
The dataset was split in proportions similar to those of ImageNet.
- 320000 images were used for training,
- 40000 images for validation, and
- 40000 images for testing.
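For convenience, the class mapping listed above can be written out directly in Python (a sketch, not part of the official loading script) to decode the integer `label` field without loading the dataset:

```python
# The 16 document-type classes, in label order, copied from the
# "Class Label Mappings" section of this card.
RVL_CDIP_CLASSES = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def id2label(label: int) -> str:
    """Map an integer class label (0-15) to its document-type name."""
    return RVL_CDIP_CLASSES[label]

# The sample training instance shown above has label 15:
print(id2label(15))  # memo
```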
## Dataset Creation
### Curation Rationale
From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000
document images across 16 categories, useful for training new CNNs for document analysis.
### Source Data
#### Initial Data Collection and Normalization
The same as in the IIT-CDIP collection.
#### Who are the source language producers?
The same as in the IIT-CDIP collection.
### Annotations
#### Annotation process
The same as in the IIT-CDIP collection.
#### Who are the annotators?
The same as in the IIT-CDIP collection.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.
### Licensing Information
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
### Citation Information
```bibtex
@inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
  booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. |
allenai/scirepeval_test | 2022-10-21T20:54:57.000Z | [
"region:us"
] | allenai | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2021}
} | null | 0 | 826 | Entry not found |
TREC-AToMiC/AToMiC-Texts-v0.2.1 | 2023-05-04T18:58:43.000Z | [
"region:us"
] | TREC-AToMiC | null | null | null | 2 | 826 | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: page_url
dtype: string
- name: page_title
dtype: string
- name: section_title
dtype: string
- name: context_page_description
dtype: string
- name: context_section_description
dtype: string
- name: media
sequence: string
- name: hierachy
sequence: string
- name: category
sequence: string
- name: source_id
dtype: string
splits:
- name: train
num_bytes: 20393084595
num_examples: 10134744
download_size: 7192298025
dataset_size: 20393084595
---
# Dataset Card for "AToMiC-Texts-v0.2.updated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
leemeng/jcommonsenseqa-v1.1 | 2023-04-28T08:13:50.000Z | [
"license:cc-by-4.0",
"region:us"
] | leemeng | null | null | null | 1 | 825 | ---
license: cc-by-4.0
dataset_info:
features:
- name: q_id
dtype: int64
- name: question
dtype: string
- name: choice0
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: choice3
dtype: string
- name: choice4
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1183829
num_examples: 8939
- name: validation
num_bytes: 148293
num_examples: 1119
download_size: 887894
dataset_size: 1332122
---
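The schema above (a question with five answer choices `choice0`-`choice4` and an integer `label`) can be rendered as a multiple-choice prompt; a minimal sketch, using a hypothetical row rather than actual dataset content:

```python
def format_example(row: dict) -> str:
    """Render a JCommonsenseQA row as a numbered multiple-choice prompt."""
    choices = "\n".join(f"{i}. {row[f'choice{i}']}" for i in range(5))
    return f"{row['question']}\n{choices}"

# Hypothetical row following the schema (not taken from the dataset):
row = {
    "q_id": 0,
    "question": "Example question?",
    "choice0": "A", "choice1": "B", "choice2": "C",
    "choice3": "D", "choice4": "E",
    "label": 2,
}
prompt = format_example(row)
print(prompt)
```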
|
universal_morphologies | 2023-06-08T09:28:28.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:ady",
"language:ang",
"language:ar",
"language:arn",
"language:ast",
"language:az",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ckb",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cy",
"language:da",
"language:de",
"language:dsb",
"language:el",
"language:en",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:frm",
"language:fro",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gal",
"language:gd",
"language:gmh",
"language:gml",
"language:got",
"language:grc",
"language:gv",
"language:hai",
"language:he",
"language:hi",
"language:hu",
"language:hy",
"language:is",
"language:it",
"language:izh",
"language:ka",
"language:kbd",
"language:kjh",
"language:kk",
"language:kl",
"language:klr",
"language:kmr",
"language:kn",
"language:krl",
"language:kw",
"language:la",
"language:liv",
"language:lld",
"language:lt",
"language:lud",
"language:lv",
"language:mk",
"language:mt",
"language:mwf",
"language:nap",
"language:nb",
"language:nds",
"language:nl",
"language:nn",
"language:nv",
"language:oc",
"language:olo",
"language:osx",
"language:pl",
"language:ps",
"language:pt",
"language:qu",
"language:ro",
"language:ru",
"language:sa",
"language:sga",
"language:sh",
"language:sl",
"language:sme",
"language:sq",
"language:sv",
"language:swc",
"language:syc",
"language:te",
"language:tg",
"language:tk",
"language:tr",
"language:tt",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vep",
"language:vot",
"language:xcl",
"language:xno",
"language:yi",
"language:zu",
"license:cc-by-sa-3.0",
"morphology",
"region:us"
] | null | The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages.
The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning,
typically carried by the lemma, and by a rendering of its inflectional form in terms of a bundle of morphological features from our schema.
The specification of the schema is described in Sylak-Glassman (2016). | @article{sylak2016composition,
title={The composition and use of the universal morphological feature schema (unimorph schema)},
author={Sylak-Glassman, John},
journal={Johns Hopkins University},
year={2016}
} | null | 13 | 824 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ady
- ang
- ar
- arn
- ast
- az
- ba
- be
- bg
- bn
- bo
- br
- ca
- ckb
- crh
- cs
- csb
- cu
- cy
- da
- de
- dsb
- el
- en
- es
- et
- eu
- fa
- fi
- fo
- fr
- frm
- fro
- frr
- fur
- fy
- ga
- gal
- gd
- gmh
- gml
- got
- grc
- gv
- hai
- he
- hi
- hu
- hy
- is
- it
- izh
- ka
- kbd
- kjh
- kk
- kl
- klr
- kmr
- kn
- krl
- kw
- la
- liv
- lld
- lt
- lud
- lv
- mk
- mt
- mwf
- nap
- nb
- nds
- nl
- nn
- nv
- oc
- olo
- osx
- pl
- ps
- pt
- qu
- ro
- ru
- sa
- sga
- sh
- sl
- sme
- sq
- sv
- swc
- syc
- te
- tg
- tk
- tr
- tt
- uk
- ur
- uz
- vec
- vep
- vot
- xcl
- xno
- yi
- zu
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- token-classification
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: null
pretty_name: UniversalMorphologies
tags:
- morphology
dataset_info:
- config_name: ady
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 3428235
num_examples: 1666
download_size: 1008487
dataset_size: 3428235
- config_name: ang
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 6569844
num_examples: 1867
download_size: 1435972
dataset_size: 6569844
- config_name: ara
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 24388295
num_examples: 4134
download_size: 7155824
dataset_size: 24388295
- config_name: arn
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 124050
num_examples: 26
download_size: 20823
dataset_size: 124050
- config_name: ast
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 4913008
num_examples: 436
download_size: 1175901
dataset_size: 4913008
- config_name: aze
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1248687
num_examples: 340
download_size: 276306
dataset_size: 1248687
- config_name: bak
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1984657
num_examples: 1084
download_size: 494758
dataset_size: 1984657
- config_name: bel
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2626405
num_examples: 1027
download_size: 739537
dataset_size: 2626405
- config_name: ben
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 746181
num_examples: 136
download_size: 251991
dataset_size: 746181
- config_name: bod
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 880074
num_examples: 1335
download_size: 197523
dataset_size: 880074
- config_name: bre
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 387583
num_examples: 44
download_size: 82159
dataset_size: 387583
- config_name: bul
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 9589915
num_examples: 2468
download_size: 3074574
dataset_size: 9589915
- config_name: cat
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 12988492
num_examples: 1547
download_size: 2902458
dataset_size: 12988492
- config_name: ces
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 21056640
num_examples: 5125
download_size: 4875288
dataset_size: 21056640
- config_name: chu
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 628237
num_examples: 152
download_size: 149081
dataset_size: 628237
- config_name: ckb
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 3843267
num_examples: 274
download_size: 914302
dataset_size: 3843267
- config_name: cor
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 83434
num_examples: 9
download_size: 17408
dataset_size: 83434
- config_name: crh
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1154595
num_examples: 1230
download_size: 186325
dataset_size: 1154595
- config_name: csb
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 82172
num_examples: 37
download_size: 14259
dataset_size: 82172
- config_name: cym
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1748431
num_examples: 183
download_size: 374501
dataset_size: 1748431
- config_name: dan
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 4204551
num_examples: 3193
download_size: 845939
dataset_size: 4204551
- config_name: deu
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 28436466
num_examples: 15060
download_size: 5966618
dataset_size: 28436466
- config_name: dsb
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2985168
num_examples: 994
download_size: 536096
dataset_size: 2985168
- config_name: ell
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 34112450
num_examples: 11906
download_size: 11222248
dataset_size: 34112450
- config_name: eng
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 18455909
num_examples: 22765
download_size: 3285554
dataset_size: 18455909
- config_name: est
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 6125879
num_examples: 886
download_size: 1397385
dataset_size: 6125879
- config_name: eus
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2444247
num_examples: 26
download_size: 876480
dataset_size: 2444247
- config_name: fao
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 7117926
num_examples: 3077
download_size: 1450065
dataset_size: 7117926
- config_name: fas
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 6382709
num_examples: 273
download_size: 2104724
dataset_size: 6382709
- config_name: fin
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: '1'
num_bytes: 331855860
num_examples: 46152
- name: '2'
num_bytes: 81091817
num_examples: 11491
download_size: 109324828
dataset_size: 412947677
- config_name: fra
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 58747699
num_examples: 7535
download_size: 13404983
dataset_size: 58747699
- config_name: frm
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 6015940
num_examples: 603
download_size: 1441122
dataset_size: 6015940
- config_name: fro
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 20260793
num_examples: 1700
download_size: 4945582
dataset_size: 20260793
- config_name: frr
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 526898
num_examples: 51
download_size: 112236
dataset_size: 526898
- config_name: fry
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 222067
num_examples: 85
download_size: 38227
dataset_size: 222067
- config_name: fur
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1282374
num_examples: 168
download_size: 258793
dataset_size: 1282374
- config_name: gal
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 5844604
num_examples: 486
download_size: 1259120
dataset_size: 5844604
- config_name: gla
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 126847
num_examples: 73
download_size: 25025
dataset_size: 126847
- config_name: gle
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 17065939
num_examples: 7464
download_size: 3853188
dataset_size: 17065939
- config_name: glv
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 7523
num_examples: 1
download_size: 401
dataset_size: 7523
- config_name: gmh
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 114677
num_examples: 29
download_size: 20851
dataset_size: 114677
- config_name: gml
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 233831
num_examples: 52
download_size: 47151
dataset_size: 233831
- config_name: got
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
download_size: 2
dataset_size: 0
- config_name: grc
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 6779867
num_examples: 2431
download_size: 2057514
dataset_size: 6779867
- config_name: hai
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1166240
num_examples: 41
download_size: 329817
dataset_size: 1166240
- config_name: hbs
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 132933961
num_examples: 24419
download_size: 32194142
dataset_size: 132933961
- config_name: heb
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2211208
num_examples: 510
download_size: 498065
dataset_size: 2211208
- config_name: hin
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 10083004
num_examples: 258
download_size: 3994359
dataset_size: 10083004
- config_name: hun
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 83517327
num_examples: 14892
download_size: 19544319
dataset_size: 83517327
- config_name: hye
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 56537127
num_examples: 7033
download_size: 17810316
dataset_size: 56537127
- config_name: isl
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 12120572
num_examples: 4775
download_size: 2472980
dataset_size: 12120572
- config_name: ita
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 81905203
num_examples: 10009
download_size: 19801423
dataset_size: 81905203
- config_name: izh
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 170094
num_examples: 50
download_size: 28558
dataset_size: 170094
- config_name: kal
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 60434
num_examples: 23
download_size: 9795
dataset_size: 60434
- config_name: kan
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1052294
num_examples: 159
download_size: 318512
dataset_size: 1052294
- config_name: kat
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 12532540
num_examples: 3782
download_size: 4678979
dataset_size: 12532540
- config_name: kaz
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 62519
num_examples: 26
download_size: 14228
dataset_size: 62519
- config_name: kbd
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 511406
num_examples: 250
download_size: 133788
dataset_size: 511406
- config_name: kjh
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 193741
num_examples: 75
download_size: 44907
dataset_size: 193741
- config_name: klr
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 28909688
num_examples: 591
download_size: 7561829
dataset_size: 28909688
- config_name: kmr
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 35504487
num_examples: 15083
download_size: 8592722
dataset_size: 35504487
- config_name: krl
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 106475
num_examples: 20
download_size: 19024
dataset_size: 106475
- config_name: lat
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 81932667
num_examples: 17214
download_size: 19567252
dataset_size: 81932667
- config_name: lav
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 21219584
num_examples: 7548
download_size: 5048680
dataset_size: 21219584
- config_name: lit
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 5287268
num_examples: 1458
download_size: 1191554
dataset_size: 5287268
- config_name: liv
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 642166
num_examples: 203
download_size: 141467
dataset_size: 642166
- config_name: lld
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1240257
num_examples: 180
download_size: 278592
dataset_size: 1240257
- config_name: lud
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: mikhailovskoye
num_bytes: 11361
num_examples: 2
- name: new_written
num_bytes: 35132
num_examples: 94
- name: southern_ludian_svjatozero
num_bytes: 57276
num_examples: 71
download_size: 14697
dataset_size: 103769
- config_name: mkd
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 27800390
num_examples: 10313
download_size: 8157589
dataset_size: 27800390
- config_name: mlt
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 604577
num_examples: 112
download_size: 124584
dataset_size: 604577
- config_name: mwf
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 172890
num_examples: 29
download_size: 25077
dataset_size: 172890
- config_name: nap
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 293699
num_examples: 40
download_size: 64163
dataset_size: 293699
- config_name: nav
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2051393
num_examples: 674
download_size: 523673
dataset_size: 2051393
- config_name: nds
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
download_size: 2
dataset_size: 0
- config_name: nld
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 8813867
num_examples: 4993
download_size: 1874427
dataset_size: 8813867
- config_name: nno
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2704566
num_examples: 4689
download_size: 420695
dataset_size: 2704566
- config_name: nob
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 3359706
num_examples: 5527
download_size: 544432
dataset_size: 3359706
- config_name: oci
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1327716
num_examples: 174
download_size: 276611
dataset_size: 1327716
- config_name: olo
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: kotkozero
num_bytes: 7682
num_examples: 5
- name: new_written
num_bytes: 11158424
num_examples: 15293
- name: syamozero
num_bytes: 6379
num_examples: 2
- name: vedlozero
num_bytes: 6120
num_examples: 1
- name: vidlitsa
num_bytes: 54363
num_examples: 3
download_size: 2130154
dataset_size: 11232968
- config_name: osx
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 3500590
num_examples: 863
download_size: 759997
dataset_size: 3500590
- config_name: pol
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 30855235
num_examples: 10185
download_size: 6666266
dataset_size: 30855235
- config_name: por
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 48530106
num_examples: 4001
download_size: 10982524
dataset_size: 48530106
- config_name: pus
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1176421
num_examples: 395
download_size: 297043
dataset_size: 1176421
- config_name: que
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 27823298
num_examples: 1006
download_size: 6742890
dataset_size: 27823298
- config_name: ron
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 13187957
num_examples: 4405
download_size: 2990521
dataset_size: 13187957
- config_name: rus
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 77484460
num_examples: 28068
download_size: 25151401
dataset_size: 77484460
- config_name: san
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 5500001
num_examples: 917
download_size: 1788739
dataset_size: 5500001
- config_name: sga
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 190479
num_examples: 49
download_size: 43469
dataset_size: 190479
- config_name: slv
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 9071547
num_examples: 2535
download_size: 1911039
dataset_size: 9071547
- config_name: sme
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 9764653
num_examples: 2103
download_size: 2050015
dataset_size: 9764653
- config_name: spa
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 61472202
num_examples: 5460
download_size: 14386131
dataset_size: 61472202
- config_name: sqi
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 5422400
num_examples: 589
download_size: 1261468
dataset_size: 5422400
- config_name: swc
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1694529
num_examples: 100
download_size: 414624
dataset_size: 1694529
- config_name: swe
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 12897827
num_examples: 10553
download_size: 2709960
dataset_size: 12897827
- config_name: syc
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 553392
num_examples: 160
download_size: 130000
dataset_size: 553392
- config_name: tat
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1203356
num_examples: 1283
download_size: 194277
dataset_size: 1203356
- config_name: tel
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 285769
num_examples: 127
download_size: 95069
dataset_size: 285769
- config_name: tgk
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 25276
num_examples: 75
download_size: 2366
dataset_size: 25276
- config_name: tuk
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 127712
num_examples: 68
download_size: 20540
dataset_size: 127712
- config_name: tur
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 44723850
num_examples: 3579
download_size: 11552946
dataset_size: 44723850
- config_name: ukr
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 3299187
num_examples: 1493
download_size: 870660
dataset_size: 3299187
- config_name: urd
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2197237
num_examples: 182
download_size: 685613
dataset_size: 2197237
- config_name: uzb
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 196802
num_examples: 15
download_size: 41921
dataset_size: 196802
- config_name: vec
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 2892987
num_examples: 368
download_size: 615931
dataset_size: 2892987
- config_name: vep
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: central_eastern
num_bytes: 500981
num_examples: 65
- name: central_western
num_bytes: 2527618
num_examples: 111
- name: new_written
num_bytes: 79899484
num_examples: 9304
- name: northern
num_bytes: 175242
num_examples: 21
- name: southern
num_bytes: 206289
num_examples: 17
download_size: 20131151
dataset_size: 83309614
- config_name: vot
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 217663
num_examples: 55
download_size: 37179
dataset_size: 217663
- config_name: xcl
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 16856327
num_examples: 4300
download_size: 4950513
dataset_size: 16856327
- config_name: xno
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 48938
num_examples: 5
download_size: 9641
dataset_size: 48938
- config_name: yid
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 1409582
num_examples: 803
download_size: 429391
dataset_size: 1409582
- config_name: zul
features:
- name: lemma
dtype: string
- name: forms
sequence:
- name: word
dtype: string
- name: Aktionsart
sequence:
class_label:
names:
0: STAT
1: DYN
2: TEL
3: ATEL
4: PCT
5: DUR
6: ACH
7: ACCMP
8: SEMEL
9: ACTY
- name: Animacy
sequence:
class_label:
names:
0: ANIM
1: INAN
2: HUM
3: NHUM
- name: Argument_Marking
sequence:
class_label:
names:
0: ARGNO1S
1: ARGNO2S
2: ARGNO3S
3: ARGNO1P
4: ARGNO2P
5: ARGNO3P
6: ARGAC1S
7: ARGAC2S
8: ARGAC3S
9: ARGAC1P
10: ARGAC2P
11: ARGAC3P
12: ARGAB1S
13: ARGAB2S
14: ARGAB3S
15: ARGAB1P
16: ARGAB2P
17: ARGAB3P
18: ARGER1S
19: ARGER2S
20: ARGER3S
21: ARGER1P
22: ARGER2P
23: ARGER3P
24: ARGDA1S
25: ARGDA2S
26: ARGDA3S
27: ARGDA1P
28: ARGDA2P
29: ARGDA3P
30: ARGBE1S
31: ARGBE2S
32: ARGBE3S
33: ARGBE1P
34: ARGBE2P
35: ARGBE3P
- name: Aspect
sequence:
class_label:
names:
0: IPFV
1: PFV
2: PRF
3: PROG
4: PROSP
5: ITER
6: HAB
- name: Case
sequence:
class_label:
names:
0: NOM
1: ACC
2: ERG
3: ABS
4: NOMS
5: DAT
6: BEN
7: PRP
8: GEN
9: REL
10: PRT
11: INS
12: COM
13: VOC
14: COMPV
15: EQTV
16: PRIV
17: PROPR
18: AVR
19: FRML
20: TRANS
21: BYWAY
22: INTER
23: AT
24: POST
25: IN
26: CIRC
27: ANTE
28: APUD
29: 'ON'
30: ONHR
31: ONVR
32: SUB
33: REM
34: PROXM
35: ESS
36: ALL
37: ABL
38: APPRX
39: TERM
- name: Comparison
sequence:
class_label:
names:
0: CMPR
1: SPRL
2: AB
3: RL
4: EQT
- name: Definiteness
sequence:
class_label:
names:
0: DEF
1: INDF
2: SPEC
3: NSPEC
- name: Deixis
sequence:
class_label:
names:
0: PROX
1: MED
2: REMT
3: REF1
4: REF2
5: NOREF
6: PHOR
7: VIS
8: NVIS
9: ABV
10: EVEN
11: BEL
- name: Evidentiality
sequence:
class_label:
names:
0: FH
1: DRCT
2: SEN
3: VISU
4: NVSEN
5: AUD
6: NFH
7: QUOT
8: RPRT
9: HRSY
10: INFER
11: ASSUM
- name: Finiteness
sequence:
class_label:
names:
0: FIN
1: NFIN
- name: Gender
sequence:
class_label:
names:
0: MASC
1: FEM
2: NEUT
3: NAKH1
4: NAKH2
5: NAKH3
6: NAKH4
7: NAKH5
8: NAKH6
9: NAKH7
10: NAKH8
11: BANTU1
12: BANTU2
13: BANTU3
14: BANTU4
15: BANTU5
16: BANTU6
17: BANTU7
18: BANTU8
19: BANTU9
20: BANTU10
21: BANTU11
22: BANTU12
23: BANTU13
24: BANTU14
25: BANTU15
26: BANTU16
27: BANTU17
28: BANTU18
29: BANTU19
30: BANTU20
31: BANTU21
32: BANTU22
33: BANTU23
- name: Information_Structure
sequence:
class_label:
names:
0: TOP
1: FOC
- name: Interrogativity
sequence:
class_label:
names:
0: DECL
1: INT
- name: Language_Specific
sequence:
class_label:
names:
0: LGSPEC1
1: LGSPEC2
2: LGSPEC3
3: LGSPEC4
4: LGSPEC5
5: LGSPEC6
6: LGSPEC7
7: LGSPEC8
8: LGSPEC9
9: LGSPEC10
- name: Mood
sequence:
class_label:
names:
0: IND
1: SBJV
2: REAL
3: IRR
4: AUPRP
5: AUNPRP
6: IMP
7: COND
8: PURP
9: INTEN
10: POT
11: LKLY
12: ADM
13: OBLIG
14: DEB
15: PERM
16: DED
17: SIM
18: OPT
- name: Number
sequence:
class_label:
names:
0: SG
1: PL
2: GRPL
3: DU
4: TRI
5: PAUC
6: GRPAUC
7: INVN
- name: Part_Of_Speech
sequence:
class_label:
names:
0: N
1: PROPN
2: ADJ
3: PRO
4: CLF
5: ART
6: DET
7: V
8: ADV
9: AUX
10: V.PTCP
11: V.MSDR
12: V.CVB
13: ADP
14: COMP
15: CONJ
16: NUM
17: PART
18: INTJ
- name: Person
sequence:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: INCL
6: EXCL
7: PRX
8: OBV
- name: Polarity
sequence:
class_label:
names:
0: POS
1: NEG
- name: Politeness
sequence:
class_label:
names:
0: INFM
1: FORM
2: ELEV
3: HUMB
4: POL
5: AVOID
6: LOW
7: HIGH
8: STELEV
9: STSUPR
10: LIT
11: FOREG
12: COL
- name: Possession
sequence:
class_label:
names:
0: ALN
1: NALN
2: PSS1S
3: PSS2S
4: PSS2SF
5: PSS2SM
6: PSS2SINFM
7: PSS2SFORM
8: PSS3S
9: PSS3SF
10: PSS3SM
11: PSS1D
12: PSS1DI
13: PSS1DE
14: PSS2D
15: PSS2DM
16: PSS2DF
17: PSS3D
18: PSS3DF
19: PSS3DM
20: PSS1P
21: PSS1PI
22: PSS1PE
23: PSS2P
24: PSS2PF
25: PSS2PM
26: PSS3PF
27: PSS3PM
- name: Switch_Reference
sequence:
class_label:
names:
0: SS
1: SSADV
2: DS
3: DSADV
4: OR
5: SIMMA
6: SEQMA
7: LOG
- name: Tense
sequence:
class_label:
names:
0: PRS
1: PST
2: FUT
3: IMMED
4: HOD
5: 1DAY
6: RCT
7: RMT
- name: Valency
sequence:
class_label:
names:
0: IMPRS
1: INTR
2: TR
3: DITR
4: REFL
5: RECP
6: CAUS
7: APPL
- name: Voice
sequence:
class_label:
names:
0: ACT
1: MID
2: PASS
3: ANTIP
4: DIR
5: INV
6: AGFOC
7: PFOC
8: LFOC
9: BFOC
10: ACFOC
11: IFOC
12: CFOC
- name: Other
sequence: string
splits:
- name: train
num_bytes: 7152507
num_examples: 566
download_size: 1581402
dataset_size: 7152507
config_names:
- ady
- ang
- ara
- arn
- ast
- aze
- bak
- bel
- ben
- bod
- bre
- bul
- cat
- ces
- chu
- ckb
- cor
- crh
- csb
- cym
- dan
- deu
- dsb
- ell
- eng
- est
- eus
- fao
- fas
- fin
- fra
- frm
- fro
- frr
- fry
- fur
- gal
- gla
- gle
- glv
- gmh
- gml
- got
- grc
- hai
- hbs
- heb
- hin
- hun
- hye
- isl
- ita
- izh
- kal
- kan
- kat
- kaz
- kbd
- kjh
- klr
- kmr
- krl
- lat
- lav
- lit
- liv
- lld
- lud
- mkd
- mlt
- mwf
- nap
- nav
- nds
- nld
- nno
- nob
- oci
- olo
- osx
- pol
- por
- pus
- que
- ron
- rus
- san
- sga
- slv
- sme
- spa
- sqi
- swc
- swe
- syc
- tat
- tel
- tgk
- tuk
- tur
- ukr
- urd
- uzb
- vec
- vep
- vot
- xcl
- xno
- yid
- zul
---
# Dataset Card for UniMorph
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [UniMorph Homepage](https://unimorph.github.io/)
- **Repository:** [List of UniMorph repositories](https://github.com/unimorph)
- **Paper:** [The Composition and Use of the Universal Morphological Feature Schema (UniMorph Schema)](https://unimorph.github.io/doc/unimorph-schema.pdf)
- **Point of Contact:** [Arya McCarthy](mailto:arya@jhu.edu)
### Dataset Summary
The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages.
The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning,
typically carried by the lemma, and by a rendering of its inflectional form in terms of a bundle of morphological features from our schema.
The specification of the schema is described in Sylak-Glassman (2016).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The current version of the UniMorph dataset covers 110 languages.
## Dataset Structure
### Data Instances
Each data instance comprises a lemma and a set of possible realizations with morphological and meaning annotations. For example:
```
{'forms': {'Aktionsart': [[], [], [], [], []],
'Animacy': [[], [], [], [], []],
...
'Finiteness': [[], [], [], [1], []],
...
'Number': [[], [], [0], [], []],
'Other': [[], [], [], [], []],
'Part_Of_Speech': [[7], [10], [7], [7], [10]],
...
'Tense': [[1], [1], [0], [], [0]],
...
'word': ['ablated', 'ablated', 'ablates', 'ablate', 'ablating']},
'lemma': 'ablate'}
```
### Data Fields
Each instance in the dataset has the following fields:
- `lemma`: the common lemma shared by all forms
- `forms`: all annotated forms for this lemma, with:
- `word`: the full word form
- [`category`]: a categorical variable denoting one or several tags in a category (several to represent composite tags, originally denoted with `A+B`). The full list of categories and possible tags for each can be found [here](https://github.com/unimorph/unimorph.github.io/blob/master/unimorph-schema-json/dimensions-to-features.json)
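The `forms` field stores categories as parallel lists (one entry per word form). As a minimal sketch of how this columnar layout can be pivoted into one record per form — using toy values that mimic the example above, not the real dataset:

```python
# Toy instance mimicking the schema above (values are illustrative, not real data).
instance = {
    "lemma": "ablate",
    "forms": {
        "word": ["ablated", "ablates", "ablating"],
        "Tense": [[1], [0], [0]],
        "Part_Of_Speech": [[7], [7], [10]],
    },
}

def pivot_forms(instance):
    """Turn the columnar `forms` dict into one dict per word form."""
    forms = instance["forms"]
    records = []
    for i in range(len(forms["word"])):
        rec = {category: values[i] for category, values in forms.items()}
        rec["lemma"] = instance["lemma"]
        records.append(rec)
    return records

records = pivot_forms(instance)
print(records[0])
# {'word': 'ablated', 'Tense': [1], 'Part_Of_Speech': [7], 'lemma': 'ablate'}
```

Empty tag lists (`[]`) in the real data simply mean the category does not apply to that form.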
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
mteb/amazon_polarity | 2022-09-27T19:11:44.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 824 | ---
language:
- en
--- |
codah | 2023-01-25T14:28:20.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense. | @inproceedings{chen2019codah,
title={CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense},
author={Chen, Michael and D'Arcy, Mike and Liu, Alisa and Fernandez, Jared and Downey, Doug},
booktitle={Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP},
pages={63--69},
year={2019}
} | null | 4 | 822 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: codah
pretty_name: COmmonsense Dataset Adversarially-authored by Humans
dataset_info:
- config_name: codah
features:
- name: id
dtype: int32
- name: question_category
dtype:
class_label:
names:
'0': Idioms
'1': Reference
'2': Polysemy
'3': Negation
'4': Quantitative
'5': Others
- name: question_propmt
dtype: string
- name: candidate_answers
sequence: string
- name: correct_answer_idx
dtype: int32
splits:
- name: train
num_bytes: 571208
num_examples: 2776
download_size: 485130
dataset_size: 571208
- config_name: fold_0
features:
- name: id
dtype: int32
- name: question_category
dtype:
class_label:
names:
'0': Idioms
'1': Reference
'2': Polysemy
'3': Negation
'4': Quantitative
'5': Others
- name: question_propmt
dtype: string
- name: candidate_answers
sequence: string
- name: correct_answer_idx
dtype: int32
splits:
- name: train
num_bytes: 344912
num_examples: 1665
- name: validation
num_bytes: 114211
num_examples: 556
- name: test
num_bytes: 112109
num_examples: 555
download_size: 485130
dataset_size: 571232
- config_name: fold_1
features:
- name: id
dtype: int32
- name: question_category
dtype:
class_label:
names:
'0': Idioms
'1': Reference
'2': Polysemy
'3': Negation
'4': Quantitative
'5': Others
- name: question_propmt
dtype: string
- name: candidate_answers
sequence: string
- name: correct_answer_idx
dtype: int32
splits:
- name: train
num_bytes: 340990
num_examples: 1665
- name: validation
num_bytes: 114211
num_examples: 556
- name: test
num_bytes: 116031
num_examples: 555
download_size: 485130
dataset_size: 571232
- config_name: fold_2
features:
- name: id
dtype: int32
- name: question_category
dtype:
class_label:
names:
'0': Idioms
'1': Reference
'2': Polysemy
'3': Negation
'4': Quantitative
'5': Others
- name: question_propmt
dtype: string
- name: candidate_answers
sequence: string
- name: correct_answer_idx
dtype: int32
splits:
- name: train
num_bytes: 342293
num_examples: 1665
- name: validation
num_bytes: 114211
num_examples: 556
- name: test
num_bytes: 114728
num_examples: 555
download_size: 485130
dataset_size: 571232
- config_name: fold_3
features:
- name: id
dtype: int32
- name: question_category
dtype:
class_label:
names:
'0': Idioms
'1': Reference
'2': Polysemy
'3': Negation
'4': Quantitative
'5': Others
- name: question_propmt
dtype: string
- name: candidate_answers
sequence: string
- name: correct_answer_idx
dtype: int32
splits:
- name: train
num_bytes: 342844
num_examples: 1665
- name: validation
num_bytes: 114211
num_examples: 556
- name: test
num_bytes: 114177
num_examples: 555
download_size: 485130
dataset_size: 571232
- config_name: fold_4
features:
- name: id
dtype: int32
- name: question_category
dtype:
class_label:
names:
'0': Idioms
'1': Reference
'2': Polysemy
'3': Negation
'4': Quantitative
'5': Others
- name: question_propmt
dtype: string
- name: candidate_answers
sequence: string
- name: correct_answer_idx
dtype: int32
splits:
- name: train
num_bytes: 342844
num_examples: 1665
- name: validation
num_bytes: 114177
num_examples: 555
- name: test
num_bytes: 114211
num_examples: 556
download_size: 485130
dataset_size: 571232
---
# Dataset Card for COmmonsense Dataset Adversarially-authored by Humans
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
mlsum | 2023-06-01T14:59:54.000Z | [
"task_categories:summarization",
"task_categories:translation",
"task_categories:text-classification",
"task_ids:news-articles-summarization",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:extended|cnn_dailymail",
"source_datasets:original",
"language:de",
"language:es",
"language:fr",
"language:ru",
"language:tr",
"license:other",
"region:us"
] | null | We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multi-lingual dataset. | @article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2004.14900},
year={2020}
} | null | 24 | 822 | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- es
- fr
- ru
- tr
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|cnn_dailymail
- original
task_categories:
- summarization
- translation
- text-classification
task_ids:
- news-articles-summarization
- multi-class-classification
- multi-label-classification
- topic-classification
paperswithcode_id: mlsum
pretty_name: MLSUM
dataset_info:
- config_name: de
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 846959840
num_examples: 220887
- name: validation
num_bytes: 47119541
num_examples: 11394
- name: test
num_bytes: 46847612
num_examples: 10701
download_size: 1005814154
dataset_size: 940926993
- config_name: es
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1214558302
num_examples: 266367
- name: validation
num_bytes: 50643400
num_examples: 10358
- name: test
num_bytes: 71263665
num_examples: 13920
download_size: 1456211154
dataset_size: 1336465367
- config_name: fr
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1471965014
num_examples: 392902
- name: validation
num_bytes: 70413212
num_examples: 16059
- name: test
num_bytes: 69660288
num_examples: 15828
download_size: 1849565564
dataset_size: 1612038514
- config_name: ru
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 257389497
num_examples: 25556
- name: validation
num_bytes: 9128497
num_examples: 750
- name: test
num_bytes: 9656398
num_examples: 757
download_size: 766226107
dataset_size: 276174392
- config_name: tu
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 641622783
num_examples: 249277
- name: validation
num_bytes: 25530661
num_examples: 11565
- name: test
num_bytes: 27830212
num_examples: 12775
download_size: 942308960
dataset_size: 694983656
config_names:
- de
- es
- fr
- ru
- tu
---
# Dataset Card for MLSUM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** []()
- **Repository:** https://github.com/recitalAI/MLSUM
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.647/
- **Point of Contact:** [email](thomas@recital.ai)
- **Size of downloaded dataset files:** 1.83 GB
- **Size of the generated dataset:** 4.86 GB
- **Total amount of disk used:** 6.69 GB
### Dataset Summary
We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multi-lingual dataset.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### de
- **Size of downloaded dataset files:** 346.58 MB
- **Size of the generated dataset:** 940.93 MB
- **Total amount of disk used:** 1.29 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### es
- **Size of downloaded dataset files:** 513.31 MB
- **Size of the generated dataset:** 1.34 GB
- **Total amount of disk used:** 1.85 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### fr
- **Size of downloaded dataset files:** 619.99 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.23 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### ru
- **Size of downloaded dataset files:** 106.22 MB
- **Size of the generated dataset:** 276.17 MB
- **Total amount of disk used:** 382.39 MB
An example of 'train' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### tu
- **Size of downloaded dataset files:** 247.50 MB
- **Size of the generated dataset:** 694.99 MB
- **Total amount of disk used:** 942.48 MB
An example of 'train' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
### Data Fields
The data fields are the same among all splits.
#### de
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### es
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### fr
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### ru
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### tu
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
### Data Splits
|name|train |validation|test |
|----|-----:|---------:|----:|
|de |220887| 11394|10701|
|es |266367| 10358|13920|
|fr |392902| 16059|15828|
|ru | 25556| 750| 757|
|tu |249277| 11565|12775|
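Since every split shares the same flat schema, per-topic filtering and simple length statistics are straightforward. A hedged sketch on toy records that mimic the MLSUM fields (illustrative values only, not the real corpus):

```python
# Toy records mimicking the MLSUM schema (illustrative values only).
records = [
    {"text": "word " * 100, "summary": "word " * 10, "topic": "football",
     "url": "https://example.com/1", "title": "A", "date": "01/01/2001"},
    {"text": "word " * 200, "summary": "word " * 5, "topic": "economy",
     "url": "https://example.com/2", "title": "B", "date": "02/01/2001"},
]

def compression_ratio(rec):
    """Summary length divided by article length, in whitespace tokens."""
    return len(rec["summary"].split()) / len(rec["text"].split())

football = [r for r in records if r["topic"] == "football"]
ratios = [compression_ratio(r) for r in football]
print(ratios)  # [0.1]
```

The same pattern applies unchanged to each language configuration, since `de`, `es`, `fr`, `ru`, and `tu` all expose the identical six fields.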
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Usage of the dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders. See https://github.com/recitalAI/MLSUM#mlsum
### Citation Information
```
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2004.14900},
year={2020}
}
```
### Contributions
Thanks to [@RachelKer](https://github.com/RachelKer), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
roszcz/maestro-v1-sustain | 2023-04-23T13:35:49.000Z | [
"region:us"
] | roszcz | null | null | null | 0 | 818 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: composer
dtype: string
- name: title
dtype: string
- name: year
dtype: int64
- name: midi_filename
dtype: string
splits:
- name: test
num_bytes: 29686362
num_examples: 177
- name: validation
num_bytes: 25599834
num_examples: 137
- name: train
num_bytes: 226534277
num_examples: 962
download_size: 87287914
dataset_size: 281820473
---
# Dataset Card for "maestro-v1-sustain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_snnxor_n15_l1_10 | 2023-09-18T21:51:32.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 817 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
- name: test
num_bytes: 236440000
num_examples: 10000
download_size: 432260994
dataset_size: 709320000
---
# Dataset Card for "autotree_snnxor_n15_l1_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mattymchen/celeba-hq | 2023-04-26T05:56:53.000Z | [
"region:us"
] | mattymchen | null | null | null | 0 | 816 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 2731627350.0
num_examples: 28000
- name: validation
num_bytes: 197550788.0
num_examples: 2000
download_size: 2762109745
dataset_size: 2929178138.0
---
# Dataset Card for "celeba-hq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jeanlee/kmhas_korean_hate_speech | 2022-11-28T16:26:56.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"K-MHaS",
"Korean NLP",
"Hate Speech Detection",
"Dataset",
"Coling2022",
"arxiv:2208.10684",
"region:us"
] | jeanlee | The K-MHaS (Korean Multi-label Hate Speech) dataset contains 109k utterances from Korean online news comments labeled with 8 fine-grained hate speech classes or Not Hate Speech class.
The fine-grained hate speech classes are politics, origin, physical, age, gender, religion, race, and profanity and these categories are selected in order to reflect the social and historical context. | @inproceedings{lee-etal-2022-k,
title = "K-{MH}a{S}: A Multi-label Hate Speech Detection Dataset in {K}orean Online News Comment",
author = "Lee, Jean and
Lim, Taejun and
Lee, Heejun and
Jo, Bogeun and
Kim, Yangsok and
Yoon, Heegeun and
Han, Soyeon Caren",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.311",
pages = "3530--3538",
abstract = "Online hate speech detection has become an important issue due to the growth of online content, but resources in languages other than English are extremely limited. We introduce K-MHaS, a new multi-label dataset for hate speech detection that effectively handles Korean language patterns. The dataset consists of 109k utterances from news comments and provides a multi-label classification using 1 to 4 labels, and handles subjectivity and intersectionality. We evaluate strong baselines on K-MHaS. KR-BERT with a sub-character tokenizer outperforms others, recognizing decomposed characters in each hate speech class.",
} | null | 9 | 813 | ---
annotations_creators:
- crowdsourced
language:
- ko
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'K-MHaS'
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- K-MHaS
- Korean NLP
- Hate Speech Detection
- Dataset
- Coling2022
task_categories:
- text-classification
task_ids:
- multi-label-classification
- hate-speech-detection
paperswithcode_id: korean-multi-label-hate-speech-dataset
dataset_info:
features:
- name: text
dtype: string
- name: label
sequence:
class_label:
names:
0: origin
1: physical
2: politics
3: profanity
4: age
5: gender
6: race
7: religion
8: not_hate_speech
splits:
- name: train
num_bytes: 6845463
num_examples: 78977
- name: validation
num_bytes: 748899
num_examples: 8776
- name: test
num_bytes: 1902352
num_examples: 21939
download_size: 9496714
  dataset_size: 9496714
---
# Dataset Card for K-MHaS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Sample Code
<a href="https://colab.research.google.com/drive/171KhS1_LVBtpAFd_kaT8lcrZmhcz5ehY?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="base"/></a>
## Dataset Description
- **Homepage:** [K-MHaS](https://github.com/adlnlp/K-MHaS)
- **Repository:** [Korean Multi-label Hate Speech Dataset](https://github.com/adlnlp/K-MHaS)
- **Paper:** [K-MHaS: A Multi-label Hate Speech Detection Dataset in Korean Online News Comment](https://arxiv.org/abs/2208.10684)
- **Point of Contact:** [Caren Han](caren.han@sydney.edu.au)
- **Sample code:** [Colab](https://colab.research.google.com/drive/171KhS1_LVBtpAFd_kaT8lcrZmhcz5ehY?usp=sharing)
### Dataset Summary
The Korean Multi-label Hate Speech Dataset, **K-MHaS**, consists of 109,692 utterances from Korean online news comments, labelled with 8 fine-grained hate speech classes (labels: `Politics`, `Origin`, `Physical`, `Age`, `Gender`, `Religion`, `Race`, `Profanity`) or the `Not Hate Speech` class. Each utterance carries from one to four labels, a scheme that handles Korean language patterns effectively. For more details, please refer to our paper about [**K-MHaS**](https://aclanthology.org/2022.coling-1.311), published at COLING 2022.
### Supported Tasks and Leaderboards
Hate Speech Detection
* `binary classification` (labels: `Hate Speech`, `Not Hate Speech`)
* `multi-label classification`: (labels: `Politics`, `Origin`, `Physical`, `Age`, `Gender`, `Religion`, `Race`, `Profanity`, `Not Hate Speech`)
For the multi-label classification, the `Hate Speech` class from the binary classification is broken down into eight classes associated with hate speech categories. The eight classes were selected to reflect the social and historical context. For example, the `Politics` class was chosen due to its significant influence on the style of Korean hate speech.
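As a minimal sketch (not part of the official loader), the binary-task labels can be derived from the multi-label annotation, assuming label id 8 denotes `Not Hate Speech` as documented in the Data Fields section:

```python
NOT_HATE = 8  # label id of the `Not Hate Speech` class (see Data Fields)

def to_binary(label_ids):
    """Collapse a K-MHaS multi-label annotation to the binary task:
    1 = Hate Speech (any of labels 0-7), 0 = Not Hate Speech."""
    return int(any(label != NOT_HATE for label in label_ids))

print(to_binary([2, 3, 4]))  # 1 (politics, profanity, age -> hate speech)
print(to_binary([8]))        # 0 (not hate speech)
```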
### Languages
Korean
## Dataset Structure
### Data Instances
The dataset is provided with train/validation/test sets in txt format. Each instance is a news comment labelled with one or more hate speech classes (labels: `Politics`, `Origin`, `Physical`, `Age`, `Gender`, `Religion`, `Race`, `Profanity`) or the `Not Hate Speech` class. The mapping between label numbers and class names, in both English and Korean, is given in the Data Fields section.
```python
{'text':'수꼴틀딱시키들이 다 디져야 나라가 똑바로 될것같다..답이 없는 종자들ㅠ'
'label': [2, 3, 4]
}
```
### Data Fields
* `text`: utterance from Korean online news comment.
* `label`: the label numbers map to the 8 fine-grained hate speech classes and the `Not Hate Speech` class as follows.
* `0`: `Origin`(`출신차별`) hate speech based on place of origin or identity;
* `1`: `Physical`(`외모차별`) hate speech based on physical appearance (e.g. body, face) or disability;
* `2`: `Politics`(`정치성향차별`) hate speech based on political stance;
* `3`: `Profanity`(`혐오욕설`) hate speech in the form of swearing, cursing, cussing, obscene words, or expletives; or an unspecified hate speech category;
* `4`: `Age`(`연령차별`) hate speech based on age;
* `5`: `Gender`(`성차별`) hate speech based on gender or sexual orientation (e.g. woman, homosexual);
* `6`: `Race`(`인종차별`) hate speech based on ethnicity;
* `7`: `Religion`(`종교차별`) hate speech based on religion;
* `8`: `Not Hate Speech`(`해당사항없음`).
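For multi-label training, the label list of each instance can be converted into a 9-dimensional multi-hot vector (a minimal sketch, not part of the dataset itself):

```python
NUM_CLASSES = 9  # 8 fine-grained hate speech classes + `Not Hate Speech`

def to_multi_hot(label_ids):
    """Turn a K-MHaS label list such as [2, 3, 4] into a multi-hot vector."""
    vec = [0] * NUM_CLASSES
    for i in label_ids:
        vec[i] = 1
    return vec

print(to_multi_hot([2, 3, 4]))  # [0, 0, 1, 1, 1, 0, 0, 0, 0]
```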
### Data Splits
In our repository, we provide split datasets with 78,977 (train) / 8,776 (validation) / 21,939 (test) samples, preserving the class proportions.
## Dataset Creation
### Curation Rationale
We propose K-MHaS, a large size Korean multi-label hate speech detection dataset that represents Korean language patterns effectively. Most datasets in hate speech research are annotated using a single label classification of particular aspects, even though the subjectivity of hate speech cannot be explained with a mutually exclusive annotation scheme. We propose a multi-label hate speech annotation scheme that allows overlapping labels associated with the subjectivity and the intersectionality of hate speech.
### Source Data
#### Initial Data Collection and Normalization
Our dataset is based on the Korean online news comments available on Kaggle and Github. The unlabeled raw data was collected between January 2018 and June 2020. Please see the details in our paper [K-MHaS](https://aclanthology.org/2022.coling-1.311) published at COLING 2022.
#### Who are the source language producers?
The language producers are users who left the comments on the Korean online news platform between 2018 and 2020.
### Annotations
#### Annotation process
We begin with the common categories of hate speech found in the literature and match keywords for each category. After the preliminary round, we investigate the results to merge or remove labels in order to provide the most representative subtype labels of hate speech, contextual to the cultural background. Our annotation instructions explain a two-layered annotation: (a) distinguishing hate from not-hate speech, and (b) identifying the categories of hate speech. Annotators are requested to consider the given keywords or alternatives for each category within social, cultural, and historical circumstances. For more details, please refer to the paper [K-MHaS](https://aclanthology.org/2022.coling-1.311).
#### Who are the annotators?
Five native speakers were recruited for manual annotation in both the preliminary and main rounds.
### Personal and Sensitive Information
This dataset contains examples of hateful language; however, it contains no personal information.
## Considerations for Using the Data
### Social Impact of Dataset
We propose K-MHaS, a new large-sized dataset for Korean hate speech detection with a multi-label annotation scheme. We provide extensive baseline experiment results, demonstrating the usability of the dataset for detecting Korean language patterns in hate speech.
### Discussion of Biases
All annotators were recruited from a crowdsourcing platform. They were informed about hate speech before handling the data. Our instructions allowed them to feel free to leave if they were uncomfortable with the content. With respect to potential risks, we note that the subjectivity of human annotation would impact the quality of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is curated by Taejun Lim, Heejun Lee and Bogeun Jo.
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 International (cc-by-sa-4.0).
### Citation Information
```
@inproceedings{lee-etal-2022-k,
title = "K-{MH}a{S}: A Multi-label Hate Speech Detection Dataset in {K}orean Online News Comment",
author = "Lee, Jean and
Lim, Taejun and
Lee, Heejun and
Jo, Bogeun and
Kim, Yangsok and
Yoon, Heegeun and
Han, Soyeon Caren",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.311",
pages = "3530--3538",
abstract = "Online hate speech detection has become an important issue due to the growth of online content, but resources in languages other than English are extremely limited. We introduce K-MHaS, a new multi-label dataset for hate speech detection that effectively handles Korean language patterns. The dataset consists of 109k utterances from news comments and provides a multi-label classification using 1 to 4 labels, and handles subjectivity and intersectionality. We evaluate strong baselines on K-MHaS. KR-BERT with a sub-character tokenizer outperforms others, recognizing decomposed characters in each hate speech class.",
}
```
### Contributions
The contributors of the work are:
- [Jean Lee](https://jeanlee-ai.github.io/) (The University of Sydney)
- [Taejun Lim](https://github.com/taezun) (The University of Sydney)
- [Heejun Lee](https://bigwaveai.com/) (BigWave AI)
- [Bogeun Jo](https://bigwaveai.com/) (BigWave AI)
- Yangsok Kim (Keimyung University)
- Heegeun Yoon (National Information Society Agency)
- [Soyeon Caren Han](https://drcarenhan.github.io/) (The University of Western Australia and The University of Sydney)
|
Jackmin108/c4-en-validation | 2023-08-18T22:00:10.000Z | [
"region:us"
] | Jackmin108 | null | null | null | 0 | 809 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: validation
num_bytes: 825766822
num_examples: 364608
download_size: 509605306
dataset_size: 825766822
---
# Dataset Card for "c4-en-validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tevatron/msmarco-passage | 2023-07-18T07:34:33.000Z | [
"region:us"
] | Tevatron | null | @misc{bajaj2018ms,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu
and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song
and Alina Stoica and Saurabh Tiwary and Tong Wang},
year={2018},
eprint={1611.09268},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 3 | 808 | Entry not found |
kumapo/JAQKET | 2023-10-09T06:44:28.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"language:ja",
"license:cc-by-sa-4.0",
"region:us"
] | kumapo | JAQKET: JApanese Questions on Knowledge of EnTities | @InProceedings{Kurihara_nlp2020,
author = "鈴木正敏 and 鈴木潤 and 松田耕史 and ⻄田京介 and 井之上直也",
title = "JAQKET: クイズを題材にした日本語 QA データセットの構築",
booktitle = "言語処理学会第26回年次大会",
year = "2020",
url = "https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P2-24.pdf"
note= "in Japanese" | null | 0 | 804 | ---
license: cc-by-sa-4.0
task_categories:
- multiple-choice
- question-answering
language:
- ja
---
# Dataset Card for JAQKET
This dataset loading script is developed on [GitHub](https://github.com/kumapo/JAQKET-dataset).
Please feel free to open an [issue](https://github.com/kumapo/JAQKET-dataset/issues) or [pull request](https://github.com/kumapo/JAQKET-dataset/pulls).
## Dataset Description
- **Homepage:** https://sites.google.com/view/project-aio/dataset
- **Repository:** https://github.com/kumapo/JAQKET-dataset
### Dataset Summary
From [the original paper](https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P2-24.pdf):
> In this work, we construct JAQKET, a Japanese open-domain QA dataset that researchers can easily use, with the goal of promoting question answering / machine reading comprehension research in Japan.
> Following existing work [7], the constructed dataset is a Japanese open-domain QA dataset whose answers are Wikipedia article titles.
### Supported Tasks
#### JAQKET v1.0
From [the original paper](https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P2-24.pdf):
> We define the Japanese open-domain QA task addressed in this work: given a quiz question and multiple answer candidates (from a few to a few dozen), the system must select the single correct answer from the candidates.
#### JAQKET v2.0
From [the homepage](https://sites.google.com/view/project-aio/competition2):
> Only the quiz question text is given as input. Participants build a system that returns the answer string for that question.
### Languages
The language data in JAQKET is in Japanese.
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to pass a version-dependent `name` argument:
#### JAQKET v1.0
```python
from datasets import load_dataset
dataset = load_dataset("kumapo/JAQKET", name="v1.0")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['qid', 'question', 'answer_entity', 'label', 'answer_candidates', 'contexts'],
# num_rows: 13061
# })
# validation: Dataset({
# features: ['qid', 'question', 'answer_entity', 'label', 'answer_candidates', 'contexts'],
# num_rows: 271
# })
# })
```
An example of the JAQKET v1.0 dataset looks as follows:
```json
{
"qid": "QA20QBIK-0002",
"question": "童謡『たなばたさま』の歌詞で、「さらさら」と歌われる植物は何の葉?",
"answer_entity": "ササ",
"answer_candidates": [
"ササ",
"チシマザサ",
"クマザサ",
"アダン",
"チガヤ",
"アセビ",
"ススキ",
"ホオノキ",
"マテバシイ",
"ヤマフジ",
"ウツギ",
"タムシバ",
"ミズキ",
"アキタブキ",
"トベラ",
"クヌギ",
"ネズミモチ",
"ヒシ",
"コブシ",
"オオウバユリ"
],
"qtype": "なに〜"
}
```
```json
{
"qid": "QA20QBIK-0026",
"question": "北海道の中心に位置することから「北海道のへそ」と名乗る、ラベンダーで有名な都市はどこ?",
"answer_entity": "富良野市",
"answer_candidates": [
"富良野市",
"滝川市",
"北見市",
"芦別市",
"中富良野町",
"名寄市",
"網走市",
"美瑛町",
"南富良野町",
"岩見沢市",
"美唄市",
"上富良野町",
"倶知安町",
"小樽市",
"歌志内市",
"旭川市",
"ニセコ町",
"北斗市",
"稚内市",
"帯広市"
],
"qtype": "どこ"
}
```
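For the v1.0 multiple-choice formulation, the gold label index can be derived from the candidate list (a sketch; field names follow the sample instances above, with the candidate list truncated for brevity):

```python
example = {
    "question": "北海道の中心に位置することから「北海道のへそ」と名乗る、ラベンダーで有名な都市はどこ?",
    "answer_entity": "富良野市",
    "answer_candidates": ["富良野市", "滝川市", "北見市"],  # truncated for brevity
}

# The multiple-choice classification target is the index of the
# answer entity within the candidate list.
label = example["answer_candidates"].index(example["answer_entity"])
print(label)  # 0
```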
#### JAQKET v2.0
```python
from datasets import load_dataset
dataset = load_dataset("kumapo/JAQKET", name="v2.0")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['qid', 'question', 'answers', 'ctxs'],
# num_rows: 2154
# })
# validation: Dataset({
# features: ['qid', 'question', 'answers', 'ctxs'],
# num_rows: 1164
# })
# })
```
An example of the JAQKET v2.0 dataset looks as follows:
```json
{
"qid": "QA20QBIK-0002",
"competition": "第1回AI王",
"timestamp": "2020/01/27",
"section": "開発データ問題 (dev1)",
"number": "2",
"original_question": "童謡『たなばたさま』の歌詞で、「さらさら」と歌われる植物は何の葉?",
"original_answer": "ササ",
"original_additional_info": "",
"question": "童謡『たなばたさま』の歌詞で、「さらさら」と歌われる植物は何の葉?",
"answers" :["ササ"]
}
```
## Additional Information
### Citation Information
```bibtex
@InProceedings{Kurihara_nlp2020,
author = "鈴木正敏 and 鈴木潤 and 松田耕史 and ⻄田京介 and 井之上直也",
title = "JAQKET: クイズを題材にした日本語 QA データセットの構築",
booktitle = "言語処理学会第26回年次大会",
year = "2020",
url = "https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P2-24.pdf"
note= "in Japanese"}
```
|
nlphuji/winogavil | 2022-11-26T19:56:27.000Z | [
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"commonsense-reasoning",
"visual-reasoning",
"arxiv:2207.12576",
"region:us"
] | nlphuji | WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fits the association. This dataset was collected via the WinoGAViL online game to collect vision-and-language associations, (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. | @article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
} | null | 0 | 803 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: winogavil
pretty_name: WinoGAViL
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- commonsense-reasoning
- visual-reasoning
task_ids: []
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."
---
# Dataset Card for WinoGAViL
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Colab notebook code for Winogavil evaluation with CLIP](#colab-notebook-code-for-winogavil-evaluation-with-clip)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. This dataset was collected via the WinoGAViL online game, which gathers vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that the associations are intuitive for humans (>90% Jaccard index) but challenging for the models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collected from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
- **Homepage:**
https://winogavil.github.io/
- **Colab**
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
- **Repository:**
https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:**
https://arxiv.org/abs/2207.12576
- **Leaderboard:**
https://winogavil.github.io/leaderboard
- **Point of Contact:**
winogavil@gmail.com; yonatanbitton1@gmail.com
### Supported Tasks and Leaderboards
https://winogavil.github.io/leaderboard.
https://paperswithcode.com/dataset/winogavil.
## Colab notebook code for Winogavil evaluation with CLIP
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
### Languages
English.
## Dataset Structure
### Data Fields
candidates (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - list of image candidates.
cue (string): pogonophile - the generated cue.
associations (string): ["bison", "beard", "shave"] - the images associated with the cue selected by the user.
score_fool_the_ai (int64): 80 - the spymaster score (100 - model score) for fooling the AI, with CLIP RN50 model.
num_associations (int64): 3 - The number of images selected as associative with the cue.
num_candidates (int64): 6 - the number of total candidates.
solvers_jaccard_mean (float64): 1.0 - the average of three solvers' scores on the generated association instance.
solvers_jaccard_std (float64): 1.0 - the standard deviation of three solvers' scores on the generated association instance.
ID (int64): 367 - association ID.
### Data Splits
There is a single TEST split. In the accompanying paper and code we sample it to create different training sets, but the intended use is to use WinoGAViL as a test set.
There are different numbers of candidates, which create different difficulty levels:
-- With 5 candidates, random model expected score is 38%.
-- With 6 candidates, random model expected score is 34%.
-- With 10 candidates, random model expected score is 24%.
-- With 12 candidates, random model expected score is 19%.
<details>
<summary>Why is the random-chance score with 5 candidates 38%?</summary>
It is a combinatorial probability calculation.
Assuming N=5 candidates and K=2 associations, there are three possible events:
(1) The probability that a random guess gets 0 associations correct is 0.3 (elaborated below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected contribution to the random score is 0.
(2) The probability that a random guess gets 1 association correct is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3: one correct guess and one wrong guess). Therefore the expected contribution is 0.6*0.33 = 0.198.
(3) The probability that a random guess gets 2 associations correct is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected contribution is 0.1*1 = 0.1.
* Together, when K=2, the expected score is 0+0.198+0.1 = 0.298.
To calculate (1), the first guess needs to be wrong: there are 3 "wrong" candidates out of 5, so the probability is 3/5. The next guess should also be wrong: now there are only 2 "wrong" candidates left out of 4, so the probability is 2/4. Multiplying, 3/5 * 2/4 = 0.3.
Same goes for (2) and (3).
Now we can perform the same calculation with K=3 associations.
Assuming N=5 candidates and K=3 associations, there are four possible events:
(4) The probability that a random guess gets 0 associations correct is 0, and the Jaccard index is 0. Therefore the expected contribution is 0.
(5) The probability that a random guess gets 1 association correct is 0.3, and the Jaccard index is 0.2 (intersection=1, union=5). Therefore the expected contribution is 0.3*0.2 = 0.06.
(6) The probability that a random guess gets 2 associations correct is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected contribution is 0.6*0.5 = 0.3.
(7) The probability that a random guess gets 3 associations correct is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected contribution is 0.1*1 = 0.1.
* Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46.
Taking the average of 0.298 and 0.46, we reach 0.379.
The same process can be repeated with 6 candidates (and K=2,3,4), 10 candidates (and K=2,3,4,5) and 12 candidates (and K=2,3,4,5,6).
</details>
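The combinatorial argument above can be checked by brute-force enumeration over all possible guesses (a sketch; exact enumeration gives 0.30 for K=2, since the 0.298 above rounds 1/3 to 0.33, and 0.46 for K=3, averaging to the 38% quoted earlier):

```python
from itertools import combinations

def expected_jaccard(n_candidates, n_assoc):
    """Expected Jaccard index of a uniformly random guess of n_assoc
    items out of n_candidates, scored against n_assoc gold items."""
    gold = set(range(n_assoc))
    guesses = list(combinations(range(n_candidates), n_assoc))
    return sum(len(set(g) & gold) / len(set(g) | gold) for g in guesses) / len(guesses)

k2 = expected_jaccard(5, 2)  # exactly 0.30
k3 = expected_jaccard(5, 3)  # 0.46
print(round((k2 + k3) / 2, 2))  # 0.38 -> the 38% random baseline for 5 candidates
```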
## Dataset Creation
Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating
associations that are challenging for a rival AI model but still solvable by other
human players.
### Annotations
#### Annotation process
We paid Amazon Mechanical Turk Workers to play our game.
## Considerations for Using the Data
All associations were obtained with human annotators.
### Licensing Information
CC-By 4.0
### Citation Information
@article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
}
|
shibing624/medical | 2023-06-02T07:03:41.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"language:en",
"license:apache-2.0",
"text-generation",
"region:us"
] | shibing624 | Plain-text data: a Chinese medical dataset containing encyclopedia data for pretraining, instruction fine-tuning data, and reward-model data. | null | null | 134 | 803 | ---
license: apache-2.0
language:
- zh
- en
tags:
- text-generation
pretty_name: medical
task_categories:
- text-generation
size_categories:
- 1M<n<10M
---
# Dataset Card for medical
A Chinese medical dataset.
- LLM Supervised Finetuning repository: https://github.com/shibing624/textgen
- MeidcalGPT repository: https://github.com/shibing624/MedicalGPT
## Dataset Description
medical is a Chinese medical dataset that can be used for training large models in the medical domain.
```
tree medical
|-- finetune # supervised fine-tuning data, usable for SFT and RLHF
| |-- test_en_1.json
| |-- test_zh_0.json
| |-- train_en_1.json
| |-- train_zh_0.json
| |-- valid_en_1.json
| `-- valid_zh_0.json
|-- medical.py # loading script for the HF dataset viewer
|-- pretrain # continued-pretraining data
| |-- medical_book_zh.json
| |-- test_encyclopedia.json
| |-- train_encyclopedia.json
| `-- valid_encyclopedia.json
|-- README.md
`-- reward # reward-model data
|-- test.json
|-- train.json
`-- valid.json
```
### Original Dataset Summary
#### pretrain
- train_encyclopedia.json: 360k examples in total, from the medical encyclopedia QA data [FreedomIntelligence/huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa); questions and answers are concatenated into a fluent `text` field, used to inject medical knowledge during continued pretraining.
- medical_book_zh.json: 8,475 examples of text from medical textbooks, source: https://github.com/jind11/MedQA, original dataset: [google drive](https://drive.google.com/u/0/uc?export=download&confirm=t&id=1ImYUSLk9JbgHXOemfvyiDiirluZHPeQw); long paragraphs were only split into smaller chunks of at most 2,048 characters.
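The chunking of long textbook passages into pieces of at most 2,048 characters can be sketched as follows (an assumed implementation, not the authors' exact preprocessing code):

```python
def chunk_text(text, max_len=2048):
    """Split a long passage into consecutive chunks of at most max_len characters."""
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

parts = chunk_text("医" * 5000)
print([len(p) for p in parts])  # [2048, 2048, 904]
```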
#### finetune
- train_zh_0.json: 1.95M examples in total, from 1) the medical consultation data of six departments in the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data), 790k examples; 2) the online medical encyclopedia huatuo_encyclopedia_qa, 360k examples; 3) the medical knowledge graph huatuo_knowledge_graph_qa, 790k examples. The three parts are merged into 1.95M examples.
- train_en_1.json: 110k examples, from the English medical consultation dialogue data [Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor), merging the HealthCareMagic-100k and GenMedGPT-5k datasets.
#### reward
- train.json: 4,000 examples; the questions are 4,000 random questions from the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data); `response_chosen` comes from the doctors' replies in that dataset, and `response_rejected` comes from the replies of the BenTsao model [SCIR-HI/Huatuo-Llama-Med-Chinese](https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese).
### Supported Tasks and Leaderboards
Chinese medical dialogue models
The dataset designed for medical task training pretrained language models.
### Languages
The data are in Chinese.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
head pretrain/train_encyclopedia.json
```json
{"text": "怀孕后嘴巴很淡怎么办?有孕妇在怀孕之后,发现自己嘴巴比较淡,出现这种情况的原因其实也非常的复杂,首先和妊娠反应有直接的关系,这是一种正常的情况,另外有些孕妇平常不注意口腔卫生,舌苔比较厚或者自己有了一些消化系统方面的疾病,这就要求人们必须要及时的进行处理。女性在怀孕之后,身体就会出现一些明显的变化,首先人们月经会停止,另外也会有恶心、呕吐等一些妊娠反应,不过这些都是正常的。有些孕妇发现自己在怀孕之后,口味发生了很大的变化,嘴巴变得非常的淡。其实这也和激素变化有直接的关系,可能是妊娠反应所致,在怀孕期间,因为受到体内激素水平的变化,所以就会有肠胃系统的改变,人们可能会出现食欲不振,消化不良等症状表现,也有一些孕妇会发现自己嘴巴没有味道,会有口苦的症状,而这也是正常的孕期反应,人们在平常要多喝一些水,多吃一些清淡营养的食物慢慢就会改善。也有可能是舌苔过厚所致,孕妇嘴巴里面没有味道,很有可能是舌苔比较重、舌苔过厚导致的,这样就会影响到味蕾对味道的敏感度,不仅嘴巴里面没有味道,甚至有些孕妇在说话的时候也会有口臭,这就要求人们在每天早上漱口的时候,必须要用牙刷刷舌苔开始,不要清理的特别深,以免人们会有呕吐,慢慢习惯之后再往深一点的清洗,一般2到3天就会得到改善。嘴巴感到比较淡,其实也和脾胃虚寒有直接的关系,消化系统疾病,内分泌疾病,营养不良等,但有可能导致舌头部位因为味蕾的敏感度下降,产生口淡之感,患者会有食欲不振的表现,发现病症及时就诊治疗。"}
```
head finetune/train_zh_0.json
```json
{"instruction": "曲匹地尔片的用法用量", "input": "", "output": "注意:同种药品可由于不同的包装规格有不同的用法或用量。本文只供参考。如果不确定,请参看药品随带的说明书或向医生询问。口服。一次50~100mg(1-2片),3次/日,或遵医嘱。"}
```
head finetune/train_en_1.json
```json
{"instruction": "If you are a doctor, please answer the medical questions based on the patient's description.", "input": "Doctor, I have been experiencing sudden and frequent panic attacks. I don't know what to do.", "output": "Well, based on what you're telling me, it sounds like you may be suffering from panic disorder. The best course of action is to start with psychotherapy and mental health counseling. Additionally, we should conduct an electrocardiogram to make sure that there are no physical issues causing your panic attacks. We will also need to perform a depression screen and a toxicology screen to rule out any other underlying causes. Finally, I would recommend a comprehensive psychological and psychiatric evaluation and therapy to help manage your symptoms."}
```
head reward/train.json
```json
{"question": "孩子感统失调,学习不行怎么办?,孩子说话晚,走路晚,现在小学二年级,学习跟不上,理解力差,运动协调性差,家里很着急,不知怎么办。", "response_chosen": "病情分析:你好!孩子说话晚,走路也晚,很可能是大脑本身发育不好引起的发育迟缓。而五岁时所致的智力检查为临界范围,那就是说孩子的智商是有问题的,也应考虑与大脑发育不好有关。指导意见:人的大脑在头一年发育最快,可塑性最强,在头三年可塑性还是可以的,超过三岁再进行训练,效果就不怎么好了。建议再给孩子做一做智力测试,如果孩子的智商还是在临界范围,那就要考虑让孩子去特殊学校进行康复训练,而不是继续在普通小学就读,否则对孩子来说,就是强人所难了。希望自己的孩子能聪明,这是每个家长都会有的心愿,但如果孩子自身的条件就是不能跟上同龄孩子,那家长也要面对这个事实的,对吗?医生询问:", "response_rejected": "建议家长先带孩子去正规医院做全面检查以确定病因和病情严重程度;同时可以进行物理治疗、康复训练等辅助治疗方法。"}
```
### Data Fields
#### 预训练数据集 pretrain
Field descriptions:
- text: the text content
#### 指令微调数据集 finetune
Field descriptions:
- instruction: the instruction
- input: the question (may be empty)
- output: the answer
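A record with these three fields can be flattened into a single supervised-finetuning string. The sketch below uses an Alpaca-style template for illustration only; the dataset itself does not prescribe a prompt format, so the `### Response:` marker is an assumption.

```python
def format_example(example: dict) -> str:
    """Join instruction/input/output into one supervised-finetuning string.

    The template is an Alpaca-style assumption, not prescribed by the dataset.
    """
    prompt = example["instruction"]
    if example.get("input"):            # input may be empty
        prompt += "\n" + example["input"]
    return prompt + "\n\n### Response:\n" + example["output"]

record = {"instruction": "曲匹地尔片的用法用量", "input": "",
          "output": "口服。一次50~100mg(1-2片),3次/日,或遵医嘱。"}
print(format_example(record))
```

The same function works for both the Chinese and English finetune files, since they share the instruction/input/output schema.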
#### 奖励模型数据集 reward
Field descriptions:
- question: the question
- response_chosen: the preferred, higher-quality answer
- response_rejected: the rejected, lower-quality answer
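Pairwise reward-model trainers typically consume such rows as (prompt, chosen, rejected) triples. A minimal conversion with a sanity check might look like this; the triple layout is an assumption about the downstream trainer, not part of the dataset:

```python
def to_preference_pair(row: dict):
    """Convert one reward row into a (prompt, chosen, rejected) triple,
    rejecting rows that cannot express a preference."""
    q = row["question"].strip()
    good = row["response_chosen"].strip()
    bad = row["response_rejected"].strip()
    if not q or not good or not bad:
        raise ValueError("empty field")
    if good == bad:
        raise ValueError("chosen and rejected answers are identical")
    return q, good, bad

triple = to_preference_pair({
    "question": "孩子感统失调,学习不行怎么办?",
    "response_chosen": "病情分析:……(详细的分析与指导意见)",
    "response_rejected": "建议家长先带孩子去正规医院做全面检查。",
})
print(triple[0])
```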
### Data Splits
```
> wc -l medical/*/*
500 medical/finetune/test_en_1.json
500 medical/finetune/test_zh_0.json
116617 medical/finetune/train_en_1.json
1949972 medical/finetune/train_zh_0.json
500 medical/finetune/valid_en_1.json
500 medical/finetune/valid_zh_0.json
8475 medical/pretrain/medical_book_zh.json
500 medical/pretrain/test_encyclopedia.json
361420 medical/pretrain/train_encyclopedia.json
500 medical/pretrain/valid_encyclopedia.json
100 medical/reward/test.json
3800 medical/reward/train.json
100 medical/reward/valid.json
2443484 total
```
### Licensing Information
The dataset is available under the Apache 2.0 license.
### Citation Information
- https://github.com/Toyhom/Chinese-medical-dialogue-data
- https://github.com/FreedomIntelligence/Huatuo-26M/blob/main/README_zh-CN.md
- https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa
- https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa
- https://github.com/Kent0n-Li/ChatDoctor
Also attached are several high-quality reward model datasets:
- https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise
- https://huggingface.co/datasets/sunzeyeah/chinese_chatgpt_corpus
- https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12
- https://huggingface.co/datasets/Dahoas/rm-static
### Contributions
Curated and uploaded by [shibing624](https://github.com/shibing624).
|
DFKI-SLT/few-nerd | 2023-06-21T09:59:09.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-4.0",
"structure-prediction",
"region:us"
] | DFKI-SLT | Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset,
which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities
and 4,601,223 tokens. Three benchmark tasks are built, one is supervised: Few-NERD (SUP) and the
other two are few-shot: Few-NERD (INTRA) and Few-NERD (INTER). | @inproceedings{ding2021few,
title={Few-NERD: A Few-Shot Named Entity Recognition Dataset},
author={Ding, Ning and Xu, Guangwei and Chen, Yulin and Wang, Xiaobin and Han, Xu and Xie,
Pengjun and Zheng, Hai-Tao and Liu, Zhiyuan},
booktitle={ACL-IJCNLP},
year={2021}
} | null | 12 | 802 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: few-nerd
pretty_name: Few-NERD
tags:
- structure-prediction
---
# Dataset Card for "Few-NERD"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ningding97.github.io/fewnerd/](https://ningding97.github.io/fewnerd/)
- **Repository:** [https://github.com/thunlp/Few-NERD](https://github.com/thunlp/Few-NERD)
- **Paper:** [https://aclanthology.org/2021.acl-long.248/](https://aclanthology.org/2021.acl-long.248/)
- **Point of Contact:** See [https://ningding97.github.io/fewnerd/](https://ningding97.github.io/fewnerd/)
### Dataset Summary
This script is for loading the Few-NERD dataset from https://ningding97.github.io/fewnerd/.
Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities, and 4,601,223 tokens. Three benchmark tasks are built on it: one supervised (Few-NERD (SUP)) and two few-shot (Few-NERD (INTRA) and Few-NERD (INTER)).
NER tags use the `IO` tagging scheme. The original data uses a 2-column CoNLL-style format, with empty lines to separate sentences. DOCSTART information is not provided since the sentences are randomly ordered.
For more details see https://ningding97.github.io/fewnerd/ and https://aclanthology.org/2021.acl-long.248/.
### Supported Tasks and Leaderboards
- **Tasks:** Named Entity Recognition, Few-shot NER
- **Leaderboards:**
- https://ningding97.github.io/fewnerd/
- named-entity-recognition:https://paperswithcode.com/sota/named-entity-recognition-on-few-nerd-sup
- other-few-shot-ner:https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-intra
- other-few-shot-ner:https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-inter
### Languages
English
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:**
- `super`: 14.6 MB
- `intra`: 11.4 MB
- `inter`: 11.5 MB
- **Size of the generated dataset:**
- `super`: 116.9 MB
- `intra`: 106.2 MB
- `inter`: 106.2 MB
- **Total amount of disk used:** 366.8 MB
An example of 'train' looks as follows.
```json
{
'id': '1',
'tokens': ['It', 'starred', 'Hicks', "'s", 'wife', ',', 'Ellaline', 'Terriss', 'and', 'Edmund', 'Payne', '.'],
'ner_tags': [0, 0, 7, 0, 0, 0, 7, 7, 0, 7, 7, 0],
'fine_ner_tags': [0, 0, 51, 0, 0, 0, 50, 50, 0, 50, 50, 0]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `art` (1), `building` (2), `event` (3), `location` (4), `organization` (5), `other`(6), `person` (7), `product` (8)
- `fine_ner_tags`: a `list` of fine-grained classification labels, with possible values including `O` (0), `art-broadcastprogram` (1), `art-film` (2), ...
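Because the tags use the IO scheme (no B- prefix), entity spans are recovered by grouping consecutive identical non-`O` tags; adjacent same-type entities therefore cannot be separated, a known limitation of the format. A minimal decoding sketch using the coarse label list above:

```python
COARSE = ["O", "art", "building", "event", "location",
          "organization", "other", "person", "product"]

def io_spans(tokens, tag_ids):
    """Group consecutive identical non-O tag ids into (type, tokens) spans."""
    spans, start = [], None
    for i, t in enumerate(tag_ids + [0]):          # sentinel O closes a trailing span
        prev = tag_ids[i - 1] if i > 0 else 0
        if t != prev:
            if prev != 0:
                spans.append((COARSE[prev], tokens[start:i]))
            start = i if t != 0 else None
    return spans

# The sample instance from the card:
tokens = ['It', 'starred', 'Hicks', "'s", 'wife', ',',
          'Ellaline', 'Terriss', 'and', 'Edmund', 'Payne', '.']
tags   = [0, 0, 7, 0, 0, 0, 7, 7, 0, 7, 7, 0]
print(io_spans(tokens, tags))
# → [('person', ['Hicks']), ('person', ['Ellaline', 'Terriss']), ('person', ['Edmund', 'Payne'])]
```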
### Data Splits
| Task | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| SUP | 131767 | 18824 | 37648 |
| INTRA | 99519 | 19358 | 44059 |
| INTER | 130112 | 18817 | 14007 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@inproceedings{ding-etal-2021-nerd,
title = "Few-{NERD}: A Few-shot Named Entity Recognition Dataset",
author = "Ding, Ning and
Xu, Guangwei and
Chen, Yulin and
Wang, Xiaobin and
Han, Xu and
Xie, Pengjun and
Zheng, Haitao and
Liu, Zhiyuan",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.248",
doi = "10.18653/v1/2021.acl-long.248",
pages = "3198--3213",
}
```
### Contributions |
result-kand2-sdxl-wuerst-karlo/46328984 | 2023-09-14T18:58:10.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 795 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 209
num_examples: 10
download_size: 1390
dataset_size: 209
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "46328984"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edarchimbaud/perimeter-stocks | 2023-10-10T15:00:20.000Z | [
"region:us"
] | edarchimbaud | null | null | null | 0 | 793 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: security
dtype: string
- name: gics_sector
dtype: string
- name: gics_sub_industry
dtype: string
splits:
- name: train
num_bytes: 112249
num_examples: 1500
download_size: 43983
dataset_size: 112249
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "perimeter-stocks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sms_spam | 2023-01-25T14:44:29.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-nus-sms-corpus",
"language:en",
"license:unknown",
"region:us"
] | null | The SMS Spam Collection v.1 is a public set of SMS labeled messages that have been collected for mobile phone spam research.
It has one collection composed by 5,574 English, real and non-enconded messages, tagged according being legitimate (ham) or spam. | @inproceedings{Almeida2011SpamFiltering,
title={Contributions to the Study of SMS Spam Filtering: New Collection and Results},
author={Tiago A. Almeida and Jose Maria Gomez Hidalgo and Akebo Yamakami},
year={2011},
booktitle = "Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11)",
} | null | 12 | 788 | ---
annotations_creators:
- crowdsourced
- found
language_creators:
- crowdsourced
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-nus-sms-corpus
task_categories:
- text-classification
task_ids:
- intent-classification
paperswithcode_id: sms-spam-collection-data-set
pretty_name: SMS Spam Collection Data Set
dataset_info:
features:
- name: sms
dtype: string
- name: label
dtype:
class_label:
names:
'0': ham
'1': spam
config_name: plain_text
splits:
- name: train
num_bytes: 521756
num_examples: 5574
download_size: 203415
dataset_size: 521756
train-eval-index:
- config: plain_text
task: text-classification
task_id: binary_classification
splits:
train_split: train
col_mapping:
sms: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
- **Repository:**
- **Paper:** Almeida, T.A., Gomez Hidalgo, J.M., Yamakami, A. Contributions to the study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (ACM DOCENG'11), Mountain View, CA, USA, 2011.
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The SMS Spam Collection v.1 is a public set of SMS labeled messages that have been collected for mobile phone spam research.
It has one collection composed of 5,574 real, non-encoded English messages, tagged as legitimate (ham) or spam.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sms: the sms message
- label: indicates whether the sms message is ham or spam (ham means it is not spam)
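As a concrete illustration of the ham/spam classification task, the toy sketch below trains a Laplace-smoothed bag-of-words naive Bayes classifier on a handful of made-up messages; it is only a sketch and makes no claim about the dataset's published baselines.

```python
import math
from collections import Counter

def train_nb(samples):
    """samples: list of (sms, label) with label 0=ham, 1=spam.
    Returns per-class token counts and per-class document counts."""
    counts = {0: Counter(), 1: Counter()}
    totals = Counter()
    for sms, label in samples:
        counts[label].update(sms.lower().split())
        totals[label] += 1
    return counts, totals

def predict(counts, totals, sms):
    """Log-space naive Bayes with add-one (Laplace) smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for w in sms.lower().split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

data = [("ok lar joking wif u oni", 0),
        ("free entry win cash prize now", 1),
        ("are we meeting for lunch", 0),
        ("win a free prize call now", 1)]
counts, totals = train_nb(data)
print(predict(counts, totals, "free prize now"))   # → 1 (spam)
```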
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@inproceedings{Almeida2011SpamFiltering,
title={Contributions to the Study of SMS Spam Filtering: New Collection and Results},
author={Tiago A. Almeida and Jose Maria Gomez Hidalgo and Akebo Yamakami},
year={2011},
booktitle = "Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11)",
}
### Contributions
Thanks to [@czabo](https://github.com/czabo) for adding this dataset. |
liwu/MNBVC | 2023-10-09T01:24:55.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:zh",
"license:mit",
"region:us"
] | liwu | MNBVC: Massive Never-ending BT Vast Chinese corpus | \ | null | 256 | 788 | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for MNBVC
## Table of Contents
- [Dataset Card for MNBVC](#dataset-card-for-mnbvc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [数据集介绍](#数据集介绍)
- [数据子集](#数据子集)
- [数据格式](#数据格式)
- [文本数据](#文本数据)
- [问答数据](#问答数据)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### 数据集介绍
On 2023-01-01, the Liwu community — the oldest and most mysterious community on the Chinese internet (bar none) — solemnly announced:
Under the leadership of the wise and mighty Liwu mods, the community is determined to play to its strengths and help the open-source community maintain, over the long term, the largest Chinese internet corpus.
The MNBVC dataset on Hugging Face is being updated gradually; see [https://github.com/esbatmop/MNBVC](https://github.com/esbatmop/MNBVC) for additional data that has not yet been cleaned.
It can be loaded with the following script:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'law_judgement', split='train', streaming=True)
next(iter(dataset)) # get the first line
```
## 数据子集
The MNBVC dataset contains several subsets:
- `law_judgement`: text from legal documents.
- `gov_xuexiqiangguo`: text from Xuexi Qiangguo (学习强国).
- `gov_report`: text from government work reports.
- `co_ann_report`: corporate annual report text.
- `code_metadata`: code metadata.
- `qa_zhihu`: question-answer data from Zhihu.
- `qa_wikihow`: question-answer data from wikiHow.
- `qa_mfa`: question-answer data from the Ministry of Foreign Affairs.
- `news_peoples_daily`: text from the People's Daily.
- `wikipedia`: text from Wikipedia.
## 数据格式
MNBVC currently contains the following [types of data](https://wiki.mnbvc.org/doku.php/%E7%8E%B0%E6%9C%89%E8%AF%AD%E6%96%99%E6%A0%BC%E5%BC%8F):
### 文本数据
Text data is organized in the following format (the schema is expressed with `datasets` feature definitions, i.e. Python rather than literal JSON):
```python
{
"文件名": datasets.Value("string"),
"是否待查文件": datasets.Value("bool"),
"是否重复文件": datasets.Value("bool"),
"文件大小": datasets.Value("int32"),
"simhash": datasets.Value("uint64"),
"最长段落长度": datasets.Value("int32"),
"段落数": datasets.Value("int32"),
"去重段落数": datasets.Value("int32"),
"低质量段落数": datasets.Value("int32"),
"段落": [
datasets.Features(
{
"行号": datasets.Value("int32"),
"是否重复": datasets.Value("bool"),
"是否跨文件重复": datasets.Value("bool"),
"md5": datasets.Value("string"),
"内容": datasets.Value("string"),
}
)
]
}
```
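The per-paragraph `md5` and duplication flags can be re-derived from the paragraph text. The sketch below assumes the hash is a plain MD5 over the UTF-8 bytes of `内容`; this is an assumption, since the card does not specify the exact hashing convention:

```python
import hashlib

def paragraph_md5(content: str) -> str:
    """MD5 of the paragraph text (UTF-8 encoding assumed)."""
    return hashlib.md5(content.encode("utf-8")).hexdigest()

def mark_duplicates(paragraphs):
    """Re-derive a within-file duplicate flag: a paragraph is a duplicate
    if an earlier paragraph in the same file has the same md5."""
    seen, flags = set(), []
    for p in paragraphs:
        h = paragraph_md5(p["内容"])
        flags.append(h in seen)
        seen.add(h)
    return flags

paras = [{"内容": "第一段"}, {"内容": "第二段"}, {"内容": "第一段"}]
print(mark_duplicates(paras))   # → [False, False, True]
```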
### 问答数据
QA data is organized in the following format (again as `datasets` feature definitions):
```python
{
"id": datasets.Value("int32"),
"问": datasets.Value("string"),
"答": datasets.Value("string"),
"来源": datasets.Value("string"),
"元数据": {
"create_time": datasets.Value("string"),
"问题明细": datasets.Value("string"),
"回答明细": datasets.Value("string"),
"扩展字段": datasets.Value("string"),
}
}
```
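A QA record can be flattened into a (question, answer, extras) tuple. The `扩展字段` metadata is stored as a string; treating it as JSON is an assumption, with a fallback to the raw string when it does not parse:

```python
import json

def qa_to_pair(rec: dict):
    """Flatten a QA record into (question, answer, extras).
    扩展字段 is parsed as JSON when possible (an assumption)."""
    meta = rec.get("元数据", {})
    raw = meta.get("扩展字段", "")
    try:
        extras = json.loads(raw) if raw else {}
    except json.JSONDecodeError:
        extras = {"raw": raw}
    return rec["问"], rec["答"], extras

q, a, extras = qa_to_pair({
    "id": 1, "问": "问题", "答": "回答", "来源": "zhihu",
    "元数据": {"create_time": "", "问题明细": "", "回答明细": "",
               "扩展字段": "{\"k\": 1}"},
})
print(q, a, extras)   # → 问题 回答 {'k': 1}
```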
Data uploaded in the early stages of the project used the following format; this format will be deprecated and the corresponding data re-uploaded:
```python
{
"text": datasets.Value("string"),
"meta": datasets.Value("string")
}
```
### Contributions
Thanks to the [Liwu community](http://mnbvc.253874.net/) for constructing this dataset.
Thanks to [silver](https://github.com/silverriver) for adding this dataset. |
Francesco/animals-ij5d2 | 2023-03-30T09:30:09.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 4 | 787 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': animals
'1': cat
'2': chicken
'3': cow
'4': dog
'5': fox
'6': goat
'7': horse
'8': person
'9': racoon
'10': skunk
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: animals-ij5d2
tags:
- rf100
---
# Dataset Card for animals-ij5d2
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/animals-ij5d2
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
animals-ij5d2
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
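The `bbox` values above use the COCO convention `[x_min, y_min, width, height]`. A small sketch converting to corner coordinates and cross-checking the stored `area` field (which may differ slightly in general, since COCO areas can come from segmentation masks):

```python
def coco_to_corners(bbox):
    """Convert a COCO-format box [x_min, y_min, width, height]
    to corner format [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def bbox_area(bbox):
    """Area implied by a COCO box (width * height)."""
    return bbox[2] * bbox[3]

box = [302.0, 109.0, 73.0, 52.0]       # first box from the sample instance above
print(coco_to_corners(box))            # → [302.0, 109.0, 375.0, 161.0]
print(bbox_area(box))                  # → 3796.0, matching the sample's 'area'
```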
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/animals-ij5d2
### Citation Information
```
@misc{ animals-ij5d2,
title = { animals ij5d2 Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/animals-ij5d2 } },
url = { https://universe.roboflow.com/object-detection/animals-ij5d2 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
code_x_glue_ct_code_to_text | 2023-06-01T14:59:54.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:other-programming-languages",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:code",
"language:en",
"license:c-uda",
"code-to-text",
"region:us"
] | null | The dataset we use comes from CodeSearchNet, filtered as follows:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documentation is shorter than 3 tokens or longer than 256 tokens.
- Remove examples whose documentation contains special tokens (e.g. <img ...> or https:...).
- Remove examples whose documentation is not in English. | @article{husain2019codesearchnet,
title={Codesearchnet challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
} | null | 35 | 785 | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
- en
license:
- c-uda
multilinguality:
- other-programming-languages
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: CodeXGlueCtCodeToText
tags:
- code-to-text
dataset_info:
- config_name: go
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 342244027
num_examples: 167288
- name: validation
num_bytes: 13721912
num_examples: 7325
- name: test
num_bytes: 16328458
num_examples: 8122
download_size: 499922799
dataset_size: 372294397
- config_name: java
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 452554719
num_examples: 164923
- name: validation
num_bytes: 13366396
num_examples: 5183
- name: test
num_bytes: 29080857
num_examples: 10955
download_size: 1072966017
dataset_size: 495001972
- config_name: javascript
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 160860743
num_examples: 58025
- name: validation
num_bytes: 10337396
num_examples: 3885
- name: test
num_bytes: 10190765
num_examples: 3291
download_size: 1677110214
dataset_size: 181388904
- config_name: php
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 614655799
num_examples: 241241
- name: validation
num_bytes: 33283149
num_examples: 12982
- name: test
num_bytes: 35375097
num_examples: 14014
download_size: 864290912
dataset_size: 683314045
- config_name: python
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 813664500
num_examples: 251820
- name: validation
num_bytes: 46888668
num_examples: 13914
- name: test
num_bytes: 50659792
num_examples: 14918
download_size: 953306861
dataset_size: 911212960
- config_name: ruby
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 51956595
num_examples: 24927
- name: validation
num_bytes: 2821089
num_examples: 1400
- name: test
num_bytes: 2671603
num_examples: 1261
download_size: 124154892
dataset_size: 57449287
config_names:
- go
- java
- javascript
- php
- python
- ruby
---
# Dataset Card for "code_x_glue_ct_code_to_text"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text
### Dataset Summary
CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text
The dataset we use comes from CodeSearchNet, filtered as follows:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documentation is shorter than 3 tokens or longer than 256 tokens.
- Remove examples whose documentation contains special tokens (e.g. <img ...> or https:...).
- Remove examples whose documentation is not in English.
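The docstring filters above can be sketched as a predicate. The exact special-token patterns and the language detector used upstream are not specified, so the regex and the `is_english` flag below are assumptions (the AST-parse check is omitted since it is language-specific):

```python
import re

def keep_example(doc_tokens, is_english=True):
    """Return True if a docstring (as a token list) passes the card's filters."""
    if not (3 <= len(doc_tokens) <= 256):        # drop docs with <3 or >256 tokens
        return False
    text = " ".join(doc_tokens)
    if re.search(r"<img\s|https?:", text):       # drop docs with special tokens
        return False
    return is_english                            # drop non-English docs

print(keep_example(["Adds", "two", "numbers", "."]))          # → True
print(keep_example(["see", "https://example.com", "docs"]))   # → False
```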
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for automatically generating **English** docstrings for code.
### Languages
- Go **programming** language
- Java **programming** language
- Javascript **programming** language
- PHP **programming** language
- Python **programming** language
- Ruby **programming** language
- English **natural** language
## Dataset Structure
### Data Instances
#### go
An example of 'test' looks as follows.
```
{
"code": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}",
"code_tokens": ["func", "NewSTM", "(", "c", "*", "v3", ".", "Client", ",", "apply", "func", "(", "STM", ")", "error", ",", "so", "...", "stmOption", ")", "(", "*", "v3", ".", "TxnResponse", ",", "error", ")", "{", "opts", ":=", "&", "stmOptions", "{", "ctx", ":", "c", ".", "Ctx", "(", ")", "}", "\n", "for", "_", ",", "f", ":=", "range", "so", "{", "f", "(", "opts", ")", "\n", "}", "\n", "if", "len", "(", "opts", ".", "prefetch", ")", "!=", "0", "{", "f", ":=", "apply", "\n", "apply", "=", "func", "(", "s", "STM", ")", "error", "{", "s", ".", "Get", "(", "opts", ".", "prefetch", "...", ")", "\n", "return", "f", "(", "s", ")", "\n", "}", "\n", "}", "\n", "return", "runSTM", "(", "mkSTM", "(", "c", ",", "opts", ")", ",", "apply", ")", "\n", "}"],
"docstring": "// NewSTM initiates a new STM instance, using serializable snapshot isolation by default.",
"docstring_tokens": ["NewSTM", "initiates", "a", "new", "STM", "instance", "using", "serializable", "snapshot", "isolation", "by", "default", "."],
"func_name": "NewSTM",
"id": 0,
"language": "go",
"original_string": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}",
"path": "clientv3/concurrency/stm.go",
"repo": "etcd-io/etcd",
"sha": "616592d9ba993e3fe9798eef581316016df98906",
"url": "https://github.com/etcd-io/etcd/blob/616592d9ba993e3fe9798eef581316016df98906/clientv3/concurrency/stm.go#L89-L102"
}
```
#### java
An example of 'test' looks as follows.
```
{
"code": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n final Observer<? super V> observer = downstream;\n final SimplePlainQueue<U> q = queue;\n\n if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n if (q.isEmpty()) {\n accept(observer, value);\n if (leave(-1) == 0) {\n return;\n }\n } else {\n q.offer(value);\n }\n } else {\n q.offer(value);\n if (!enter()) {\n return;\n }\n }\n QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n }",
"code_tokens": ["protected", "final", "void", "fastPathOrderedEmit", "(", "U", "value", ",", "boolean", "delayError", ",", "Disposable", "disposable", ")", "{", "final", "Observer", "<", "?", "super", "V", ">", "observer", "=", "downstream", ";", "final", "SimplePlainQueue", "<", "U", ">", "q", "=", "queue", ";", "if", "(", "wip", ".", "get", "(", ")", "==", "0", "&&", "wip", ".", "compareAndSet", "(", "0", ",", "1", ")", ")", "{", "if", "(", "q", ".", "isEmpty", "(", ")", ")", "{", "accept", "(", "observer", ",", "value", ")", ";", "if", "(", "leave", "(", "-", "1", ")", "==", "0", ")", "{", "return", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "if", "(", "!", "enter", "(", ")", ")", "{", "return", ";", "}", "}", "QueueDrainHelper", ".", "drainLoop", "(", "q", ",", "observer", ",", "delayError", ",", "disposable", ",", "this", ")", ";", "}"],
"docstring": "Makes sure the fast-path emits in order.\n@param value the value to emit or queue up\n@param delayError if true, errors are delayed until the source has terminated\n@param disposable the resource to dispose if the drain terminates",
"docstring_tokens": ["Makes", "sure", "the", "fast", "-", "path", "emits", "in", "order", "."],
"func_name": "QueueDrainObserver.fastPathOrderedEmit",
"id": 0,
"language": "java",
"original_string": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n final Observer<? super V> observer = downstream;\n final SimplePlainQueue<U> q = queue;\n\n if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n if (q.isEmpty()) {\n accept(observer, value);\n if (leave(-1) == 0) {\n return;\n }\n } else {\n q.offer(value);\n }\n } else {\n q.offer(value);\n if (!enter()) {\n return;\n }\n }\n QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n }",
"path": "src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java",
"repo": "ReactiveX/RxJava",
"sha": "ac84182aa2bd866b53e01c8e3fe99683b882c60e",
"url": "https://github.com/ReactiveX/RxJava/blob/ac84182aa2bd866b53e01c8e3fe99683b882c60e/src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java#L88-L108"
}
```
#### javascript
An example of 'test' looks as follows.
```
{
"code": "function createInstance(defaultConfig) {\n var context = new Axios(defaultConfig);\n var instance = bind(Axios.prototype.request, context);\n\n // Copy axios.prototype to instance\n utils.extend(instance, Axios.prototype, context);\n\n // Copy context to instance\n utils.extend(instance, context);\n\n return instance;\n}",
"code_tokens": ["function", "createInstance", "(", "defaultConfig", ")", "{", "var", "context", "=", "new", "Axios", "(", "defaultConfig", ")", ";", "var", "instance", "=", "bind", "(", "Axios", ".", "prototype", ".", "request", ",", "context", ")", ";", "// Copy axios.prototype to instance", "utils", ".", "extend", "(", "instance", ",", "Axios", ".", "prototype", ",", "context", ")", ";", "// Copy context to instance", "utils", ".", "extend", "(", "instance", ",", "context", ")", ";", "return", "instance", ";", "}"],
"docstring": "Create an instance of Axios\n\n@param {Object} defaultConfig The default config for the instance\n@return {Axios} A new instance of Axios",
"docstring_tokens": ["Create", "an", "instance", "of", "Axios"],
"func_name": "createInstance",
"id": 0,
"language": "javascript",
"original_string": "function createInstance(defaultConfig) {\n var context = new Axios(defaultConfig);\n var instance = bind(Axios.prototype.request, context);\n\n // Copy axios.prototype to instance\n utils.extend(instance, Axios.prototype, context);\n\n // Copy context to instance\n utils.extend(instance, context);\n\n return instance;\n}",
"path": "lib/axios.js",
"repo": "axios/axios",
"sha": "92d231387fe2092f8736bc1746d4caa766b675f5",
"url": "https://github.com/axios/axios/blob/92d231387fe2092f8736bc1746d4caa766b675f5/lib/axios.js#L15-L26"
}
```
#### php
An example of 'train' looks as follows.
```
{
"code": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n {\n $config += [\n 'httpHandler' => null,\n ];\n list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n return new RestTransport($requestBuilder, $httpHandler);\n }",
"code_tokens": ["public", "static", "function", "build", "(", "$", "serviceAddress", ",", "$", "restConfigPath", ",", "array", "$", "config", "=", "[", "]", ")", "{", "$", "config", "+=", "[", "'httpHandler'", "=>", "null", ",", "]", ";", "list", "(", "$", "baseUri", ",", "$", "port", ")", "=", "self", "::", "normalizeServiceAddress", "(", "$", "serviceAddress", ")", ";", "$", "requestBuilder", "=", "new", "RequestBuilder", "(", "\"$baseUri:$port\"", ",", "$", "restConfigPath", ")", ";", "$", "httpHandler", "=", "$", "config", "[", "'httpHandler'", "]", "?", ":", "self", "::", "buildHttpHandlerAsync", "(", ")", ";", "return", "new", "RestTransport", "(", "$", "requestBuilder", ",", "$", "httpHandler", ")", ";", "}"],
"docstring": "Builds a RestTransport.\n\n@param string $serviceAddress\nThe address of the API remote host, for example \"example.googleapis.com\".\n@param string $restConfigPath\nPath to rest config file.\n@param array $config {\nConfig options used to construct the gRPC transport.\n\n@type callable $httpHandler A handler used to deliver PSR-7 requests.\n}\n@return RestTransport\n@throws ValidationException",
"docstring_tokens": ["Builds", "a", "RestTransport", "."],
"func_name": "RestTransport.build",
"id": 0,
"language": "php",
"original_string": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n {\n $config += [\n 'httpHandler' => null,\n ];\n list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n return new RestTransport($requestBuilder, $httpHandler);\n }",
"path": "src/Transport/RestTransport.php",
"repo": "googleapis/gax-php",
"sha": "48387fb818c6882296710a2302a0aa973b99afb2",
"url": "https://github.com/googleapis/gax-php/blob/48387fb818c6882296710a2302a0aa973b99afb2/src/Transport/RestTransport.php#L85-L94"
}
```
#### python
An example of 'validation' looks as follows.
```
{
"code": "def save_act(self, path=None):\n \"\"\"Save model to a pickle located at `path`\"\"\"\n if path is None:\n path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n with tempfile.TemporaryDirectory() as td:\n save_variables(os.path.join(td, \"model\"))\n arc_name = os.path.join(td, \"packed.zip\")\n with zipfile.ZipFile(arc_name, 'w') as zipf:\n for root, dirs, files in os.walk(td):\n for fname in files:\n file_path = os.path.join(root, fname)\n if file_path != arc_name:\n zipf.write(file_path, os.path.relpath(file_path, td))\n with open(arc_name, \"rb\") as f:\n model_data = f.read()\n with open(path, \"wb\") as f:\n cloudpickle.dump((model_data, self._act_params), f)",
"code_tokens": ["def", "save_act", "(", "self", ",", "path", "=", "None", ")", ":", "if", "path", "is", "None", ":", "path", "=", "os", ".", "path", ".", "join", "(", "logger", ".", "get_dir", "(", ")", ",", "\"model.pkl\"", ")", "with", "tempfile", ".", "TemporaryDirectory", "(", ")", "as", "td", ":", "save_variables", "(", "os", ".", "path", ".", "join", "(", "td", ",", "\"model\"", ")", ")", "arc_name", "=", "os", ".", "path", ".", "join", "(", "td", ",", "\"packed.zip\"", ")", "with", "zipfile", ".", "ZipFile", "(", "arc_name", ",", "'w'", ")", "as", "zipf", ":", "for", "root", ",", "dirs", ",", "files", "in", "os", ".", "walk", "(", "td", ")", ":", "for", "fname", "in", "files", ":", "file_path", "=", "os", ".", "path", ".", "join", "(", "root", ",", "fname", ")", "if", "file_path", "!=", "arc_name", ":", "zipf", ".", "write", "(", "file_path", ",", "os", ".", "path", ".", "relpath", "(", "file_path", ",", "td", ")", ")", "with", "open", "(", "arc_name", ",", "\"rb\"", ")", "as", "f", ":", "model_data", "=", "f", ".", "read", "(", ")", "with", "open", "(", "path", ",", "\"wb\"", ")", "as", "f", ":", "cloudpickle", ".", "dump", "(", "(", "model_data", ",", "self", ".", "_act_params", ")", ",", "f", ")"],
"docstring": "Save model to a pickle located at `path`",
"docstring_tokens": ["Save", "model", "to", "a", "pickle", "located", "at", "path"],
"func_name": "ActWrapper.save_act",
"id": 0,
"language": "python",
"original_string": "def save_act(self, path=None):\n \"\"\"Save model to a pickle located at `path`\"\"\"\n if path is None:\n path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n with tempfile.TemporaryDirectory() as td:\n save_variables(os.path.join(td, \"model\"))\n arc_name = os.path.join(td, \"packed.zip\")\n with zipfile.ZipFile(arc_name, 'w') as zipf:\n for root, dirs, files in os.walk(td):\n for fname in files:\n file_path = os.path.join(root, fname)\n if file_path != arc_name:\n zipf.write(file_path, os.path.relpath(file_path, td))\n with open(arc_name, \"rb\") as f:\n model_data = f.read()\n with open(path, \"wb\") as f:\n cloudpickle.dump((model_data, self._act_params), f)",
"path": "baselines/deepq/deepq.py",
"repo": "openai/baselines",
"sha": "3301089b48c42b87b396e246ea3f56fa4bfc9678",
"url": "https://github.com/openai/baselines/blob/3301089b48c42b87b396e246ea3f56fa4bfc9678/baselines/deepq/deepq.py#L55-L72"
}
```
#### ruby
An example of 'train' looks as follows.
```
{
"code": "def render_body(context, options)\n if options.key?(:partial)\n [render_partial(context, options)]\n else\n StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n end\n end",
"code_tokens": ["def", "render_body", "(", "context", ",", "options", ")", "if", "options", ".", "key?", "(", ":partial", ")", "[", "render_partial", "(", "context", ",", "options", ")", "]", "else", "StreamingTemplateRenderer", ".", "new", "(", "@lookup_context", ")", ".", "render", "(", "context", ",", "options", ")", "end", "end"],
"docstring": "Render but returns a valid Rack body. If fibers are defined, we return\n a streaming body that renders the template piece by piece.\n\n Note that partials are not supported to be rendered with streaming,\n so in such cases, we just wrap them in an array.",
"docstring_tokens": ["Render", "but", "returns", "a", "valid", "Rack", "body", ".", "If", "fibers", "are", "defined", "we", "return", "a", "streaming", "body", "that", "renders", "the", "template", "piece", "by", "piece", "."],
"func_name": "ActionView.Renderer.render_body",
"id": 0,
"language": "ruby",
"original_string": "def render_body(context, options)\n if options.key?(:partial)\n [render_partial(context, options)]\n else\n StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n end\n end",
"path": "actionview/lib/action_view/renderer/renderer.rb",
"repo": "rails/rails",
"sha": "85a8bc644be69908f05740a5886ec19cd3679df5",
"url": "https://github.com/rails/rails/blob/85a8bc644be69908f05740a5886ec19cd3679df5/actionview/lib/action_view/renderer/renderer.rb#L38-L44"
}
```
### Data Fields
Each data field is described below. The fields are identical across all configs and splits.
#### go, java, javascript, php, python, ruby
| field name | type | description |
|----------------|----------------|-----------------------------------------------------------------------------------|
|id |int32 | index of the sample |
|repo |string | the owner/repo of the source repository |
|path |string | the full path to the original file |
|func_name |string | the function or method name |
|original_string |string | the raw string before tokenization or parsing |
|language |string | the programming language name |
|code |string | the part of `original_string` that is code (also called `function`) |
|code_tokens |Sequence[string]| tokenized version of `code` (also called `function_tokens`) |
|docstring |string | the top-level comment or docstring, if it exists in the original string |
|docstring_tokens|Sequence[string]| tokenized version of `docstring` |
|sha |string | sha of the file |
|url |string | url of the file |
### Data Splits
| name |train |validation|test |
|----------|-----:|---------:|----:|
|go |167288| 7325| 8122|
|java |164923| 5183|10955|
|javascript| 58025| 3885| 3291|
|php |241241| 12982|14014|
|python |251820| 13914|14918|
|ruby | 24927| 1400| 1261|
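Each instance pairs a function's source (`code`) with its documentation (`docstring`/`docstring_tokens`), which is the input/output pair a code-to-text model trains on. Below is a minimal offline sketch of building such a pair; the dict is a hand-copied, abridged stand-in for the ruby row shown above, not the output of the actual loader:

```python
# A hand-copied, abridged stand-in for one row of this schema
# (the ruby example above); not produced by the actual loader.
example = {
    "func_name": "ActionView.Renderer.render_body",
    "language": "ruby",
    "code": "def render_body(context, options)\n  # ...\nend",
    "docstring": "Render but returns a valid Rack body.",
    "docstring_tokens": ["Render", "but", "returns", "a", "valid",
                         "Rack", "body", "."],
}

def to_code_to_text_pair(ex):
    # Source is the raw code; the target summary is the whitespace-joined
    # (possibly truncated) docstring tokens.
    return ex["code"], " ".join(ex["docstring_tokens"])

src, tgt = to_code_to_text_pair(example)
print(tgt)  # Render but returns a valid Rack body .
```

Note that `docstring_tokens` is generally shorter than `docstring`: it keeps only the leading summary sentence(s), which is why evaluation is usually done against the token sequence rather than the raw docstring.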
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data comes from the CodeSearchNet Challenge dataset.
[More Information Needed]
#### Who are the source language producers?
Software developers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{husain2019codesearchnet,
title={Codesearchnet challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
cmrc2018 | 2023-04-05T09:42:31.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] | null | A span-extraction dataset for Chinese machine reading comprehension, adding language
diversity to this area. The dataset is composed of nearly 20,000 real questions annotated
on Wikipedia paragraphs by human experts. We also annotated a challenge set
containing questions that require comprehensive understanding and multi-sentence
inference throughout the context. | @inproceedings{cui-emnlp2019-cmrc2018,
title = {A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension},
author = {Cui, Yiming and
Liu, Ting and
Che, Wanxiang and
Xiao, Li and
Chen, Zhipeng and
Ma, Wentao and
Wang, Shijin and
Hu, Guoping},
booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
month = {nov},
year = {2019},
address = {Hong Kong, China},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/D19-1600},
doi = {10.18653/v1/D19-1600},
pages = {5886--5891}} | null | 13 | 783 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- zh
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: cmrc-2018
pretty_name: Chinese Machine Reading Comprehension 2018
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 15508110
num_examples: 10142
- name: validation
num_bytes: 5183809
num_examples: 3219
- name: test
num_bytes: 1606931
num_examples: 1002
download_size: 11508117
dataset_size: 22298850
---
# Dataset Card for "cmrc2018"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/ymcui/cmrc2018](https://github.com/ymcui/cmrc2018)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 11.50 MB
- **Size of the generated dataset:** 22.31 MB
- **Total amount of disk used:** 33.83 MB
### Dataset Summary
A span-extraction dataset for Chinese machine reading comprehension, adding language
diversity to this area. The dataset is composed of nearly 20,000 real questions annotated
on Wikipedia paragraphs by human experts. We also annotated a challenge set
containing questions that require comprehensive understanding and multi-sentence
inference throughout the context.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 11.50 MB
- **Size of the generated dataset:** 22.31 MB
- **Total amount of disk used:** 33.83 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [11, 11],
"text": ["光荣和ω-force", "光荣和ω-force"]
},
"context": "\"《战国无双3》()是由光荣和ω-force开发的战国无双系列的正统第三续作。本作以三大故事为主轴,分别是以武田信玄等人为主的《关东三国志》,织田信长等人为主的《战国三杰》,石田三成等人为主的《关原的年轻武者》,丰富游戏内的剧情。此部份专门介绍角色,欲知武...",
"id": "DEV_0_QUERY_0",
"question": "《战国无双3》是由哪两个公司合作开发的?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
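Since this is a span-extraction task, `answer_start` is a character offset into `context`, so the gold answer can be recovered by slicing. A small sanity-check sketch, using strings copied from the validation instance shown earlier in this card (the cropped display wraps the context in an extra pair of quotes, removed here):

```python
# `answer_start` is a character offset into `context`: slicing the context
# at that offset recovers the answer text exactly. Strings below are copied
# from the DEV_0_QUERY_0 instance shown in this card (context truncated).
context = "《战国无双3》()是由光荣和ω-force开发的战国无双系列的正统第三续作。"
answers = {"answer_start": [11, 11], "text": ["光荣和ω-force", "光荣和ω-force"]}

for start, text in zip(answers["answer_start"], answers["text"]):
    extracted = context[start : start + len(text)]
    assert extracted == text  # span offset lines up with the raw context
```

This kind of offset check is a cheap way to validate preprocessing (e.g. that no normalization step has shifted character positions) before training an extractive QA model.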
### Data Splits
| name | train | validation | test |
| ------- | ----: | ---------: | ---: |
| default | 10142 | 3219 | 1002 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{cui-emnlp2019-cmrc2018,
title = "A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension",
author = "Cui, Yiming and
Liu, Ting and
Che, Wanxiang and
Xiao, Li and
Chen, Zhipeng and
Ma, Wentao and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1600",
doi = "10.18653/v1/D19-1600",
pages = "5886--5891",
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/b5ddd948 | 2023-09-15T04:06:31.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 783 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 205
num_examples: 10
download_size: 1388
dataset_size: 205
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b5ddd948"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EleutherAI/pile-duped-pythia-random-sampled | 2023-08-25T08:07:30.000Z | [
"region:us"
] | EleutherAI | null | null | null | 1 | 782 | ---
dataset_info:
features:
- name: Index
dtype: int64
- name: 70M
dtype: float64
- name: 160M
dtype: float64
- name: 410M
dtype: float64
- name: 1B
dtype: float64
- name: 1.4B
dtype: float64
- name: 2.8B
dtype: float64
- name: 6.9B
dtype: float64
- name: 12B
dtype: float64
- name: Tokens
sequence: uint16
splits:
- name: train
num_bytes: 1020000000
num_examples: 5000000
download_size: 915501044
dataset_size: 1020000000
---
# Dataset Card for "pile-duped-pythia-random-sampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |