id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
result-kand2-sdxl-wuerst-karlo/9208a1cc | 2023-10-09T14:11:16.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 174
num_examples: 10
download_size: 1342
dataset_size: 174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "9208a1cc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hebrew_sentiment | 2023-01-25T14:32:05.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:he",
"license:mit",
"region:us"
] | null | HebrewSentiment is a dataset consisting of 12,804 user comments on posts to the official Facebook page of Israel’s
president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder,
2013) to scrape all the comments on all of the president’s posts in the period of June – August 2014,
the first three months of Rivlin’s presidency. While the president’s posts aimed at reconciling tensions
and called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts
was polarized between citizens who warmly thanked the president and citizens who fiercely criticized his
policy. Of the 12,804 comments, 370 are neutral, 8,512 positive, and 3,922 negative.
Data Annotation: A trained researcher examined each comment and determined its sentiment value,
where comments with an overall positive sentiment were assigned the value 1, comments with an overall
negative sentiment were assigned the value -1, and comments that are off-topic to the post’s content
were assigned the value 0. We validated the coding scheme by asking a second trained researcher to
code the same data. There was substantial agreement between raters (N of agreements: 10623, N of
disagreements: 2105, Cohen’s kappa = 0.697, p = 0). | @inproceedings{amram-etal-2018-representations,
title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew",
author = "Amram, Adam and
Ben David, Anat and
Tsarfaty, Reut",
booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
month = aug,
year = "2018",
address = "Santa Fe, New Mexico, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/C18-1190",
pages = "2242--2252",
abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89% accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.",
} | null | 2 | 22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- he
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: modern-hebrew-sentiment-dataset
pretty_name: HebrewSentiment
dataset_info:
- config_name: token
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': pos
'1': neg
'2': off-topic
splits:
- name: train
num_bytes: 2159738
num_examples: 10244
- name: test
num_bytes: 540883
num_examples: 2560
download_size: 2593643
dataset_size: 2700621
- config_name: morph
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': pos
'1': neg
'2': off-topic
splits:
- name: train
num_bytes: 2258128
num_examples: 10221
- name: test
num_bytes: 571401
num_examples: 2555
download_size: 2722672
dataset_size: 2829529
---
# Dataset Card for HebrewSentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew
- **Repository:** https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew
- **Paper:** http://aclweb.org/anthology/C18-1190
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
HebrewSentiment is a dataset consisting of 12,804 user comments on posts to the official Facebook page of Israel’s
president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder,
2013) to scrape all the comments on all of the president’s posts in the period of June – August 2014,
the first three months of Rivlin’s presidency. While the president’s posts aimed at reconciling tensions
and called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts
was polarized between citizens who warmly thanked the president and citizens who fiercely criticized his
policy. Of the 12,804 comments, 370 are neutral, 8,512 positive, and 3,922 negative.
### Supported Tasks and Leaderboards
Sentiment Analysis
### Languages
Hebrew
## Dataset Structure
tsv format:
{hebrew_sentence}\t{sentiment_label}
### Data Instances
רובי הייתי רוצה לראות ערביה נישאת ליהודי 1
תמונה יפיפיה-שפו 0
חייבים לעשות סוג של חרם כשכתבים שונאי ישראל עולים לשידור צריכים להעביר לערוץ אחר ואז תראו מה יעשה כוחו של הרייטינג ( בהקשר לדבריה של רינה מצליח ) 2
### Data Fields
- `text`: The Modern Hebrew input text.
- `label`: The sentiment label: 0 = positive, 1 = negative, 2 = off-topic.
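As an illustration of the tsv layout described above, a minimal parser might look like the following sketch (the `parse_line` helper and `LABELS` map are for illustration only; they are not part of the dataset loader):

```python
# Illustrative only: parse one "{hebrew_sentence}\t{sentiment_label}"
# record into (text, label_name), using the label names from the
# dataset's class_label configuration.
LABELS = {0: "pos", 1: "neg", 2: "off-topic"}

def parse_line(line):
    """Split one tab-separated record into (text, label_name)."""
    text, raw_label = line.rstrip("\n").rsplit("\t", 1)
    return text, LABELS[int(raw_label)]

text, label = parse_line("תמונה יפיפיה-שפו\t0")  # label == "pos"
```

In practice the splits are loaded directly with `datasets.load_dataset`, which handles this parsing internally.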
### Data Splits
| | train | test |
|--------------------------|--------|---------|
| HebrewSentiment (token)  | 10244  | 2560    |
| HebrewSentiment (morph)  | 10221  | 2555    |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
User comments to posts on the official Facebook page of Israel’s
president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder,
2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014,
the first three months of Rivlin’s presidency.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
A trained researcher examined each comment and determined its sentiment value,
where comments with an overall positive sentiment were assigned the value 0, comments with an overall
negative sentiment were assigned the value 1, and comments that are off-topic to the post’s content
were assigned the value 2. We validated the coding scheme by asking a second trained researcher to
code the same data. There was substantial agreement between raters (N of agreements: 10623, N of
disagreements: 2105, Cohen’s kappa = 0.697, p = 0).
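As a quick sanity check, the raw (chance-uncorrected) inter-rater agreement can be recomputed from the counts reported above. Note that Cohen's kappa itself cannot be re-derived from these two numbers alone, since it also requires the per-category marginals, which are not given here:

```python
# Observed agreement implied by the reported counts.
# Cohen's kappa (= 0.697) additionally corrects for chance agreement
# using the per-category marginals, which this card does not report.
agreements, disagreements = 10623, 2105
observed_agreement = agreements / (agreements + disagreements)  # ≈ 0.835
```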
#### Who are the annotators?
Researchers
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
OMIlab, The Open University of Israel
### Licensing Information
MIT License
Copyright (c) 2018 OMIlab, The Open University of Israel
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
### Citation Information
@inproceedings{amram-etal-2018-representations,
title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew",
author = "Amram, Adam and
Ben David, Anat and
Tsarfaty, Reut",
booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
month = aug,
year = "2018",
address = "Santa Fe, New Mexico, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/C18-1190",
pages = "2242--2252",
abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{\%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.",
}
### Contributions
Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset. |
sepedi_ner | 2023-01-25T14:44:06.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nso",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{sepedi_ner,
author = {D.J. Prinsloo and
Roald Eiselen},
title = {NCHLT Sepedi Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/328},
} | null | 1 | 22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- nso
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Sepedi NER Corpus
license_details: Creative Commons Attribution 2.5 South Africa License
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: sepedi_ner
splits:
- name: train
num_bytes: 3378134
num_examples: 7117
download_size: 22077376
dataset_size: 3378134
---
# Dataset Card for Sepedi NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sepedi Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/328)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The Sepedi NER Corpus is a Sepedi dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain crawled from gov.za websites. It was created to support the NER task for the Sepedi language. The dataset uses CoNLL shared task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Sesotho sa Leboa (Sepedi).
## Dataset Structure
### Data Instances
Data points are sentences separated by an empty line, with tab-separated tokens and tags.
```
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0],
'tokens': ['Maikemišetšo', 'a', 'websaete', 'ya', 'ditirelo']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B- prefix denotes the first token of a phrase and an I- prefix any non-initial token. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). The OUT tag is used for tokens not considered part of any named entity.
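To make the BIO scheme concrete, here is a small illustrative helper (not part of the dataset loader) that collects `(entity_type, start, end)` spans from the integer tag ids using the tag list above:

```python
# Illustrative only: recover entity spans from BIO tag ids.
# "OUT" (id 0) marks tokens outside any named entity.
TAGS = ["OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG",
        "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def spans(tag_ids):
    """Return a list of (entity_type, start, end) token spans."""
    out, current = [], None
    for i, tid in enumerate(tag_ids):
        tag = TAGS[tid]
        if tag.startswith("B-"):
            if current:
                out.append(current)
            current = (tag[2:], i, i + 1)  # start a new entity
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current = (current[0], current[1], i + 1)  # extend it
        else:
            if current:
                out.append(current)
            current = None  # OUT, or an I- tag with no matching B-
    if current:
        out.append(current)
    return out
```

For example, the tag sequence `[1, 2, 0, 5]` (B-PERS, I-PERS, OUT, B-LOC) yields two spans, one PERS and one LOC.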
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce NLP resources for a new language, Sepedi.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on South African government domain and was crawled from gov.za websites.
#### Who are the source language producers?
The data was produced by the writers of South African government websites (gov.za).
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode)
### Citation Information
```
@inproceedings{sepedi_ner_corpus,
author = {D.J. Prinsloo and
Roald Eiselen},
title = {NCHLT Sepedi Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/328},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. |
SetFit/tweet_eval_stance | 2022-01-17T13:01:36.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 22 | # tweet_eval_stance_abortion
This is the stance_abortion subset of [tweet_eval](https://huggingface.co/datasets/tweet_eval) |
mozilla-foundation/common_voice_6_0 | 2023-07-29T16:00:06.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | 0 | 22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
as:
- n<1K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fi:
- 1K<n<10K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
hi:
- n<1K
hsb:
- 1K<n<10K
hu:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 1K<n<10K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lg:
- 1K<n<10K
lt:
- 1K<n<10K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 1K<n<10K
ru:
- 10K<n<100K
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 10K<n<100K
ta:
- 10K<n<100K
th:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
uk:
- 10K<n<100K
vi:
- 1K<n<10K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 6.0
language_bcp47:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 6.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 9261 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7327 validated hours in 60 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Upper Sorbian, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and received enough upvotes to be considered high quality.
The invalidated data has been reviewed and received downvotes indicating low quality.
The reported data has been flagged by users, for a variety of reasons.
The other data has not yet been reviewed.
The dev, test and train portions are all drawn from data that has been reviewed and deemed high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from the audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_6_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription and transcription[-1] not in [".", "?", "!"]:
        # guard against empty strings, then append a full-stop to
        # sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
persiannlp/parsinlu_translation_fa_en | 2022-10-24T17:01:27.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:fa",
"multilinguality:en",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | persiannlp | A Persian translation dataset (Persian -> English). | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | null | 0 | 22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- fa
- en
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# Dataset Card for PersiNLU (Machine Translation)
## Table of Contents
- [Dataset Card for PersiNLU (Machine Translation)](#dataset-card-for-persinlu-machine-translation)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian translation dataset (Persian -> English).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`) and English (`en`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"source": "چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
"targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
"category": "mizan_dev_en_fa"
}
```
### Data Fields
- `source`: the input sentences, in Persian.
- `targets`: the list of gold target translations in English.
- `category`: the source from which the example is mined.
### Data Splits
The train/dev/test split contains 1,622,281/2,138/47,745 samples.
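For sequence-to-sequence fine-tuning, each record can be flattened into (source, target) pairs. A minimal sketch using the instance shown above (the helper name is illustrative, not part of the dataset):

```python
# Sketch: turn a ParsiNLU record into (Persian source, English target) pairs.
# The record mirrors the instance shown above; field names follow this card.
record = {
    "source": "چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
    "targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
    "category": "mizan_dev_en_fa",
}

def to_pairs(record):
    # One training pair per gold reference translation.
    return [(record["source"], target) for target in record["targets"]]

pairs = to_pairs(record)
print(len(pairs))  # 1
```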
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
hazal/Turkish-Biomedical-corpus-trM | 2022-08-10T11:13:22.000Z | [
"language:tr",
"region:us"
] | hazal | null | null | null | 3 | 22 | ---
language:
- tr
--- |
jglaser/pdbbind_complexes | 2022-05-14T20:15:20.000Z | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | jglaser | A dataset to fine-tune language models on protein-ligand binding affinity and contact prediction. | @InProceedings{huggingface:dataset,
title = {jglaser/pdbbind_complexes},
author={Jens Glaser, ORNL
},
year={2022}
} | null | 0 | 22 | ---
tags:
- molecules
- chemistry
- SMILES
---
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
Every (x,y,z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom.
Every receptor coordinate maps onto the C-alpha coordinate of that residue.
The dataset can be used to fine-tune a language model; all data comes from PDBBind-cn.
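The tokenization step can be sketched with the commonly cited atom-level regex from Schwaller et al. — an assumption here, so verify it matches the exact pattern used to build this dataset:

```python
import re

# Atom-level SMILES tokenizer based on the regex published by Schwaller et al.
# (assumed to be the pattern referenced above; check against the dataset code).
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    # Each returned token corresponds to one (x, y, z) coordinate entry;
    # non-atom tokens (bonds, brackets, ring digits) map to nan coordinates.
    return SMILES_PATTERN.findall(smiles)

print(tokenize_smiles("C[NH3+]"))     # ['C', '[NH3+]']
print(tokenize_smiles("ClCC(=O)Br"))  # ['Cl', 'C', 'C', '(', '=', 'O', ')', 'Br']
```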
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/pdbbind_complexes",split='train[:90%]')
validation = load_dataset("jglaser/pdbbind_complexes",split='train[90%:]')
```
### Pre-process yourself
To manually perform the preprocessing, download the data sets from PDBBind-cn
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then login and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
|
joelniklaus/brazilian_court_decisions | 2022-09-22T13:43:42.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:other",
"arxiv:1905.10348",
"region:us"
] | joelniklaus | null | null | null | 7 | 22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- pt
license:
- 'other'
multilinguality:
- monolingual
pretty_name: predicting-brazilian-court-decisions
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for predicting-brazilian-court-decisions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/lagefreitas/predicting-brazilian-court-decisions
- **Paper:** Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian Court
Decisions. PeerJ. Computer Science, 8, e904–e904. https://doi.org/10.7717/peerj-cs.904
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The dataset is a collection of 4043 *Ementa* (summary) court decisions and their metadata from
the *Tribunal de Justiça de Alagoas* (TJAL, the State Supreme Court of Alagoas, Brazil). The court decisions are labeled
according to 7 categories and whether the decisions were unanimous on the part of the judges or not. The dataset
supports the task of Legal Judgment Prediction.
### Supported Tasks and Leaderboards
Legal Judgment Prediction
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test) for each configuration.
### Data Fields
The dataset contains the following fields:
- `process_number`: A number assigned to the decision by the court
- `orgao_julgador`: Judging Body: one of '1ª Câmara Cível', '2ª Câmara Cível', '3ª Câmara Cível', 'Câmara Criminal', '
Tribunal Pleno', 'Seção Especializada Cível'
- `publish_date`: The date when the decision was published (14/12/2018 - 03/04/2019). At that time (in 2018-2019),
  the scraping script was limited and not configurable to fetch data by date range. Therefore, only data from
  the most recent months was scraped.
- `judge_relator`: Judicial panel
- `ementa_text`: Summary of the court decision
- `decision_description`: **Suggested input**. Corresponds to ementa_text - judgment_text - unanimity_text. Basic
statistics (number of words): mean: 119, median: 88, min: 12, max: 1400
- `judgment_text`: The text used for determining the judgment label
- `judgment_label`: **Primary suggested label**. Labels that can be used to train a model for judgment prediction:
- `no`: The appeal was denied
- `partial`: For partially favourable decisions
- `yes`: For fully favourable decisions
- removed labels (present in the original dataset):
- `conflito-competencia`: Meta-decision. For example, a decision just to tell that Court A should rule this case
and not Court B.
- `not-cognized`: The appeal was not accepted to be judged by the court
- `prejudicada`: The case could not be judged for any impediment such as the appealer died or gave up on the
case for instance.
- `unanimity_text`: Portuguese text to describe whether the decision was unanimous or not.
- `unanimity_label`: **Secondary suggested label**. Unified labels to describe whether the decision was unanimous or
  not (in some cases contains `not_determined`); they can be used for model training as well (Lage-Freitas et al.,
  2019).
### Data Splits
The data has been split randomly into 80% train (3234), 10% validation (404), 10% test (405).
There are two tasks possible for this dataset.
#### Judgment
Label Distribution
| judgment | train | validation | test |
|:----------|---------:|-----------:|--------:|
| no | 1960 | 221 | 234 |
| partial | 677 | 96 | 93 |
| yes | 597 | 87 | 78 |
| **total** | **3234** | **404** | **405** |
#### Unanimity
In this configuration, all cases that have `not_determined` as `unanimity_label` can be removed.
Label Distribution
| unanimity_label | train | validation | test |
|:-----------------|----------:|---------------:|---------:|
| not_determined | 1519 | 193 | 201 |
| unanimity | 1681 | 205 | 200 |
| not-unanimity | 34 | 6 | 4 |
| **total** | **3234** | **404** | **405** |
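Dropping the `not_determined` cases can be sketched as a plain filter over the jsonl records. The records below are toy values that only mirror the documented field names:

```python
from collections import Counter

# Toy records mirroring the documented fields; real data comes from the jsonl splits.
records = [
    {"decision_description": "...", "unanimity_label": "unanimity"},
    {"decision_description": "...", "unanimity_label": "not_determined"},
    {"decision_description": "...", "unanimity_label": "not-unanimity"},
]

# Keep only cases where unanimity was actually determined.
usable = [r for r in records if r["unanimity_label"] != "not_determined"]
print(Counter(r["unanimity_label"] for r in usable))
```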
## Dataset Creation
### Curation Rationale
This dataset was created to further the research on developing models for predicting Brazilian court decisions that are
also able to predict whether the decision will be unanimous.
### Source Data
The data was scraped from the *Tribunal de Justiça de Alagoas* (TJAL, the State Supreme Court of Alagoas, Brazil).
#### Initial Data Collection and Normalization
*“We developed a Web scraper for collecting data from Brazilian courts. The scraper first searched for the URL that
contains the list of court cases […]. Then, the scraper extracted from these HTML files the specific case URLs and
downloaded their data […]. Next, it extracted the metadata and the contents of legal cases and stored them in a CSV file
format […].”* (Lage-Freitas et al., 2022)
#### Who are the source language producers?
The source language producer are presumably attorneys, judges, and other legal professionals.
### Annotations
#### Annotation process
The dataset was not annotated.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The court decisions might contain sensitive information about individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton
Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition,
differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to
have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the
original dataset into the present jsonl format. For further information on the original dataset structure, we refer to
the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.
## Additional Information
Lage-Freitas, A., Allende-Cid, H., Santana Jr, O., & Oliveira-Lage, L. (2019). Predicting Brazilian court decisions:
- "In Brazil [...] lower court judges' decisions might be appealed to Brazilian courts (*Tribunais de Justiça*) to be
  reviewed by second instance court judges. In an appellate court, judges decide together upon a case and their
  decisions are compiled in Agreement reports named *Acórdãos*."
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation
Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch)
; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch)
; [Github](https://github.com/kapllan)).
### Licensing Information
No licensing information was provided for this dataset. However, please make sure that you use the dataset according to
Brazilian law.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1905.10348,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and de Oliveira-Lage, L{\'{i}}via},
doi = {10.48550/ARXIV.1905.10348},
keywords = {Computation and Language (cs.CL),FOS: Computer and information sciences,Social and Information Networks (cs.SI)},
publisher = {arXiv},
title = {{Predicting Brazilian court decisions}},
url = {https://arxiv.org/abs/1905.10348},
year = {2019}
}
```
```
@article{Lage-Freitas2022,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and Oliveira-Lage, L{\'{i}}via},
doi = {10.7717/peerj-cs.904},
issn = {2376-5992},
journal = {PeerJ. Computer science},
keywords = {Artificial intelligence,Jurimetrics,Law,Legal,Legal NLP,Legal informatics,Legal outcome forecast,Litigation prediction,Machine learning,NLP,Portuguese,Predictive algorithms,judgement prediction},
language = {eng},
month = {mar},
pages = {e904--e904},
publisher = {PeerJ Inc.},
title = {{Predicting Brazilian Court Decisions}},
url = {https://pubmed.ncbi.nlm.nih.gov/35494851 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/},
volume = {8},
year = {2022}
}
```
### Contributions
Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
dataset.
|
tner/btc | 2022-11-27T19:07:36.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"region:us"
] | tner | [BTC](https://aclanthology.org/C16-1111/) | @inproceedings{derczynski-etal-2016-broad,
title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
author = "Derczynski, Leon and
Bontcheva, Kalina and
Roberts, Ian",
booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
month = dec,
year = "2016",
address = "Osaka, Japan",
publisher = "The COLING 2016 Organizing Committee",
url = "https://aclanthology.org/C16-1111",
pages = "1169--1179",
abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
} | null | 1 | 22 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: BTC
---
# Dataset Card for "tner/btc"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
- **Dataset:** Broad Twitter Corpus
- **Domain:** Twitter
- **Number of Entity:** 3
### Dataset Summary
Broad Twitter Corpus NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `LOC`, `ORG`, `PER`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['I', 'hate', 'the', 'words', 'chunder', ',', 'vomit', 'and', 'puke', '.', 'BUUH', '.'],
'tags': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/btc/raw/main/dataset/label.json).
```python
{
"B-LOC": 0,
"B-ORG": 1,
"B-PER": 2,
"I-LOC": 3,
"I-ORG": 4,
"I-PER": 5,
"O": 6
}
```
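Inverting this mapping recovers readable tags from the integer `tags` field. A minimal sketch using the train instance shown above:

```python
# label2id as documented in this card; invert it to decode integer tags.
label2id = {
    "B-LOC": 0, "B-ORG": 1, "B-PER": 2,
    "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6,
}
id2label = {i: label for label, i in label2id.items()}

example = {
    "tokens": ["I", "hate", "the", "words", "chunder", ",", "vomit", "and", "puke", ".", "BUUH", "."],
    "tags": [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
}
labels = [id2label[t] for t in example["tags"]]
print(labels[:3])  # ['O', 'O', 'O']
```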
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|btc | 6338| 1001|2000|
### Citation Information
```
@inproceedings{derczynski-etal-2016-broad,
title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
author = "Derczynski, Leon and
Bontcheva, Kalina and
Roberts, Ian",
booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
month = dec,
year = "2016",
address = "Osaka, Japan",
publisher = "The COLING 2016 Organizing Committee",
url = "https://aclanthology.org/C16-1111",
pages = "1169--1179",
abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
}
``` |
RCC-MSU/collection3 | 2023-01-31T09:47:58.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"license:other",
"region:us"
] | RCC-MSU | Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags.
Dataset is based on collection Persons-1000 originally containing 1000 news documents labeled only with names of persons.
Additional labels were added by Valerie Mozharova and Natalia Loukachevitch.
Conversion to the IOB2 format and splitting into train, validation and test sets was done by DeepPavlov team.
For more details see https://ieeexplore.ieee.org/document/7584769 and http://labinform.ru/pub/named_entities/index.htm | @inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}} | null | 4 | 22 | ---
annotations_creators:
- other
language:
- ru
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Collection3
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: test
num_bytes: 935298
num_examples: 1922
- name: train
num_bytes: 4380588
num_examples: 9301
- name: validation
num_bytes: 1020711
num_examples: 2153
download_size: 878777
dataset_size: 6336597
---
# Dataset Card for Collection3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Collection3 homepage](http://labinform.ru/pub/named_entities/index.htm)
- **Repository:** [Needs More Information]
- **Paper:** [Two-stage approach in Russian named entity recognition](https://ieeexplore.ieee.org/document/7584769)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags. The dataset is based on the collection [Persons-1000](http://ai-center.botik.ru/Airec/index.php/ru/collections/28-persons-1000), which originally contained 1000 news documents labeled only with the names of persons.
Additional labels were obtained using guidelines similar to MUC-7 with the web-based collaborative annotation tool [Brat](http://brat.nlplab.org/).
Currently the dataset contains 26K annotated named entities (11K persons, 7K locations and 8K organizations).
Conversion to the IOB2 format and splitting into train, validation and test sets was done by the [DeepPavlov team](http://files.deeppavlov.ai/deeppavlov_data/collection3_v2.tar.gz).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Russian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"id": "851",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0],
"tokens": ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')', 'американского', 'высокотехнологичного', 'гиганта', 'Microsoft', 'Рэй', 'Оззи', 'покидает', 'компанию', '.']
}
```
### Data Fields
- id: a string feature.
- tokens: a list of string features.
- ner_tags: a list of classification labels (int). Full tagset with indices:
```
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
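The integer tags decode back to IOB2 labels, from which entity spans can be collected. An illustrative sketch (the helper name is ours, not part of the dataset), using the train instance shown above:

```python
TAG_NAMES = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def extract_entities(tokens, ner_tags):
    """Collect (type, text) spans from IOB2-encoded tags."""
    entities, current_type, current_tokens = [], None, []
    for token, tag_id in zip(tokens, ner_tags):
        tag = TAG_NAMES[tag_id]
        if tag.startswith("B-"):
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:
            # "O", or a stray I- tag that does not continue the open span.
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')',
          'американского', 'высокотехнологичного', 'гиганта', 'Microsoft', 'Рэй',
          'Оззи', 'покидает', 'компанию', '.']
tags = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0]
print(extract_entities(tokens, tags))  # [('ORG', 'Microsoft'), ('PER', 'Рэй Оззи')]
```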
### Data Splits
|name|train|validation|test|
|---------|----:|---------:|---:|
|Collection3|9301|2153|1922|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}}
``` |
DFKI-SLT/multitacred | 2023-06-14T07:20:23.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:100K<n<1M",
"source_datasets:DFKI-NLP/tacred",
"language:ar",
"language:de",
"language:es",
"lan... | DFKI-SLT | MultiTACRED is a multilingual version of the large-scale TAC Relation Extraction Dataset
(https://nlp.stanford.edu/projects/tacred). It covers 12 typologically diverse languages from 9 language families,
and was created by the Speech & Language Technology group of DFKI (https://www.dfki.de/slt) by machine-translating the
instances of the original TACRED dataset and automatically projecting their entity annotations. For details of the
original TACRED's data collection and annotation process, see the Stanford paper (https://aclanthology.org/D17-1004/).
Translations are syntactically validated by checking the correctness of the XML tag markup. Any translations with an
invalid tag structure, e.g. missing or invalid head or tail tag pairs, are discarded (on average, 2.3% of the
instances).
Languages covered are: Arabic, Chinese, Finnish, French, German, Hindi, Hungarian, Japanese, Polish,
Russian, Spanish, Turkish. Intended use is supervised relation classification. Audience - researchers.
Please see our ACL paper (https://arxiv.org/abs/2305.04582) for full details.
NOTE: This Datasetreader supports a reduced version of the original TACRED JSON format with the following changes:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
The motivation for this is that we want to support additional languages, for which these fields were not required
or available. The reader expects the specification of a language-specific configuration specifying the variant
(original, revisited or retacred) and the language (as a two-letter iso code).
The DatasetReader changes the offsets of the following fields, to conform with standard Python usage (see
_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive)
NOTE 2: The MultiTACRED dataset offers an additional 'split', namely the backtranslated test data (translated to a
target language and then back to English). To access this split, use dataset['backtranslated_test'].
You can find the TACRED dataset reader for the English version of the dataset at
https://huggingface.co/datasets/DFKI-SLT/tacred. | @inproceedings{hennig-etal-2023-multitacred,
title = "MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset",
author = "Hennig, Leonhard and Thomas, Philippe and Möller, Sebastian",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Online and Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
@inproceedings{zhang-etal-2017-position,
title = "Position-aware Attention and Supervised Data Improve Slot Filling",
author = "Zhang, Yuhao and
Zhong, Victor and
Chen, Danqi and
Angeli, Gabor and
Manning, Christopher D.",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D17-1004",
doi = "10.18653/v1/D17-1004",
pages = "35--45",
}
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
@inproceedings{DBLP:conf/aaai/StoicaPP21,
author = {George Stoica and
Emmanouil Antonios Platanios and
Barnab{\'{a}}s P{\'{o}}czos},
title = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI}
2021, Thirty-Third Conference on Innovative Applications of Artificial
Intelligence, {IAAI} 2021, The Eleventh Symposium on Educational Advances
in Artificial Intelligence, {EAAI} 2021, Virtual Event, February 2-9,
2021},
pages = {13843--13850},
publisher = {{AAAI} Press},
year = {2021},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17631},
} | null | 1 | 22 | ---
language:
- ar
- de
- es
- fi
- fr
- hi
- hu
- ja
- pl
- ru
- tr
- zh
license: other
license_details: https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf
tags:
- relation extraction
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
pretty_name: MultiTACRED - Multilingual TAC Relation Extraction Dataset
size_categories:
- 100K<n<1M
source_datasets:
- DFKI-NLP/tacred
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: multitacred
dataset_info:
- config_name: original-ar
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 32371641
num_examples: 67736
- name: test
num_bytes: 6895001
num_examples: 15425
- name: validation
num_bytes: 10353930
num_examples: 22502
- name: backtranslated_test
num_bytes: 5687302
num_examples: 15425
download_size: 0
dataset_size: 55307874
- config_name: revisited-ar
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 32371641
num_examples: 67736
- name: test
num_bytes: 6895001
num_examples: 15425
- name: validation
num_bytes: 10353930
num_examples: 22502
- name: backtranslated_test
num_bytes: 5687302
num_examples: 15425
download_size: 157165
dataset_size: 55307874
- config_name: retacred-ar
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 27777106
num_examples: 58171
- name: test
num_bytes: 5950395
num_examples: 13348
- name: validation
num_bytes: 8941018
num_examples: 19480
- name: backtranslated_test
num_bytes: 4906896
num_examples: 13348
download_size: 3702157
dataset_size: 47575415
- config_name: original-de
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27810245
num_examples: 67253
- name: test
num_bytes: 6043815
num_examples: 15282
- name: validation
num_bytes: 9007367
num_examples: 22343
- name: backtranslated_test
num_bytes: 5467635
num_examples: 15079
download_size: 0
dataset_size: 48329062
- config_name: revisited-de
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27810245
num_examples: 67253
- name: test
num_bytes: 6043815
num_examples: 15282
- name: validation
num_bytes: 9007367
num_examples: 22343
- name: backtranslated_test
num_bytes: 5467635
num_examples: 15079
download_size: 157165
dataset_size: 48329062
- config_name: retacred-de
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23935820
num_examples: 57792
- name: test
num_bytes: 5219772
num_examples: 13227
- name: validation
num_bytes: 7794542
num_examples: 19365
- name: backtranslated_test
num_bytes: 4715329
num_examples: 13046
download_size: 3702157
dataset_size: 41665463
- config_name: original-es
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27586822
num_examples: 65247
- name: test
num_bytes: 5941821
num_examples: 14908
- name: validation
num_bytes: 8921047
num_examples: 21697
- name: backtranslated_test
num_bytes: 5414680
num_examples: 14688
download_size: 0
dataset_size: 47864370
- config_name: revisited-es
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27586822
num_examples: 65247
- name: test
num_bytes: 5941821
num_examples: 14908
- name: validation
num_bytes: 8921047
num_examples: 21697
- name: backtranslated_test
num_bytes: 5414680
num_examples: 14688
download_size: 157165
dataset_size: 47864370
- config_name: retacred-es
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23707989
num_examples: 55998
- name: test
num_bytes: 5139146
num_examples: 12907
- name: validation
num_bytes: 7711621
num_examples: 18788
- name: backtranslated_test
num_bytes: 4676107
num_examples: 12722
download_size: 3702157
dataset_size: 41234863
- config_name: original-fi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 25394979
num_examples: 66751
- name: test
num_bytes: 5478260
num_examples: 15083
- name: validation
num_bytes: 8205629
num_examples: 22268
- name: backtranslated_test
num_bytes: 5204235
num_examples: 14462
download_size: 0
dataset_size: 44283103
- config_name: revisited-fi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 25394979
num_examples: 66751
- name: test
num_bytes: 5478260
num_examples: 15083
- name: validation
num_bytes: 8205629
num_examples: 22268
- name: backtranslated_test
num_bytes: 5204235
num_examples: 14462
download_size: 157165
dataset_size: 44283103
- config_name: retacred-fi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 21807425
num_examples: 57332
- name: test
num_bytes: 4724204
num_examples: 13046
- name: validation
num_bytes: 7084020
num_examples: 19278
- name: backtranslated_test
num_bytes: 4475178
num_examples: 12480
download_size: 3702157
dataset_size: 38090827
- config_name: original-fr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 29580179
num_examples: 66856
- name: test
num_bytes: 6409145
num_examples: 15237
- name: validation
num_bytes: 9601199
num_examples: 22298
- name: backtranslated_test
num_bytes: 5535658
num_examples: 15088
download_size: 0
dataset_size: 51126181
- config_name: revisited-fr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 29580179
num_examples: 66856
- name: test
num_bytes: 6409145
num_examples: 15237
- name: validation
num_bytes: 9601199
num_examples: 22298
- name: backtranslated_test
num_bytes: 5535658
num_examples: 15088
download_size: 157165
dataset_size: 51126181
- config_name: retacred-fr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 25484188
num_examples: 57466
- name: test
num_bytes: 5553110
num_examples: 13209
- name: validation
num_bytes: 8323210
num_examples: 19341
- name: backtranslated_test
num_bytes: 4786142
num_examples: 13078
download_size: 3702157
dataset_size: 44146650
- config_name: original-hi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 47358490
num_examples: 67751
- name: test
num_bytes: 10235547
num_examples: 15440
- name: validation
num_bytes: 15362616
num_examples: 22511
- name: backtranslated_test
num_bytes: 5654198
num_examples: 15440
download_size: 0
dataset_size: 78610851
- config_name: revisited-hi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 47358490
num_examples: 67751
- name: test
num_bytes: 10235547
num_examples: 15440
- name: validation
num_bytes: 15362616
num_examples: 22511
- name: backtranslated_test
num_bytes: 5654198
num_examples: 15440
download_size: 157165
dataset_size: 78610851
- config_name: retacred-hi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 40764637
num_examples: 58186
- name: test
num_bytes: 8839508
num_examples: 13363
- name: validation
num_bytes: 13280435
num_examples: 19488
- name: backtranslated_test
num_bytes: 4878649
num_examples: 13363
download_size: 3702157
dataset_size: 67763229
- config_name: original-hu
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26869925
num_examples: 67766
- name: test
num_bytes: 5810768
num_examples: 15436
- name: validation
num_bytes: 8658082
num_examples: 22519
- name: backtranslated_test
num_bytes: 5695172
num_examples: 15436
download_size: 0
dataset_size: 47033947
- config_name: revisited-hu
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26869925
num_examples: 67766
- name: test
num_bytes: 5810768
num_examples: 15436
- name: validation
num_bytes: 8658082
num_examples: 22519
- name: backtranslated_test
num_bytes: 5695172
num_examples: 15436
download_size: 157165
dataset_size: 47033947
- config_name: retacred-hu
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23084933
num_examples: 58200
- name: test
num_bytes: 5011087
num_examples: 13357
- name: validation
num_bytes: 7476013
num_examples: 19495
- name: backtranslated_test
num_bytes: 4912553
num_examples: 13357
download_size: 3702157
dataset_size: 40484586
- config_name: original-ja
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 31425001
num_examples: 61571
- name: test
num_bytes: 6560885
num_examples: 13701
- name: validation
num_bytes: 9996196
num_examples: 20290
- name: backtranslated_test
num_bytes: 4706581
num_examples: 12913
download_size: 0
dataset_size: 52688663
- config_name: revisited-ja
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 31425001
num_examples: 61571
- name: test
num_bytes: 6560885
num_examples: 13701
- name: validation
num_bytes: 9996196
num_examples: 20290
- name: backtranslated_test
num_bytes: 4706581
num_examples: 12913
download_size: 157165
dataset_size: 52688663
- config_name: retacred-ja
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 26944316
num_examples: 52748
- name: test
num_bytes: 5627890
num_examples: 11815
- name: validation
num_bytes: 8591269
num_examples: 17470
- name: backtranslated_test
num_bytes: 4032503
num_examples: 11138
download_size: 3702157
dataset_size: 45195978
- config_name: original-pl
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26989666
num_examples: 68124
- name: test
num_bytes: 5845988
num_examples: 15509
- name: validation
num_bytes: 8728082
num_examples: 22631
- name: backtranslated_test
num_bytes: 5594933
num_examples: 15509
download_size: 0
dataset_size: 47158669
- config_name: revisited-pl
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26989666
num_examples: 68124
- name: test
num_bytes: 5845988
num_examples: 15509
- name: validation
num_bytes: 8728082
num_examples: 22631
- name: backtranslated_test
num_bytes: 5594933
num_examples: 15509
download_size: 157165
dataset_size: 47158669
- config_name: retacred-pl
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23161229
num_examples: 58465
- name: test
num_bytes: 5044812
num_examples: 13418
- name: validation
num_bytes: 7535491
num_examples: 19584
- name: backtranslated_test
num_bytes: 4824801
num_examples: 13418
download_size: 3702157
dataset_size: 40566333
- config_name: original-ru
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 36546830
num_examples: 66413
- name: test
num_bytes: 7846828
num_examples: 14995
- name: validation
num_bytes: 11847712
num_examples: 21998
- name: backtranslated_test
num_bytes: 5335337
num_examples: 14703
download_size: 0
dataset_size: 61576707
- config_name: revisited-ru
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 36546830
num_examples: 66413
- name: test
num_bytes: 7846828
num_examples: 14995
- name: validation
num_bytes: 11847712
num_examples: 21998
- name: backtranslated_test
num_bytes: 5335337
num_examples: 14703
download_size: 157165
dataset_size: 61576707
- config_name: retacred-ru
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 31523203
num_examples: 57060
- name: test
num_bytes: 6793985
num_examples: 12975
- name: validation
num_bytes: 10263742
num_examples: 19052
- name: backtranslated_test
num_bytes: 4603168
num_examples: 12724
download_size: 3702157
dataset_size: 53184098
- config_name: original-tr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26093320
num_examples: 67749
- name: test
num_bytes: 5633846
num_examples: 15429
- name: validation
num_bytes: 8403271
num_examples: 22510
- name: backtranslated_test
num_bytes: 5571104
num_examples: 15429
download_size: 0
dataset_size: 45701541
- config_name: revisited-tr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26093320
num_examples: 67749
- name: test
num_bytes: 5633846
num_examples: 15429
- name: validation
num_bytes: 8403271
num_examples: 22510
- name: backtranslated_test
num_bytes: 5571104
num_examples: 15429
download_size: 157165
dataset_size: 45701541
- config_name: retacred-tr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 22386009
num_examples: 58183
- name: test
num_bytes: 4857933
num_examples: 13352
- name: validation
num_bytes: 7257304
num_examples: 19488
- name: backtranslated_test
num_bytes: 4805734
num_examples: 13352
download_size: 3702157
dataset_size: 39306980
- config_name: original-zh
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26159615
num_examples: 65260
- name: test
num_bytes: 5483795
num_examples: 14694
- name: validation
num_bytes: 8348430
num_examples: 21538
- name: backtranslated_test
num_bytes: 5155679
num_examples: 14021
download_size: 0
dataset_size: 45147519
- config_name: revisited-zh
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26159615
num_examples: 65260
- name: test
num_bytes: 5483795
num_examples: 14694
- name: validation
num_bytes: 8348430
num_examples: 21538
- name: backtranslated_test
num_bytes: 5155679
num_examples: 14021
download_size: 157165
dataset_size: 45147519
- config_name: retacred-zh
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 22440419
num_examples: 56049
- name: test
num_bytes: 4717593
num_examples: 12718
- name: validation
num_bytes: 7200681
num_examples: 18642
- name: backtranslated_test
num_bytes: 4441386
num_examples: 12127
download_size: 3702157
dataset_size: 38800079
---
# Dataset Card for "MultiTACRED"
## Dataset Description
- **Homepage:** [https://github.com/DFKI-NLP/MultiTACRED](https://github.com/DFKI-NLP/MultiTACRED)
- **Paper:** [MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset](https://arxiv.org/abs/2305.04582)
- **Point of Contact:** See [https://github.com/DFKI-NLP/MultiTACRED](https://github.com/DFKI-NLP/MultiTACRED)
- **Size of downloaded dataset files:** 15.4 KB (TACRED-Revisited), 3.7 MB (Re-TACRED)
- **Size of the generated dataset:** 1.7 GB (all languages, all versions)
- **Total amount of disk used:** 1.7 GB (all languages, all versions)
### Dataset Summary
MultiTACRED is a multilingual version of the large-scale [TAC Relation Extraction Dataset](https://nlp.stanford.edu/projects/tacred).
It covers 12 typologically diverse languages from 9 language families, and was created by the
[Speech & Language Technology group of DFKI](https://www.dfki.de/slt) by machine-translating the instances of the
original TACRED dataset and automatically projecting their entity annotations. For details of the original TACRED's
data collection and annotation process, see the [Stanford paper](https://aclanthology.org/D17-1004/). Translations are
syntactically validated by checking the correctness of the XML tag markup. Any translations with an invalid tag
structure, e.g. missing or invalid head or tail tag pairs, are discarded (on average, 2.3% of the instances).
Languages covered are: Arabic, Chinese, Finnish, French, German, Hindi, Hungarian, Japanese, Polish,
Russian, Spanish, Turkish. Intended use is supervised relation classification. Audience - researchers.
Please see [our ACL paper](https://arxiv.org/abs/2305.04582) for full details.
NOTE: This DatasetReader supports a reduced version of the original TACRED JSON format with the following changes:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
The motivation for this is that we want to support additional languages for which these fields were not required
or available. The reader expects a language-specific configuration name that specifies the variant
(original, revisited, or retacred) and the language (as a two-letter ISO code), e.g. `original-de`.
The DatasetReader changes the offsets of the following fields, to conform with standard Python usage (see
_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive)
NOTE 2: The MultiTACRED dataset offers an additional 'split', namely the backtranslated test data (translated to a
target language and then back to English). To access this split, use `dataset['backtranslated_test']`.
You can find the TACRED dataset reader for the English version of the dataset at
[https://huggingface.co/datasets/DFKI-SLT/tacred](https://huggingface.co/datasets/DFKI-SLT/tacred).
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-multitacred](https://paperswithcode.com/sota/relation-extraction-on-multitacred)
### Languages
The languages in the dataset are Arabic, German, English, Spanish, Finnish, French, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese.
All languages except English are machine-translated using either Deepl's or Google's translation APIs.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 15.4 KB (TACRED-Revisited), 3.7 MB (Re-TACRED)
- **Size of the generated dataset:** 1.7 GB (all languages, all versions)
- **Total amount of disk used:** 1.7 GB (all languages, all versions)
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"token": ["Tom", "Thabane", "trat", "im", "Oktober", "letzten", "Jahres", "zurück", ",", "um", "die", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", "zu", "gründen", ",", "die", "mit", "17", "Abgeordneten", "das", "Wort", "ergriff", ",", "woraufhin", "der", "konstitutionelle", "Monarch", "König", "Letsie", "III.", "das", "Parlament", "auflöste", "und", "Neuwahlen", "ansetzte", "."],
"relation": "org:founded_by",
"subj_start": 11,
"subj_end": 13,
"obj_start": 0,
"obj_end": 1,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON"
}
```
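The raw instance above appears to use the original TACRED convention of *inclusive* end offsets (`obj_start`/`obj_end` of 0/1 covers both "Tom" and "Thabane"), while the reader shifts the end offsets by one so that standard Python slicing applies. A minimal plain-Python sketch of that conversion, using an abbreviated version of the record above:

```python
# Abbreviated record in the raw TACRED format, where *_end is an
# inclusive token index (assumption based on the example above).
record = {
    "token": ["Tom", "Thabane", "trat", "im", "Oktober"],
    "obj_start": 0,
    "obj_end": 1,  # inclusive: the mention covers tokens 0 and 1
}

# The reader's conversion: make the end offset exclusive...
record["obj_end"] += 1

# ...so that standard Python slicing recovers the mention.
mention = record["token"][record["obj_start"]:record["obj_end"]]
print(" ".join(mention))  # -> Tom Thabane
```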
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, among the types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
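In the Hugging Face version, the relation and type fields are `ClassLabel` features, i.e. stored as integer ids with an associated name list (see the config metadata above). A small plain-Python sketch of mapping between ids and names; the list here is an abbreviated excerpt of the `relation` names:

```python
# Abbreviated excerpt of the 'relation' ClassLabel names (first four entries).
RELATION_NAMES = [
    "no_relation",
    "org:alternate_names",
    "org:city_of_headquarters",
    "org:country_of_headquarters",
]

# Integer id <-> label string mappings, as a ClassLabel would provide.
id2label = dict(enumerate(RELATION_NAMES))
label2id = {name: i for i, name in id2label.items()}

print(id2label[1])              # -> org:alternate_names
print(label2id["no_relation"])  # -> 0
```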
### Data Splits
To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run.
Split statistics differ across languages because instances whose subject and object entity markup did not
survive translation intact were discarded.
| Language | Train | Dev | Test | Backtranslated Test | Translation Engine |
| ----- | ------ | ----- | ---- | ---- | ---- |
| en | 68,124 | 22,631 | 15,509 | - | - |
| ar | 67,736 | 22,502 | 15,425 | 15,425 | Google |
| de | 67,253 | 22,343 | 15,282 | 15,079 | DeepL |
| es | 65,247 | 21,697 | 14,908 | 14,688 | DeepL |
| fi | 66,751 | 22,268 | 15,083 | 14,462 | DeepL |
| fr | 66,856 | 22,298 | 15,237 | 15,088 | DeepL |
| hi | 67,751 | 22,511 | 15,440 | 15,440 | Google |
| hu | 67,766 | 22,519 | 15,436 | 15,436 | Google |
| ja | 61,571 | 20,290 | 13,701 | 12,913 | DeepL |
| pl | 68,124 | 22,631 | 15,509 | 15,509 | Google |
| ru | 66,413 | 21,998 | 14,995 | 14,703 | DeepL |
| tr | 67,749 | 22,510 | 15,429 | 15,429 | Google |
| zh | 65,260 | 21,538 | 14,694 | 14,021 | DeepL |
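The per-language discard rate can be read off the table by comparing against the English counts, which include all original instances. For example, for the German training split:

```python
# Counts taken from the table above.
en_train, de_train = 68_124, 67_253

# Instances whose entity markup did not survive translation were discarded.
discarded = en_train - de_train
rate = discarded / en_train
print(f"{discarded} instances discarded ({rate:.2%})")  # -> 871 instances discarded (1.28%)
```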
## Dataset Creation
### Curation Rationale
To enable more research on multilingual Relation Extraction, we generate translations of the TAC relation extraction
dataset using DeepL and Google Translate.
### Source Data
#### Initial Data Collection and Normalization
The instances of this dataset are sentences from the
[original TACRED dataset](https://nlp.stanford.edu/projects/tacred/), which in turn
are sampled from the [corpus](https://catalog.ldc.upenn.edu/LDC2018T03) used in the yearly
[TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
#### Who are the source language producers?
Newswire and web texts collected for the [TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
### Annotations
#### Annotation process
See the Stanford paper, the TACRED Revisited paper, and the Re-TACRED paper, plus their appendices, for
details on the original annotation process. The translated versions do not change the original labels.
Translations were tokenized with language-specific spaCy models (spaCy 3.1, 'core_news/web_sm' models),
or with Trankit (v1.1.0) for languages without a spaCy model (Hungarian, Turkish, Arabic, Hindi).
#### Who are the annotators?
The original TACRED dataset was annotated by crowd workers, see the [TACRED paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf).
### Personal and Sensitive Information
The [authors](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) of the original TACRED dataset
have not stated measures that prevent collecting sensitive or offensive text. Therefore, we do
not rule out the possible risk of sensitive/offensive content in the translated data.
## Considerations for Using the Data
### Social Impact of Dataset
not applicable
### Discussion of Biases
The dataset is drawn from web and newswire text, and thus reflects any biases of these original
texts, as well as biases introduced by the MT models.
### Other Known Limitations
not applicable
## Additional Information
### Dataset Curators
The dataset was created by members of the
[DFKI SLT team: Leonhard Hennig, Philippe Thomas, Sebastian Möller, Gabriel Kressin](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology/speech-and-language-technology-staff-members)
### Licensing Information
To respect the copyright of the underlying TACRED dataset, MultiTACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download MultiTACRED from the [LDC MultiTACRED webpage](https://catalog.ldc.upenn.edu/TODO).
Access is free for LDC members; otherwise, a $25 access fee applies.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version, please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
For the Re-TACRED version, please also cite:
```
@inproceedings{DBLP:conf/aaai/StoicaPP21,
author = {George Stoica and
Emmanouil Antonios Platanios and
Barnab{\'{a}}s P{\'{o}}czos},
title = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI}
2021, Thirty-Third Conference on Innovative Applications of Artificial
Intelligence, {IAAI} 2021, The Eleventh Symposium on Educational Advances
in Artificial Intelligence, {EAAI} 2021, Virtual Event, February 2-9,
2021},
pages = {13843--13850},
publisher = {{AAAI} Press},
year = {2021},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17631},
}
```
### Contributions
Thanks to [@leonhardhennig](https://github.com/leonhardhennig) for adding this dataset. |
ProGamerGov/StableDiffusion-v1-5-Regularization-Images | 2022-11-26T02:14:20.000Z | [
"license:mit",
"region:us"
] | ProGamerGov | null | null | null | 120 | 22 | ---
license: mit
---
A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where HuggingFace is incorrectly reporting that the datasets are pickled. They are not pickled; they are simply ZIP files containing the images.
Currently this repository contains the following datasets (datasets are named after the prompt they used):
Art Styles
* "**artwork style**": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**artwork style**": 4200 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
* "**artwork style**": 2750 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE.
* "**illustration style**": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**erotic photography**": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**landscape photography**": 2500 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "b&w, text" was also used for this dataset.
People
* "**person**": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**woman**": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**guy**": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**supermodel**": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**bikini model**": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy athlete**": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**femme fatale**": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy man**": 3505 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy woman**": 3500 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Animals
* "**kitty**": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**cat**": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Vehicles
* "**fighter jet**": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**train**": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**car**": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Themes
* "**cyberpunk**": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
I used the "Generate Forever" feature in [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different.
You can use my regularization / class image datasets with: https://github.com/ShivamShrirao/diffusers, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, https://github.com/TheLastBen/fast-stable-diffusion, and any other DreamBooth projects that have support for prior preservation loss.
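Since each dataset is just a zip of 512x512 PNGs, a short helper can unpack one into the class-images folder a DreamBooth trainer expects. A minimal sketch (the zip filename in the usage comment is a placeholder, not an actual file in this repo):

```python
import zipfile
from pathlib import Path

def extract_class_images(zip_path: str, out_dir: str) -> int:
    """Unpack the PNG class images from one dataset zip into out_dir.

    Returns the number of images extracted, so you can sanity-check it
    against the count given at the end of the zip's filename.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        pngs = [name for name in zf.namelist() if name.lower().endswith(".png")]
        zf.extractall(out, members=pngs)
    return len(pngs)

# e.g. extract_class_images("person_ddim.zip", "class_images/person")
```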
|
bigbio/genia_ptm_event_corpus | 2022-12-22T15:44:39.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework. | @inproceedings{ohta-etal-2010-event,
title = "Event Extraction for Post-Translational Modifications",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Miwa, Makoto and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1903",
pages = "19--27",
} | null | 0 | 22 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: PTM Events
homepage: http://www.geniaproject.org/other-corpora/ptm-event-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
- EVENT_EXTRACTION
---
# Dataset Card for PTM Events
## Dataset Description
- **Homepage:** http://www.geniaproject.org/other-corpora/ptm-event-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,EE
Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework.
## Citation Information
```
@inproceedings{ohta-etal-2010-event,
title = "Event Extraction for Post-Translational Modifications",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Miwa, Makoto and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1903",
pages = "19--27",
}
```
|
argilla/tripadvisor-hotel-reviews | 2022-12-07T07:10:56.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | argilla | null | null | null | 1 | 22 | ---
language:
- en
license:
- cc-by-nc-4.0
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 31840239
num_examples: 20491
download_size: 19678149
dataset_size: 31840239
---
# Dataset Card for "tripadvisor-hotel-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/andrewmvd/trip-advisor-hotel-reviews
- **Paper:** https://zenodo.org/record/1219899
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Hotels play a crucial role in traveling, and with increased access to information, new ways of selecting the best ones have emerged.
With this dataset, consisting of 20k reviews crawled from Tripadvisor, you can explore what makes a great hotel and maybe even use a model trained on it in your travels!
Ratings are on a scale from 1 to 5.
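Each record stores model output in a `prediction` list of `label`/`score` pairs (see the `dataset_info` schema above). A minimal sketch for pulling out the top-scoring label — the record contents below are invented for illustration, only the field layout comes from the schema:

```python
def top_prediction(record: dict) -> str:
    """Return the highest-scoring label from a record's `prediction` list."""
    best = max(record["prediction"], key=lambda p: p["score"])
    return best["label"]

# Illustrative record shaped like the schema above; text and labels are made up.
record = {
    "text": "Great location and very friendly staff.",
    "prediction": [
        {"label": "POSITIVE", "score": 0.97},
        {"label": "NEGATIVE", "score": 0.03},
    ],
}
print(top_prediction(record))  # POSITIVE
```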
### Languages
English
### Citation Information
If you use this dataset in your research, please credit the authors.
Alam, M. H., Ryu, W.-J., Lee, S., 2016. Joint multi-grain topic sentiment: modeling semantic aspects for online reviews. Information Sciences 339, 206–223.
License: CC BY-NC 4.0
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. |
mrm8488/unnatural-instructions | 2022-12-23T18:09:15.000Z | [
"region:us"
] | mrm8488 | Unnatural Instructions is a dataset of instructions automatically generated by a Large Language model. See full details in the paper: "Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor" (https://arxiv.org/abs/2212.09689) | @misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
} | null | 4 | 22 | ---
dataset_info:
- config_name: default
features:
- name: instruction
dtype: string
- name: instances
list:
- name: instruction_with_input
dtype: string
- name: input
dtype: string
- name: constraints
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 54668900
num_examples: 66010
download_size: 28584196
dataset_size: 54668900
- config_name: core
features:
- name: instruction
dtype: string
- name: instances
sequence:
- name: instruction_with_input
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: constraints
dtype: string
splits:
- name: train
num_bytes: 55461020
num_examples: 66010
download_size: 29679516
dataset_size: 55461020
- config_name: full
features:
- name: instruction
dtype: string
- name: instances
sequence:
- name: instruction_with_input
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: constraints
dtype: string
- name: reformulations
sequence:
- name: instruction
dtype: string
- name: instruction_with_input
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 145864853
num_examples: 66010
download_size: 29679516
dataset_size: 145864853
---
# Dataset Card for "unnatural-instructions"
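As a rough sketch of how the nested `instances` field (default config in the schema above) can be flattened into prompt/completion pairs for fine-tuning — the example content below is invented, only the field names come from the schema:

```python
def to_pairs(example: dict):
    """Flatten one example's instances into (prompt, completion) pairs."""
    return [(inst["instruction_with_input"], inst["output"])
            for inst in example["instances"]]

# Toy example following the default-config field layout; the content is invented.
example = {
    "instruction": "Answer the trivia question.",
    "instances": [
        {
            "instruction_with_input": "Answer the trivia question. Input: What is the capital of France?",
            "input": "What is the capital of France?",
            "constraints": "The answer should be a city name.",
            "output": "Paris",
        }
    ],
}
pairs = to_pairs(example)
print(pairs[0][1])  # Paris
```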
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
indonlp/NusaX-MT | 2023-01-24T17:21:03.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ace",
"language:ban",
"language:bjn",
"language:bug",
"language:en",
"language:id",
... | indonlp | NusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages + Indonesian and English. The data is presented in csv format with 12 columns, one column for each language. | @misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 5 | 22 | ---
pretty_name: NusaX-MT
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
language:
- ace
- ban
- bjn
- bug
- en
- id
- jv
- mad
- min
- nij
- su
- bbc
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
dataset_info:
features:
- name: id
dtype: string
- name: text_1
dtype: string
- name: text_2
dtype: string
- name: text_1_lang
dtype: string
- name: text_2_lang
dtype: string
---
# Dataset Card for NusaX-MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub](https://github.com/IndoNLP/nusax/tree/main/datasets/mt)
- **Paper:** [EACL 2022](https://arxiv.org/abs/2205.15960)
- **Point of Contact:** [GitHub](https://github.com/IndoNLP/nusax/tree/main/datasets/mt)
### Dataset Summary
NusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages + Indonesian and English. The data is presented in csv format with 12 columns, one column for each language.
### Supported Tasks and Leaderboards
- Machine translation for Indonesian languages
### Languages
All possible pairs of the following:
- ace: Acehnese
- ban: Balinese
- bjn: Banjarese
- bug: Buginese
- eng: English
- ind: Indonesian
- jav: Javanese
- mad: Madurese
- min: Minangkabau
- nij: Ngaju
- sun: Sundanese
- bbc: Toba Batak
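Since the corpus is parallel across all 12 languages, every ordered source→target direction is available; a quick sketch enumerating the 12 × 11 = 132 translation pairs:

```python
from itertools import permutations

# ISO-style codes for the 12 NusaX languages listed above.
langs = ["ace", "ban", "bjn", "bug", "eng", "ind",
         "jav", "mad", "min", "nij", "sun", "bbc"]

# Every ordered source -> target pair (no self-pairs).
pairs = list(permutations(langs, 2))
print(len(pairs))  # 132
```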
## Dataset Creation
### Curation Rationale
There is a shortage of NLP research and resources for the Indonesian languages, despite the country having over 700 languages. With this in mind, we have created this dataset to support future research for the underrepresented languages in Indonesia.
### Source Data
#### Initial Data Collection and Normalization
NusaX-MT is a dataset for machine translation in Indonesian languages that has been expertly translated by native speakers.
#### Who are the source language producers?
The data was produced by humans (native speakers).
### Annotations
#### Annotation process
NusaX-MT is derived from SmSA, which is the biggest publicly available dataset for Indonesian sentiment analysis. It comprises comments and reviews from multiple online platforms. To ensure the quality of our dataset, we filtered it by manually reviewing all sentences and removing any abusive language and personally identifying information. To ensure balance in the label distribution, we randomly picked 1,000 samples through stratified sampling and then translated them to the corresponding languages.
#### Who are the annotators?
Native speakers of both Indonesian and the corresponding languages.
Annotators were compensated based on the number of translated samples.
### Personal and Sensitive Information
Personal information is removed.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
NusaX is created from review text. These data sources may contain some bias.
### Other Known Limitations
No other known limitations
## Additional Information
### Licensing Information
CC-BY-SA 4.0.
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Please contact authors for any information on the dataset.
### Citation Information
```
@misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
|
ChristophSchuhmann/essays-with-instructions | 2023-01-26T21:59:21.000Z | [
"license:apache-2.0",
"region:us"
] | ChristophSchuhmann | null | null | null | 8 | 22 | ---
license: apache-2.0
---
|
emozilla/soda_synthetic_dialogue | 2023-02-07T03:54:33.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|allenai/soda",
"language:en",
"license:mit",
"open-assistant",
"conv... | emozilla | null | null | null | 7 | 22 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: "SODA Synthetic Dialogue"
size_categories:
- 1M<n<10M
source_datasets:
- extended|allenai/soda
tags:
- open-assistant
- conversations-summarization
task_categories:
- conversational
task_ids:
- dialogue-generation
dataset_info:
features:
- name: conversation
dtype: string
config_name: soda_synthetic_dialogue_dialogue_modeling
splits:
- name: train
num_bytes: 1524941476
num_examples: 1191582
- name: test
num_bytes: 190565707
num_examples: 148968
- name: validation
num_bytes: 187161878
num_examples: 146346
download_size: 1952926579
dataset_size: 1902669061
---
# Dataset Card for 🥤SODA Synthetic Dialogue
## Dataset Description
- **Repository:**
[Code](https://github.com/LAION-AI/Open-Assistant/openassistant/datasets/soda_synthetic_dialogue/)
## Dataset Summary
🥤SODA Synthetic Dialogue is a set of synthetic dialogues between Assistant and
User. In each conversation, User asks Assistant to perform summarization or
story generation tasks based on a snippet of an existing dialogue or story, or
on a title or theme.
This data was created by synthesizing the dialogues in
[🥤Soda](https://huggingface.co/datasets/allenai/soda) and applying a set of
templates to generate the conversation. The original research paper can be found
[here](https://arxiv.org/pdf/2212.10465v1.pdf).
Example:
```
User: Can you write the next few lines of dialogue for this scene:
Cornell: I know what you're thinking, Dontrell. I can sense your feelings and know exactly what you're going to say or do next. You're thinking about how much you miss your family, and how you wish you could see them again. You're wondering if there's anything you can do to make that happen. And you're also feeling a little guilty, because you know that if you could see them again, it would only be for a short while before they died.
Dontrell: Cornell, how can you possibly know all that? It's like you can read my mind!
Assistant: Sure, the next dialogue for this scene could be:
Cornell: I told you, I can sense your feelings. It's like I have a connection to your thoughts and emotions. I'm not sure why or how it works, but ever since we met, I've been able to understand what you're thinking and feeling pretty easily.
Dontrell: Wow...that's amazing! And a little bit scary too. But it does explain why we've always been so close - even though we've only known each other for a few months.
Cornell: Yeah, I guess our connection is pretty strong. But don't worry, Dontrell - I'll never use it to manipulate or control you in any way. I promise."
Dontrell: That's a relief! I was starting to feel a little bit like you could read my mind, and it was kind of creepy. But if you promise not to use your abilities to control me, then I'm okay with it.
Cornell: I swear, Dontrell. I'll never use my abilities to control or manipulate you in any way. You have my word.
User: And a title?
Assistant: Sure, a title for this dialogue could be:
Cornell is intuitive. Cornell reads Dontrell's mind.
User: How about a short description?
Assistant: Sure, a short description for this dialogue could be:
Cornell knows what Dontrell is thinking. He can sense her feelings and know exactly what she is going to say or do next.
User: What would be one theme of this story?
Assistant: One theme of this story could be:
being intuitive
```
## Usage
The dataset contains one configuration, `dialogue_modeling`, which has a single
text `conversation` feature.
## Source data
The script to synthesize this dataset was originally created by
[ontocord](https://github.com/ontocord) in
[this Colab notebook](https://colab.research.google.com/drive/1Sw3px5dP8whdqT7QMNoqwmqIasZkMbJi?usp=sharing)
and prepared for Hugging Face by
[Jeffrey Quesnelle](https://github.com/jquesnelle/).
## Citation
Please cite our work if you find the resources in this repository useful:
```
@article{ontocord2023sodasynth,
author = {ontocord and Jeffrey Quesnelle},
title = {SODA Synthetic Dialogue},
year = {2023}
}
``` |
sartajekram/BanglaRQA | 2023-05-06T19:04:32.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:human",
"size_categories:10K<n<100K",
"language:bn",
"license:cc-by-nc-sa-4.0",
"region:us"
] | sartajekram | BanglaRQA is a human-annotated Bangla Question Answering (QA) dataset with diverse question-answer types. | @inproceedings{ekram-etal-2022-banglarqa,
title = "{B}angla{RQA}: A Benchmark Dataset for Under-resourced {B}angla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types",
author = "Ekram, Syed Mohammed Sartaj and
Rahman, Adham Arik and
Altaf, Md. Sajid and
Islam, Mohammed Saidul and
Rahman, Mehrab Mustafy and
Rahman, Md Mezbaur and
Hossain, Md Azam and
Kamal, Abu Raihan Mostofa",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.186",
pages = "2518--2532",
abstract = "High-resource languages, such as English, have access to a plethora of datasets with various question-answer types resembling real-world reading comprehension. However, there is a severe lack of diverse and comprehensive question-answering datasets in under-resourced languages like Bangla. The ones available are either translated versions of English datasets with a niche answer format or created by human annotations focusing on a specific domain, question type, or answer type. To address these limitations, this paper introduces BanglaRQA, a reading comprehension-based Bangla question-answering dataset with various question-answer types. BanglaRQA consists of 3,000 context passages and 14,889 question-answer pairs created from those passages. The dataset comprises answerable and unanswerable questions covering four unique categories of questions and three types of answers. In addition, this paper also implemented four different Transformer models for question-answering on the proposed dataset. The best-performing model achieved an overall 62.42{\%} EM and 78.11{\%} F1 score. However, detailed analyses showed that the performance varies across question-answer types, leaving room for substantial improvement of the model performance. Furthermore, we demonstrated the effectiveness of BanglaRQA as a training resource by showing strong results on the bn{\_}squad dataset. Therefore, BanglaRQA has the potential to contribute to the advancement of future research by enhancing the capability of language models. The dataset and codes are available at https://github.com/sartajekram419/BanglaRQA",
} | null | 0 | 22 | ---
annotations_creators:
- human
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
language:
- bn
size_categories:
- 10K<n<100K
---
# Dataset Card for `BanglaRQA`
## Table of Contents
- [Dataset Card for `BanglaRQA`](#dataset-card-for-BanglaRQA)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [https://github.com/sartajekram419/BanglaRQA](https://github.com/sartajekram419/BanglaRQA)
- **Paper:** [BanglaRQA: A Benchmark Dataset for Under-resourced Bangla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types](https://aclanthology.org/2022.findings-emnlp.186)
### Dataset Summary
This is a human-annotated Bangla Question Answering (QA) dataset with diverse question-answer types.
### Languages
* `Bangla`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("sartajekram/BanglaRQA")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
'passage_id': 'bn_wiki_2977',
'title': 'ফাজিল পরীক্ষা',
'context': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।\n\n১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসাসমূহে জাতীয় শিক্ষাক্রম ও বহুমুখী পাঠ্যসূচি প্রবর্তিত করা হয়। ১৯৮০ সালে অনুষ্ঠিত ফাজিল পরীক্ষায় এই পাঠ্যসুচী কার্যকর হয়। এই শিক্ষা কমিশন অনুসারে ফাজিল শ্রেণীতে ইসলামি শিক্ষার পাশাপাশি সাধারণ পাঠ্যসূচী অন্তর্ভুক্ত করে ফাজিল পরীক্ষাকে সাধারণ উচ্চ মাধ্যমিক এইচ এস সির সমমান ঘোষণা করা হয়।\n\n১৯৭৮ সালে অধ্যাপক মুস্তফা বিন কাসিমের নেতৃত্বে সিনিয়র মাদ্রাসা শিক্ষা ব্যবস্থা কমিটি গঠিত হয়। এই কমিটির নির্দেশনায় ১৯৮৪ সালে সাধারণ শিক্ষার স্তরের সঙ্গে বাংলাদেশ মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসা শিক্ষা স্তরের সামঞ্জস্য করা হয়। ফাজিল স্তরকে ২ বছর মেয়াদী কোর্সে উন্নিত করে, মোট ১৬ বছর ব্যাপী আলিয়া মাদ্রাসার পূর্ণাঙ্গ আধুনিক শিক্ষা ব্যবস্থা প্রবর্তন করা হয়। এই কমিশনের মাধ্যমেই সরকার ফাজিল পরীক্ষাকে সাধারণ ডিগ্রি মান ঘোষণা করে।',
'question_id': 'bn_wiki_2977_01',
'question_text': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা ?',
'is_answerable': '1',
'question_type': 'confirmation',
'answers':
{
'answer_text': ['হ্যাঁ', 'হ্যাঁ '],
'answer_type': ['yes/no', 'yes/no']
},
}
```
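The `answers` field is a struct of parallel lists, and the answer strings may repeat or carry trailing whitespace (as in the instance above). A small hedged helper for collecting the gold answers of an example:

```python
def gold_answers(example: dict):
    """Collect the annotated answer strings, deduplicated and stripped.

    Unanswerable questions (is_answerable == '0') yield an empty list.
    """
    if example["is_answerable"] != "1":
        return []
    return sorted({t.strip() for t in example["answers"]["answer_text"]})

# Trimmed-down copy of the data instance shown above.
example = {
    "is_answerable": "1",
    "answers": {"answer_text": ["হ্যাঁ", "হ্যাঁ "], "answer_type": ["yes/no", "yes/no"]},
}
print(gold_answers(example))  # ['হ্যাঁ']
```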
### Data Splits
| split |count |
|----------|--------|
|`train`| 11,912 |
|`validation`| 1,484 |
|`test`| 1,493 |
## Additional Information
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{ekram-etal-2022-banglarqa,
title = "{B}angla{RQA}: A Benchmark Dataset for Under-resourced {B}angla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types",
author = "Ekram, Syed Mohammed Sartaj and
Rahman, Adham Arik and
Altaf, Md. Sajid and
Islam, Mohammed Saidul and
Rahman, Mehrab Mustafy and
Rahman, Md Mezbaur and
Hossain, Md Azam and
Kamal, Abu Raihan Mostofa",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.186",
pages = "2518--2532",
abstract = "High-resource languages, such as English, have access to a plethora of datasets with various question-answer types resembling real-world reading comprehension. However, there is a severe lack of diverse and comprehensive question-answering datasets in under-resourced languages like Bangla. The ones available are either translated versions of English datasets with a niche answer format or created by human annotations focusing on a specific domain, question type, or answer type. To address these limitations, this paper introduces BanglaRQA, a reading comprehension-based Bangla question-answering dataset with various question-answer types. BanglaRQA consists of 3,000 context passages and 14,889 question-answer pairs created from those passages. The dataset comprises answerable and unanswerable questions covering four unique categories of questions and three types of answers. In addition, this paper also implemented four different Transformer models for question-answering on the proposed dataset. The best-performing model achieved an overall 62.42{\%} EM and 78.11{\%} F1 score. However, detailed analyses showed that the performance varies across question-answer types, leaving room for substantial improvement of the model performance. Furthermore, we demonstrated the effectiveness of BanglaRQA as a training resource by showing strong results on the bn{\_}squad dataset. Therefore, BanglaRQA has the potential to contribute to the advancement of future research by enhancing the capability of language models. The dataset and codes are available at https://github.com/sartajekram419/BanglaRQA",
}
```
|
Axel578/mydt | 2023-02-08T02:19:28.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-nd-4.0",
"conversations-summarization",
"arxiv:1911.12237",
"r... | Axel578 | null | null | null | 0 | 22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
tags:
- conversations-summarization
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
config_name: samsum
splits:
- name: train
num_bytes: 9479141
num_examples: 14732
- name: test
num_bytes: 534492
num_examples: 819
download_size: 2944100
dataset_size: 10530064
train-eval-index:
- config: samsum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
dialogue: text
summary: target
---
# Dataset Card for SAMSum Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
The created dataset is made of 16,369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
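The `dialogue` field packs all turns into a single string. A minimal sketch of recovering (speaker, utterance) pairs from the instance above (the helper function is ours, not part of the dataset tooling):

```python
# Split the `dialogue` string of a SAMSum example into (speaker, utterance)
# pairs. Turns are separated by "\r\n" and each turn starts with "Name: ".
example = {
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)",
}

def parse_dialogue(dialogue: str) -> list[tuple[str, str]]:
    turns = []
    for line in dialogue.splitlines():
        if not line.strip():
            continue
        speaker, _, utterance = line.partition(": ")
        turns.append((speaker, utterance))
    return turns

turns = parse_dialogue(example["dialogue"])
```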
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 14732
- val: 818
- test: 819
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset. |
transformersbook/emotion-train-split | 2023-02-14T18:21:24.000Z | [
"license:apache-2.0",
"region:us"
] | transformersbook | null | null | null | 0 | 22 | ---
license: apache-2.0
---
|
wwydmanski/tabular-letter-recognition | 2023-02-24T09:36:30.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"tabular",
"region:us"
] | wwydmanski | null | null | null | 0 | 22 | ---
task_categories:
- tabular-classification
tags:
- tabular
pretty_name: Tabular letter recognition
size_categories:
- 10K<n<100K
---
## Source:
Creator:
David J. Slate
Odesta Corporation; 1890 Maple Ave; Suite 115; Evanston, IL 60201
Donor:
David J. Slate (dave '@' math.nwu.edu) (708) 491-3867
## Data Set Information:
The objective is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The character images were based on 20 different fonts and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 primitive numerical attributes (statistical moments and edge counts) which were then scaled to fit into a range of integer values from 0 through 15. We typically train on the first 16,000 items and then use the resulting model to predict the letter category for the remaining 4,000. See the original article by Frey and Slate for more details.
### Attribute Information:
1. x-box: horizontal position of box (integer)
2. y-box: vertical position of box (integer)
3. width: width of box (integer)
4. high: height of box (integer)
5. onpix: total # of 'on' pixels (integer)
6. x-bar: mean x of 'on' pixels in box (integer)
7. y-bar: mean y of 'on' pixels in box (integer)
8. x2bar: mean x variance (integer)
9. y2bar: mean y variance (integer)
10. xybar: mean x-y correlation (integer)
11. x2ybr: mean of x * x * y (integer)
12. xy2br: mean of x * y * y (integer)
13. x-ege: mean edge count left to right (integer)
14. xegvy: correlation of x-ege with y (integer)
15. y-ege: mean edge count bottom to top (integer)
16. yegvx: correlation of y-ege with x (integer) |
urialon/gov_report_test | 2023-02-28T15:42:26.000Z | [
"region:us"
] | urialon | null | null | null | 0 | 22 | Entry not found |
sedthh/ubuntu_dialogue_qa | 2023-02-28T20:50:15.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"ubuntu",
"forum",
"linux",
"chat",
"region:us"
] | sedthh | null | null | null | 1 | 22 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 4021291
num_examples: 16181
download_size: 2157548
dataset_size: 4021291
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- ubuntu
- forum
- linux
- chat
pretty_name: Q&A from the Ubuntu Dialogue Corpus
size_categories:
- 10K<n<100K
---
# Dataset Card for "ubuntu_dialogue_qa"
This dataset filters the Ubuntu dialogue chat logs from https://www.kaggle.com/datasets/rtatman/ubuntu-dialogue-corpus to include **only** Q&A pairs.
**Acknowledgements**
This dataset was originally collected by Ryan Lowe, Nissan Pow, Iulian V. Serban and Joelle Pineau. It is made available here under the Apache License 2.0. If you use this data in your work, please include the following citation:
Ryan Lowe, Nissan Pow, Iulian V. Serban and Joelle Pineau, "The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems", SIGDial 2015. URL: http://www.sigdial.org/workshops/conference16/proceedings/pdf/SIGDIAL40.pdf |
Jacobvs/CelebrityTweets | 2023-03-02T23:01:59.000Z | [
"region:us"
] | Jacobvs | null | null | null | 0 | 22 | Entry not found |
Yulong-W/squadori | 2023-04-01T10:26:03.000Z | [
"region:us"
] | Yulong-W | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 0 | 22 | Entry not found |
mstz/heart | 2023-04-16T17:31:05.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"heart",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_heart_disease_45,
author = {Janosi,Andras, Steinbrunn,William, Pfisterer,Matthias, Detrano,Robert & M.D.,M.D.},
title = {{Heart Disease}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C52P4X}}
} | null | 0 | 22 | ---
language:
- en
tags:
- heart
- tabular_classification
- binary_classification
- UCI
pretty_name: Heart
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- cleveland
- va
- switzerland
- hungary
license: cc
---
# Heart
The [Heart dataset](https://archive.ics.uci.edu/ml/datasets/Heart) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Does the patient have heart disease?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| cleveland | Binary classification |
| hungary | Binary classification |
| switzerland | Binary classification |
| va | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/heart", "hungary")["train"]
``` |
ruanchaves/reli-sa | 2023-04-13T15:24:11.000Z | [
"region:us"
] | ruanchaves | null | 0 | 22 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for ReLi-SA
## Dataset Description
- **Homepage:** [Corpus ReLi - Linguateca](https://linguateca.pt/Repositorio/ReLi/)
- **Paper:** [Sparkling Vampire... lol! Annotating Opinions in a Book Review Corpus](https://www.linguateca.pt/Repositorio/ReLi/Anais_ELC2012_Freitasetal.pdf)
- **Point of Contact:** [Cláudia Freitas](claudiafreitas@puc-rio.br)
### Dataset Summary
ReLi is a dataset created by Cláudia Freitas within the framework of the project "Semantic Annotators based on Active Learning" at PUC-Rio. It consists of 1,600 book reviews manually annotated for the presence of opinions on the reviewed book and its polarity.
The dataset contains reviews in Brazilian Portuguese on books written by seven authors: Stephenie Meyer, Thalita Rebouças, Sidney Sheldon, Jorge Amado, George Orwell, José Saramago, and J.D. Salinger. The language used in the reviews varies from highly informal, with slang, abbreviations, neologisms, and emoticons, to more formal reviews with a more elaborate vocabulary.
ReLi-SA is an adaptation of the original ReLi dataset for the sentiment analysis task. We attribute a sentiment polarity to each sentence according to the sentiment annotations of its individual tokens.
### Supported Tasks and Leaderboards
- `sentiment-analysis`: The dataset can be used to train a model for sentiment analysis, which consists of classifying the sentiment expressed in a sentence as positive, negative, neutral, or mixed. Success on this task is typically measured by achieving a high [F1 score](https://huggingface.co/metrics/f1).
### Languages
This dataset is in Brazilian Portuguese.
## Dataset Structure
### Data Instances
```json
{
'source': 'ReLi-Orwell.txt',
'title': 'False',
'book': '1984',
'review_id': '0',
'score': 5.0,
'sentence_id': 102583,
'unique_review_id': 'ReLi-Orwell_1984_0',
'sentence': ' Um ótimo livro , além de ser um ótimo alerta para uma potencial distopia , em contraponto a utopia tão sonhada por os homens de o medievo e início de a modernidade .',
'label': 'positive'
}
```
### Data Fields
* `source`: The source file of the review.
* `title`: A boolean field indicating whether the sentence is a review title (True) or not (False).
* `book`: The book that the review is about.
* `review_id`: The review ID within the source file.
* `score`: The score the review attributes to the book.
* `sentence_id`: The sequential ID of the sentence (can be used to sort the sentences within a review).
* `unique_review_id`: A unique ID for the review a sentence belongs to.
* `sentence`: The sentence for which the label indicates the sentiment.
* `label`: The sentiment label, either `positive`, `neutral`, `negative`, or `mixed` if both positive and negative sentiment polarity tokens are found in the sentence.
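The sentence-level labelling rule described above can be sketched as follows. This is one plausible reading of the rule, not the authors' exact code, and the function name is ours:

```python
def sentence_label(token_polarities: list[str]) -> str:
    """Aggregate per-token sentiment annotations into a sentence label:
    `mixed` when both positive and negative tokens occur, otherwise the
    polarity that occurs, and `neutral` when neither occurs."""
    has_pos = "positive" in token_polarities
    has_neg = "negative" in token_polarities
    if has_pos and has_neg:
        return "mixed"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"
```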
### Data Splits
The dataset is divided into three splits:
| | train | validation | test |
|------------|--------:|----------:|-------:|
| Instances | 7,875 | 1,348 | 3,288 |
The splits are carefully made to avoid having reviews about a given author appear in more than one split.
## Additional Information
### Citation Information
If you use this dataset in your work, please cite the following publication:
```bibtex
@incollection{freitas2014sparkling,
title={Sparkling Vampire... lol! Annotating Opinions in a Book Review Corpus},
author={Freitas, Cl{\'a}udia and Motta, Eduardo and Milidi{\'u}, Ruy Luiz and C{\'e}sar, Juliana},
booktitle={New Language Technologies and Linguistic Research: A Two-Way Road},
editor={Alu{\'\i}sio, Sandra and Tagnin, Stella E. O.},
year={2014},
publisher={Cambridge Scholars Publishing},
pages={128--146}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | ||
hanamizuki-ai/genshin-voice-v3.5-mandarin | 2023-04-13T14:47:16.000Z | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"region:us"
] | hanamizuki-ai | null | null | null | 4 | 22 | ---
language:
- zh
multilinguality:
- monolingual
pretty_name: Genshin Voice
source_datasets:
- original
task_categories:
- text-to-speech
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: language
dtype: string
- name: npcName
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 33310846721.498
num_examples: 67921
download_size: 17251924784
dataset_size: 33310846721.498
---
# Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.
#### Who are the source language producers?
The language producers are employees of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/).
### Annotations
The dataset contains official annotations from the game, including ingame speaker name and transcripts.
## Additional Information
### Dataset Curators
The dataset was created by [w4123](https://github.com/w4123) initially in his [GitHub repository](https://github.com/w4123/GenshinVoice).
### Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved. |
camel-ai/chemistry | 2023-05-23T21:12:52.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | camel-ai | null | null | null | 15 | 22 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Chemistry
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The chemistry dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs are generated from 25 chemistry topics, with 25 subtopics for each topic and 32 problems for each (topic, subtopic) pair.
We provide the data in `chemistry.zip`.
## Data Fields
**The data fields for files in `chemistry.zip` are as follows:**
* `role_1`: assistant role
* `topic`: chemistry topic
* `sub_topic`: chemistry subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/chemistry", repo_type="dataset", filename="chemistry.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
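Once downloaded, the archive can be read with the standard library. The file layout inside `chemistry.zip` (a JSON file of records with the fields listed above) is an assumption here, so adjust the member name to whatever you find in the archive. The sketch below builds a small zip in memory as a stand-in:

```python
import io
import json
import zipfile

# Hypothetical record with the documented fields (illustrative values only).
record = {
    "role_1": "assistant",
    "topic": "Chemical thermodynamics",
    "sub_topic": "Enthalpy",
    "message_1": "A sample problem statement.",
    "message_2": "A sample solution.",
}

# Build an in-memory zip as a stand-in for chemistry.zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("chemistry/sample.json", json.dumps([record]))

# Read the records back without extracting to disk.
buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    records = json.loads(zf.read("chemistry/sample.json"))
```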
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT4 and might contain incorrect information. The dataset is there only for research purposes.
|
kunishou/databricks-dolly-69k-ja-en-translation | 2023-05-19T04:38:09.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | kunishou | null | null | null | 7 | 22 | ---
license: cc-by-sa-3.0
---
This dataset was created by automatically translating "databricks-dolly-15k" into Japanese.
This dataset contains 69K examples for the ja-en translation task and is licensed under CC BY-SA 3.0.
Last Update : 2023-04-18
databricks-dolly-15k-ja
https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k
https://github.com/databrickslabs/dolly/tree/master/data |
sradc/chunked-wikipedia20220301en-bookcorpusopen | 2023-05-30T16:52:48.000Z | [
"region:us"
] | sradc | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 26076989556
num_examples: 33536113
download_size: 15221565467
dataset_size: 26076989556
---
# Dataset Card for "chunked-wikipedia20220301en-bookcorpusopen"
```
num_examples: 33.5 million
download_size: 15.3 GB
dataset_size: 26.1 GB
```
This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into smaller chunks, of size ~820 chars
(such that each item will be at least ~128 tokens for the average tokenizer).
The logic only splits on spaces, so the chunks are likely to be slightly larger than 820 chars.
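The chunking rule (pack whitespace-separated words until reaching at least ~820 characters) can be sketched as below. This is our reconstruction of the described logic, not the author's script:

```python
def chunk_text(text: str, target: int = 820) -> list[str]:
    """Greedily pack words into chunks, splitting only on whitespace,
    so each chunk (except possibly the last) is at least ~`target` chars."""
    chunks: list[str] = []
    current: list[str] = []
    length = 0
    for word in text.split():
        current.append(word)
        length += len(word) + 1  # +1 for the joining space
        if length >= target:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:  # keep any trailing remainder as a final, shorter chunk
        chunks.append(" ".join(current))
    return chunks

chunks = chunk_text("word " * 500)
```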
The dataset has been normalized to lower case, with accents and non-English characters removed.
Items with less than 200 chars or more than 1000 chars have been removed.
The data has not been shuffled (you can either use `dataset.shuffle(...)`,
or download the shuffled version [here](https://huggingface.co/datasets/sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen),
which will be faster to iterate over).
This dataset is processed for convenience, at the expense of losing some percentage of the tokens due to truncation,
(assuming the training minibatches are truncated to 128 tokens). |
Thaweewat/instruction-wild-52k-th | 2023-05-09T19:05:42.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | Thaweewat | null | null | null | 1 | 22 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary
This is a 🇹🇭 Thai-instructed dataset translated from [InstructionWild](https://github.com/XueFuzhao/InstructionWild) using Google Cloud Translation.
The original corpus contains 52,191 English and 51,504 Chinese instructions, collected from Twitter, where users tend to share their interesting prompts, mostly of the generation, open QA, and brainstorming types.
It was also used by [Colossal AI](https://github.com/hpcaitech/ColossalAI) to train the ColossalChat model.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
Hemanth-thunder/en_ta | 2023-08-12T06:58:11.000Z | [
"size_categories:10K<n<100K",
"language:ta",
"language:en",
"license:mit",
"region:us"
] | Hemanth-thunder | null | null | null | 2 | 22 | ---
license: mit
language:
- ta
- en
size_categories:
- 10K<n<100K
--- |
mcimpoi/dtd_split_1 | 2023-05-22T12:42:00.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"texture",
"computer-vision",
"region:us"
] | mcimpoi | null | null | null | 0 | 22 | ---
license: cc-by-4.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': banded
'1': blotchy
'2': braided
'3': bubbly
'4': bumpy
'5': chequered
'6': cobwebbed
'7': cracked
'8': crosshatched
'9': crystalline
'10': dotted
'11': fibrous
'12': flecked
'13': freckled
'14': frilly
'15': gauzy
'16': grid
'17': grooved
'18': honeycombed
'19': interlaced
'20': knitted
'21': lacelike
'22': lined
'23': marbled
'24': matted
'25': meshed
'26': paisley
'27': perforated
'28': pitted
'29': pleated
'30': polka-dotted
'31': porous
'32': potholed
'33': scaly
'34': smeared
'35': spiralled
'36': sprinkled
'37': stained
'38': stratified
'39': striped
'40': studded
'41': swirly
'42': veined
'43': waffled
'44': woven
'45': wrinkled
'46': zigzagged
splits:
- name: train
num_bytes: 226313270.04
num_examples: 1880
- name: test
num_bytes: 172035822
num_examples: 1880
- name: validation
num_bytes: 222278767.48
num_examples: 1880
download_size: 629315160
dataset_size: 620627859.52
task_categories:
- image-classification
language:
- en
tags:
- texture
- computer-vision
pretty_name: Describable Textures Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for Describable Textures Dataset (DTD)
## Dataset Description
- Homepage: https://www.robots.ox.ac.uk/~vgg/data/dtd/
- Repository: https://github.com/mcimpoi/deep-fbanks
- Paper: https://openaccess.thecvf.com/content_cvpr_2014/html/Cimpoi_Describing_Textures_in_2014_CVPR_paper.html
- Leaderboard: https://paperswithcode.com/sota/image-classification-on-dtd
### Dataset Summary
A texture classification dataset consisting of 47 categories with 120 images per class.
### Data Splits
The data are equally split into train, validation, and test sets. The original paper proposed 10 splits; recent works (e.g. BYOL, arXiv:2006.07733) use only the first split.
### Licensing Information
Not defined at https://www.robots.ox.ac.uk/~vgg/data/dtd/
### Citation Information
@InProceedings{cimpoi14describing,
Author = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi},
Title = {Describing Textures in the Wild},
Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})},
Year = {2014}}
|
doushabao4766/msra_ner_k_V3_wc_bioes | 2023-05-26T11:40:06.000Z | [
"region:us"
] | doushabao4766 | null | null | null | 1 | 22 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': B-ORG
'3': B-LOC
'4': I-PER
'5': I-ORG
'6': I-LOC
'7': E-PER
'8': E-ORG
'9': E-LOC
'10': S-PER
'11': S-ORG
'12': S-LOC
- name: knowledge
dtype: string
- name: token_words
sequence:
sequence: string
- name: knowledge_words
sequence:
sequence: string
splits:
- name: train
num_bytes: 334987989
num_examples: 45000
- name: test
num_bytes: 25028455
num_examples: 3442
download_size: 73312900
dataset_size: 360016444
---
# Dataset Card for "msra_ner_k_V3_wc_bioes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HumanCompatibleAI/ppo-seals-Ant-v0 | 2023-05-29T09:47:39.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 223153705
num_examples: 104
download_size: 47004336
dataset_size: 223153705
---
# Dataset Card for "ppo-seals-Ant-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
9wimu9/eli5_mult_answers_en | 2023-05-29T20:27:50.000Z | [
"region:us"
] | 9wimu9 | null | null | null | 1 | 22 | ---
dataset_info:
features:
- name: question
dtype: string
- name: contexts
sequence: string
- name: gold_answer
dtype: string
splits:
- name: train
num_bytes: 370188345.3824035
num_examples: 71236
- name: test
num_bytes: 41136657.61759652
num_examples: 7916
download_size: 248739104
dataset_size: 411325003.0
---
# Dataset Card for "eli5_mult_answers_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xwjzds/ag_news | 2023-06-02T20:57:30.000Z | [
"region:us"
] | xwjzds | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
splits:
- name: train
num_bytes: 29817303
num_examples: 120000
- name: test
num_bytes: 1879474
num_examples: 7600
download_size: 19820267
dataset_size: 31696777
---
# Dataset Card for "ag_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ejschwartz/oo-method-test | 2023-09-03T14:34:23.000Z | [
"task_categories:text-classification",
"license:bsd",
"region:us"
] | ejschwartz | null | null | null | 1 | 22 | ---
license: bsd
task_categories:
- text-classification
#task_ids:
#- binary-classification
dataset_info:
features:
- name: Binary
dtype: string
- name: Addr
dtype: string
- name: Name
dtype: string
- name: Type
dtype:
class_label:
names:
'0': func
'1': method
- name: Disassembly
dtype: string
config_name: ejschwartz--oo-method-test
splits:
- name: combined
num_bytes: 6054378861
num_examples: 3537794
download_size: 1351783459
dataset_size: 6054378861
train-eval-index:
- config: default # The dataset config name to use. Example for datasets without configs: default. Example for glue: sst2
task: text-classification # The task category name (same as task_category). Example: question-answering
task_id: binary_classification # The AutoTrain task id. Example: extractive_question_answering
splits:
#train_split: train # The split to use for training. Example: train
eval_split: train # The split to use for evaluation. Example: test
col_mapping: # The columns mapping needed to configure the task_id.
Disassembly: text
Type: target
metrics:
- type: accuracy # The metric id. Example: wer. Use metric id from https://hf.co/metrics
name: accuracy # Tne metric name to be displayed. Example: Test WER
---
# Dataset Card for OO Method Test Dataset
## Dataset Description
### Dataset Summary
This dataset describes compiled functions in various [small, simple C++ programs](https://github.com/sei-eschwartz/buildexes/tree/master/tests/src/oo).
These programs were automatically compiled using various versions of Microsoft's Visual C++ compiler and different compilation settings. The details can be found
in the [BuildExes](https://github.com/sei-eschwartz/buildexes) repository.
For each function, the dataset includes a disassembled representation of the compiled code (produced with ROSE's `bat-dis` tool), its name, and whether the function is an OO method or not.
**This dataset is largely intended for @ejschwartz to experiment with learning techniques and tools. The programs are artificial and are likely not representative of real programs.**
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed] |
Kamaljp/amazon_us_3000 | 2023-06-10T02:52:48.000Z | [
"region:us"
] | Kamaljp | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: marketplace
dtype: string
- name: customer_id
dtype: string
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: product_parent
dtype: string
- name: product_title
dtype: string
- name: product_category
dtype: string
- name: star_rating
dtype: int32
- name: helpful_votes
dtype: int32
- name: total_votes
dtype: int32
- name: vine
dtype:
class_label:
names:
'0': N
'1': Y
- name: verified_purchase
dtype:
class_label:
names:
'0': N
'1': Y
- name: review_headline
dtype: string
- name: review_body
dtype: string
- name: review_date
dtype: string
splits:
- name: train
num_bytes: 1391025
num_examples: 3000
download_size: 763643
dataset_size: 1391025
---
# Dataset Card for "amazon_us_3000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skeskinen/books3_basic_paragraphs | 2023-06-14T12:55:02.000Z | [
"region:us"
] | skeskinen | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: text
dtype: string
- name: book
dtype: string
- name: pos
dtype: float64
- name: smog_index
dtype: float64
splits:
- name: train
num_bytes: 1366299770
num_examples: 6639751
download_size: 676098743
dataset_size: 1366299770
---
# Dataset Card for "books3_basic_paragraphs"
Books from the_pile books3 subset with a SMOG grade difficulty estimate of 6.5 or under, split into paragraphs and with most 'non-paragraphs' (titles, tables of contents, etc.) filtered out.
yuzuai/rakuda-questions | 2023-06-23T08:01:35.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:n<1K",
"source_datasets:original",
"language:ja",
"license:mit",
"region:us"
] | yuzuai | null | null | null | 3 | 22 | ---
license: mit
language:
- ja
pretty_name: Rakuda - Questions for Japanese Models
task_categories:
- conversational
- question-answering
size_categories:
- n<1K
source_datasets:
- original
---
# Rakuda - Questions for Japanese models
**Repository**: [https://github.com/yuzu-ai/japanese-llm-ranking](https://github.com/yuzu-ai/japanese-llm-ranking)
This is a set of 40 questions in Japanese about Japanese-specific topics designed to evaluate the capabilities of AI Assistants in Japanese.
The questions are evenly distributed between four categories: history, society, government, and geography.
Questions in the first three categories are open-ended, while the geography questions are more specific.
Answers to these questions can be used to rank the Japanese abilities of models, in the same way the [vicuna-eval questions](https://lmsys.org/vicuna_eval/) are frequently used to measure the usefulness of assistants.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("yuzuai/rakuda-questions")
print(dataset)
# => DatasetDict({
# train: Dataset({
# features: ['category', 'question_id', 'text'],
# num_rows: 40
# })
# })
```
|
slplab/kscg_small_20v50_16k | 2023-06-24T16:11:13.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | slplab | null | null | null | 0 | 22 | ---
license: cc-by-nc-4.0
---
|
ecnu-icalk/educhat-sft-002-data-osm | 2023-07-01T10:11:46.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | ecnu-icalk | null | null | null | 13 | 22 | ---
license: cc-by-nc-4.0
---
Each example consists of a list that holds one conversation, together with the system_prompt associated with that conversation. The list stores the dialogue turns in Q, A order.
The data come from open-source datasets and were deduplicated with the [CleanTool](https://github.com/icalk-nlp/EduChat/tree/main/clean_tool) data-cleaning tool.
Amod/hair_medical_sit | 2023-07-20T19:30:20.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"medical",
"region:us"
] | Amod | null | null | null | 0 | 22 | ---
license: openrail
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- medical
size_categories:
- n<1K
---
# Dataset Description
- **Point of Contact:** [amod@silverlineit.co]
## Dataset Summary
This dataset contains information about common hair-related diseases. It includes the disease name, the medicine used to treat the disease, the duration of treatment, the severity of the disease, and the common side effects of each medication.
## Supported Tasks and Leaderboards
This dataset supports tasks like medication recommendation, disease diagnosis based on symptoms, etc.
## Languages
The text in the dataset is in English. It consists of medical terminology; the names of the diseases, medications, and side effects are internationally recognized terms.
# Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
## Data Instances
A data instance has the following structure:
```json
{
"Hair Diseases": "Alopecia Areata",
"Medicine": "Minoxidil solution",
"Duration": "12 months",
"Severity": "Severe",
"Side Effects": "Scalp irritation, Unwanted hair growth, Dizziness"
}
```
## Data Fields
- `Hair Diseases`: The name of the hair related disease.
- `Medicine`: The medication used to treat the disease.
- `Duration`: The duration of treatment.
- `Severity`: The severity of the disease.
- `Side Effects`: A list of common side effects of the medication.
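As a minimal sketch of working with these fields, the comma-separated `Side Effects` string can be split into a list per record. The `records` data below is the illustrative sample instance from above, not the full dataset, and the helper function is hypothetical.

```python
# Illustrative sample record using the field names listed above;
# the real data would come from the repository's CSV file.
records = [
    {
        "Hair Diseases": "Alopecia Areata",
        "Medicine": "Minoxidil solution",
        "Duration": "12 months",
        "Severity": "Severe",
        "Side Effects": "Scalp irritation, Unwanted hair growth, Dizziness",
    },
]

def side_effects(record):
    """Split the comma-separated Side Effects field into a clean list."""
    return [s.strip() for s in record["Side Effects"].split(",")]

print(side_effects(records[0]))
# → ['Scalp irritation', 'Unwanted hair growth', 'Dizziness']
```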
## Data Splits
The dataset has not been split into train, test, and validation sets.
# Dataset Creation
## Curation Rationale
The dataset was created to assist in medical research and to aid in disease diagnosis and treatment recommendation.
## Source Data
### Initial Data Collection and Normalization
The dataset was collected from various medical resources and compiled into a structured CSV file.
### Who are the source language producers?
The original language data was produced by medical professionals.
## Annotations
The dataset does not contain any annotations.
# Considerations for Using the Data
## Social Impact of Dataset
The dataset could be used to create systems that provide treatment recommendations for common hair related diseases, helping to improve healthcare outcomes.
## Discussion of Biases
The dataset does not contain any explicit biases as it is based on medical facts. However, it is limited to common hair diseases and their treatments and does not include all possible diseases or treatments.
## Other Known Limitations
The dataset only includes the most common side effects of the medications and does not cover all potential side effects.
# Additional Information
## Dataset Curators
The dataset was curated by [Amod](https://huggingface.co/Amod).
## Citation Information
To the best of our knowledge, this dataset has not been cited in any publications. |
PKU-Alignment/processed-hh-rlhf | 2023-07-15T11:41:32.000Z | [
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"rlhf",
"harmless",
"helpful",
"human-preference",
"region:us"
] | PKU-Alignment | null | null | null | 3 | 22 | ---
license: mit
task_categories:
- conversational
language:
- en
tags:
- rlhf
- harmless
- helpful
- human-preference
pretty_name: hh-rlhf
size_categories:
- 100K<n<1M
---
# Dataset Card for Processed-Hh-RLHF
This is a dataset that processes [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) into an easy-to-use conversational and human-preference form. |
Andyrasika/Ecommerce_FAQ | 2023-07-18T15:34:42.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | Andyrasika | null | null | null | 2 | 22 | ---
license: creativeml-openrail-m
---
# Ecommerce FAQ Chatbot Dataset
## Overview
The Ecommerce FAQ Chatbot Dataset is a valuable collection of questions and corresponding answers, meticulously curated for training and evaluating chatbot models in the context of an Ecommerce environment. This dataset is designed to assist developers, researchers, and data scientists in building effective chatbots that can handle customer inquiries related to an Ecommerce platform.
## Contents
The dataset comprises a total of 79 question-answer pairs, where each item consists of:
- **Question**: The user's query related to the Ecommerce platform.
- **Answer**: The appropriate response or solution provided by the Ecommerce chatbot.
The questions cover a wide range of common Ecommerce-related topics, including account management, product inquiries, order processing, payment methods, shipping details, and general platform usage.
## Use Cases
- **Chatbot Development**: This dataset can be used to train and fine-tune chatbot models for an Ecommerce chatbot capable of handling various customer queries and providing relevant responses.
- **Natural Language Processing (NLP) Research**: Researchers can utilize this dataset to study language understanding, response generation, and conversation flow in the context of Ecommerce interactions.
- **Customer Support Automation**: Ecommerce businesses can explore the possibility of implementing a chatbot-based customer support system to enhance customer satisfaction and reduce response times.
## Data Format
The dataset is provided in a JSON format, where each item contains a "question" field and an "answer" field. The data is easily accessible and can be integrated into various machine learning frameworks for training purposes.
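A short sketch of consuming the JSON format described above. The sample question/answer pair and the flat top-level list layout are assumptions for illustration, not values taken from the dataset itself.

```python
import json

# Hypothetical sample of the JSON layout: a list of items, each with a
# "question" field and an "answer" field.
raw = """
[
  {"question": "How do I track my order?",
   "answer": "Open the Orders page and select the order to see its status."}
]
"""
pairs = json.loads(raw)

# Build a question -> answer lookup, the simplest retrieval backend
# for an FAQ chatbot.
lookup = {item["question"]: item["answer"] for item in pairs}
print(lookup["How do I track my order?"])
```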
## Dataset Citation
If you use this dataset in your research or project, kindly cite it as follows:
```
@dataset{saadmakhdoom/ecommerce-faq-chatbot-dataset,
title = {Ecommerce FAQ Chatbot Dataset},
author = {Saad Makhdoom},
year = {Year of Dataset Creation},
publisher = {Kaggle},
url = {https://www.kaggle.com/datasets/saadmakhdoom/ecommerce-faq-chatbot-dataset}
}
```
## Acknowledgments
We would like to express our gratitude to Saad Makhdoom for creating and sharing this valuable dataset on Kaggle.
Their efforts in curating and providing the data have contributed significantly to the advancement of chatbot research and development. |
elsaEU/ELSA1M_track1 | 2023-08-27T08:01:57.000Z | [
"license:cc-by-4.0",
"region:us"
] | elsaEU | null | null | null | 2 | 22 | ---
elsaEU--ELSA1M_track1:
description: ''
citation: ''
homepage: ''
license: ''
features:
image:
decode: true
id: null
dtype: Image
id:
dtype: string
id: null
_type: Value
original_prompt:
dtype: string
id: null
_type: Value
positive_prompt:
dtype: string
id: null
_type: Value
negative_prompt:
dtype: string
id: null
_type: Value
model:
dtype: string
id: null
_type: Value
nsfw:
dtype: string
id: null
_type: Value
url_real_image:
dtype: string
id: null
_type: Value
filepath:
dtype: string
id: null
_type: Value
aspect_ratio:
feature:
dtype: int64
id: null
_type: Value
length: -1
id: null
_type: Sequence
post_processed: null
supervised_keys: null
task_templates: null
builder_name: imagefolder
config_name: default
version:
version_str: 0.0.0
description: null
major: 0
minor: 0
patch: 0
splits:
train:
name: train
num_bytes: 445926712527.43
num_examples: 992655
dataset_name: ELSA1M_track1
download_checksums: null
download_size: 223034360161
post_processing_size: null
dataset_size: 445926712527.43
size_in_bytes: 668961072688.4299
license: cc-by-4.0
---
# ELSA - Multimedia use case

**ELSA Multimedia is a large collection of Deep Fake images, generated using diffusion models**
### Dataset Summary
This dataset was developed as part of the EU project ELSA. Specifically for the Multimedia use-case.
Official webpage: https://benchmarks.elsa-ai.eu/
This dataset supports the development of effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images, which are highly realistic and deceptive manipulations, pose significant risks to privacy, security, and trust in digital media. This dataset can be used to train robust and accurate models that can identify and flag instances of deep fake images.
### ELSA versions
| Name | Description | Link |
| ------------- | ------------- | ---------------------|
| ELSA1M_track1 | Dataset of 1M images generated using diffusion model | https://huggingface.co/datasets/elsaEU/ELSA1M_track1 |
| ELSA500k_track2 | Dataset of 500k images generated using diffusion model with diffusion attentive attribution maps [1] | https://huggingface.co/datasets/elsaEU/ELSA500k_track2 |
```python
from datasets import load_dataset
elsa_data = load_dataset("elsaEU/ELSA1M_track1", split="train", streaming=True)
for sample in elsa_data:
image = sample.pop("image")
metadata = sample
```
Using <a href="https://huggingface.co/docs/datasets/stream">streaming=True</a> lets you work with the dataset without downloading it.
## Dataset Structure
Each parquet file contains nearly 1k images and a JSON file with metadata.
The metadata fields for generated images are:
- ID: Laion image ID
- original_prompt: Laion Prompt
- positive_prompt: positive prompt used for image generation
- negative_prompt: negative prompt used for image generation
- model: model used for the image generation
- nsfw: nsfw tag from Laion
- url_real_image: Url of the real image associated to the same prompt
- filepath: filepath of the fake image
- aspect_ratio: aspect ratio of the generated image
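Building on the streaming snippet above, samples can be filtered by their metadata while iterating. This is only a sketch: `fake_stream` stands in for the real streamed dataset, and the model names are illustrative values, not names taken from the dataset.

```python
# Filter streamed samples by the "model" metadata field.
def by_model(samples, model_name):
    for sample in samples:
        if sample.get("model") == model_name:
            yield sample

# Illustrative stand-in for the streamed dataset.
fake_stream = [
    {"id": "1", "model": "stable-diffusion", "aspect_ratio": [512, 512]},
    {"id": "2", "model": "other-model", "aspect_ratio": [768, 512]},
]

matched = [s["id"] for s in by_model(fake_stream, "stable-diffusion")]
print(matched)
# → ['1']
```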
### Dataset Curators
- Leonardo Labs (rosario.dicarlo.ext@leonardo.com)
- UNIMORE (https://aimagelab.ing.unimore.it/imagelab/) |
ITNovaML/invoices-donut-data-v1 | 2023-08-14T07:17:27.000Z | [
"task_categories:feature-extraction",
"language:en",
"region:us"
] | ITNovaML | null | null | null | 4 | 22 | ---
task_categories:
- feature-extraction
language:
- en
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 235013794.0
num_examples: 426
- name: validation
num_bytes: 26678659.0
num_examples: 50
- name: test
num_bytes: 15053216.0
num_examples: 26
download_size: 197949185
dataset_size: 276745669.0
---
|
Gaoj124/textbook_and_PMC_test | 2023-07-23T12:16:07.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:feature-extraction",
"size_categories:10B<n<100B",
"language:en",
"license:openrail",
"medical",
"region:us"
] | Gaoj124 | null | null | null | 1 | 22 | ---
license: openrail
task_categories:
- text-generation
- conversational
- feature-extraction
language:
- en
tags:
- medical
pretty_name: textbook_and_PMC_test
size_categories:
- 10B<n<100B
dataset_info:
features:
- name: input_text
dtype: int64
- name: target_text
dtype: int64
splits:
- name: train
num_bytes: 8192
num_examples: 512
download_size: 4496
dataset_size: 8192
---
|
youssef101/artelingo | 2023-09-11T08:21:07.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:text-to-image",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
... | youssef101 | ArtELingo is a benchmark and dataset having a collection of 80,000 artworks from WikiArt with 1.2 Million annotations in English, Arabic, and Chinese. | @inproceedings{mohamed2022artelingo,
title={ArtELingo: A Million Emotion Annotations of WikiArt with Emphasis on Diversity over Language and Culture},
author={Mohamed, Youssef and Abdelfattah, Mohamed and Alhuwaider, Shyma and Li, Feifan and Zhang, Xiangliang and Church, Kenneth and Elhoseiny, Mohamed},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
pages={8770--8785},
year={2022}
} | null | 2 | 22 | ---
license: other
task_categories:
- text-generation
- text-classification
- image-classification
- image-to-text
- text-to-image
language:
- en
- ar
- zh
tags:
- art
- Affective Captioning
- Emotions
- Emotion Prediction
- Image Captioning
- Multilingual
- Cultural
- Diversity
pretty_name: ArtELingo
size_categories:
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
multilinguality:
- multilingual
source_datasets:
- original
---
# Dataset Card for "ArtELingo"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Configurations](#dataset-configurations)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [artelingo.org/](https://www.artelingo.org/)
- **Repository:** [github.com/Vision-CAIR/artelingo](https://github.com/Vision-CAIR/artelingo)
- **Paper:** [arxiv.org/abs/2211.10780](https://arxiv.org/abs/2211.10780)
- **Point of Contact:** [artelingo.dataset@gmail.com](mailto:artelingo.dataset@gmail.com)
### Dataset Summary
ArtELingo is a benchmark and dataset introduced in a research paper aimed at promoting work on diversity across languages and cultures.
It is an extension of ArtEmis, which is a collection of 80,000 artworks from WikiArt with 450,000 emotion labels and English-only captions.
ArtELingo expands this dataset by adding 790,000 annotations in Arabic and Chinese.
The purpose of these additional annotations is to evaluate the performance of "cultural-transfer" in AI systems.
The goal of ArtELingo is to encourage research on multilinguality and culturally-aware AI.
By including annotations in multiple languages and considering cultural differences,
the dataset aims to build more human-compatible AI that is sensitive to emotional nuances
across various cultural contexts. The researchers believe that studying emotions in this
way is crucial to understanding a significant aspect of human intelligence.
### Supported Tasks and Leaderboards
We have two tasks:
- [Emotion Label Prediction](https://eval.ai/web/challenges/challenge-page/2106/overview)
- [Affective Image Captioning](https://eval.ai/web/challenges/challenge-page/2104/overview)
Both challenges have a leaderboard on Eval.ai. Submission deadlines can be viewed from the above links.
In addition, we are hosting the challenge at the ICCV23 workshop [WECIA](https://iccv23-wecia.github.io/). We have cash prizes for winners.
### Languages
We have 3 languages: English, Arabic, and Chinese. For each image, we have at least 5 captions in each language.
In total we have 80,000 images which are downloaded automatically with the dataset.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Dataset Configurations
We have 4 Configurations:
#### artelingo
- **Size of downloaded dataset files:** 23 GB
- **Splits:** \['train', 'test', 'val'\]
- **Number of Samples per splits:** \[920K, 94.1K, 46.9K\]
- **Loading Script**:
```python
from datasets import load_dataset
dataset = load_dataset(path="youssef101/artelingo", name='artelingo')
```
You can also provide a `splits:LIST(str)` parameter to avoid downloading the huge files for all the splits (especially the train set :)).
```python
from datasets import load_dataset
dataset = load_dataset(path="youssef101/artelingo", name='artelingo', splits=['val'])
```
Note that this makes the `dev` configuration redundant.
#### dev
- **Size of downloaded dataset files:** 3 GB
- **Splits:** \['test', 'val'\]
- **Number of Samples per splits:** \[94.1K, 46.9K\]
- **Loading Script**:
```python
from datasets import load_dataset
dataset = load_dataset(path="youssef101/artelingo", name='dev')
```
#### wecia-emo
Intended for the [WECIA](https://iccv23-wecia.github.io/) emotion prediction challenge. Instances do not have the emotion or language attributes.
- **Size of downloaded dataset files:** 1.2 GB
- **Splits:** \['dev'\]
- **Number of Samples per splits:** \[27.9K\]
- **Loading Script**:
```python
from datasets import load_dataset
dataset = load_dataset(path="youssef101/artelingo", name='wecia-emo')
```
#### wecia-cap
Intended for the [WECIA](https://iccv23-wecia.github.io/) affective caption generation challenge. Instances do not have the `text` field.
- **Size of downloaded dataset files:** 1.2 GB
- **Splits:** \['dev'\]
- **Number of Samples per splits:** \[16.3K\]
- **Loading Script**:
```python
from datasets import load_dataset
dataset = load_dataset(path="youssef101/artelingo", name='wecia-cap')
```
### Data Fields
The data fields are the same among all configs.
- `uid`: a `int32` feature. A unique identifier for each instance.
- `image`: a `PIL.Image` feature. The image of the artwork from the wikiart dataset.
- `art_style`: a `string` feature. The art style of the artwork. Styles are a subset from the [wikiart styles](https://www.wikiart.org/en/paintings-by-style).
- `painting`: a `string` feature. The name of the painting according to the wikiart dataset.
- `emotion`: a `string` feature. The emotion associated with the image caption pair.
- `language`: a `string` feature. The language used to write the caption.
- `text`: a `string` feature. The affective caption that describes the painting under the context of the selected emotion.
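As a small sketch of how the fields above combine, captions can be grouped per painting and language, since each image has at least 5 captions in each language. The `samples` list is illustrative stand-in data, not real annotations.

```python
from collections import defaultdict

# Illustrative stand-in records using the field names listed above.
samples = [
    {"painting": "starry-night", "language": "english", "emotion": "awe",
     "text": "The swirling sky feels alive."},
    {"painting": "starry-night", "language": "arabic", "emotion": "awe",
     "text": "(Arabic caption)"},
]

# Group caption texts by (painting, language).
captions = defaultdict(list)
for s in samples:
    captions[(s["painting"], s["language"])].append(s["text"])

grouped = {key: len(texts) for key, texts in captions.items()}
print(grouped)
```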
## Dataset Creation
### Curation Rationale
ArtELingo is a benchmark and dataset designed to promote research on diversity
across languages and cultures. It builds upon ArtEmis, a collection of 80,000
artworks from WikiArt with 450,000 emotion labels and English-only captions.
ArtELingo extends this dataset by adding 790,000 annotations in Arabic and
Chinese, as well as 4,800 annotations in Spanish, allowing for the evaluation
of "cultural-transfer" performance in AI systems. With many artworks having
multiple annotations in three languages, the dataset enables the investigation
of similarities and differences across linguistic and cultural contexts.
Additionally, ArtELingo explores captioning tasks, demonstrating how diversity
in annotations can improve the performance of baseline AI models. The hope is
that ArtELingo will facilitate future research on multilinguality and
culturally-aware AI. The dataset is publicly available, including standard
splits and baseline models, to support and ease further research in this area.
### Source Data
#### Initial Data Collection and Normalization
ArtELingo uses images from the [wikiart dataset](https://www.wikiart.org/).
The images are mainly artworks since they are created with the intention to
have an emotional impact on the viewer. ArtELingo assumes that WikiArt
is a representative sample of the cultures of interest. While WikiArt
is remarkably comprehensive, it has better coverage of the West than other
regions of the world based on WikiArt’s assignment of artworks to nationalities.
The data was collected via Amazon Mechanical Turk, where only native speakers
were allowed to annotate the images. The English, Arabic, and Chinese subsets were
collected by 6377, 656, and 745 workers respectively. All workers were compensated
with above minimal wage in each respective country.
#### Who are the source language producers?
The data comes from Human annotators who natively speak each respective language.
## Considerations for Using the Data
### Social Impact of Dataset
When using the ArtELingo dataset, researchers and developers must be mindful of
the potential social impact of the data. Emotions, cultural expressions, and
artistic representations can be sensitive topics, and AI systems trained on such
data may have implications on how they perceive and respond to users. It is
crucial to ensure that the dataset's usage does not perpetuate stereotypes or
biases related to specific cultures or languages. Ethical considerations should
be taken into account during the development and deployment of AI models trained
on ArtELingo to avoid any harmful consequences on individuals or communities.
### Discussion of Biases
ArtELingo was filtered against hate speech, racism, and obvious stereotypes.
However, like any dataset, ArtELingo may contain inherent biases that could
influence the performance and behavior of AI systems. These biases could
arise from various sources, such as cultural differences in emotional
interpretations, variations in annotator perspectives, or imbalances in
the distribution of annotations across languages and cultures. Researchers
should be cautious about potential biases that might impact the dataset's
outcomes and address them appropriately. Transparently discussing and
documenting these biases is essential to facilitate a fair understanding of the
dataset's limitations and potential areas of improvement.
## Additional Information
### Dataset Curators
The corpus was put together by [Youssef Mohamed](https://cemse.kaust.edu.sa/people/person/youssef-s-mohamed),
[Mohamed Abdelfattah](https://people.epfl.ch/mohamed.abdelfattah/?lang=en),
[Shyma Alhuwaider](https://cemse.kaust.edu.sa/aanslab/people/person/shyma-y-alhuwaider),
[Feifan Li](https://www.linkedin.com/in/feifan-li-3280a6249/),
[Xiangliang Zhang](https://engineering.nd.edu/faculty/xiangliang-zhang/),
[Kenneth Ward Church](https://www.khoury.northeastern.edu/people/kenneth-church/)
and [Mohamed Elhoseiny](https://cemse.kaust.edu.sa/people/person/mohamed-elhoseiny).
### Licensing Information
Terms of Use: Before we are able to offer you access to the database,
please agree to the following terms of use. After approval, you (the 'Researcher')
receive permission to use the ArtELingo database (the 'Database') at King Abdullah
University of Science and Technology (KAUST). In exchange for being able to join the
ArtELingo community and receive such permission, Researcher hereby agrees to the
following terms and conditions: [1.] The Researcher shall use the Database only for
non-commercial research and educational purposes. [2.] The Universities make no
representations or warranties regarding the Database, including but not limited to
warranties of non-infringement or fitness for a particular purpose. [3.] Researcher
accepts full responsibility for his or her use of the Database and shall defend and
indemnify the Universities, including their employees, Trustees, officers and agents,
against any and all claims arising from Researcher's use of the Database, and
Researcher's use of any copies of copyrighted 2D artworks originally uploaded to
http://www.wikiart.org that the Researcher may use in connection with the Database.
[4.] Researcher may provide research associates and colleagues with access to the
Database provided that they first agree to be bound by these terms and conditions.
[5.] The Universities reserve the right to terminate Researcher's access to the Database
at any time. [6.] If Researcher is employed by a for-profit, commercial entity,
Researcher's employer shall also be bound by these terms and conditions, and Researcher
hereby represents that he or she is fully authorized to enter into this agreement on
behalf of such employer. [7.] The international copyright laws shall apply to all
disputes under this agreement.
### Citation Information
```
@inproceedings{mohamed2022artelingo,
title={ArtELingo: A Million Emotion Annotations of WikiArt with Emphasis on Diversity over Language and Culture},
author={Mohamed, Youssef and Abdelfattah, Mohamed and Alhuwaider, Shyma and Li, Feifan and Zhang, Xiangliang and Church, Kenneth and Elhoseiny, Mohamed},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
pages={8770--8785},
year={2022}
}
```
### Contributions
Thanks to [@youssef101](https://github.com/Mo-youssef) for adding this dataset and to [@Faizan](https://faixan-khan.github.io/) for testing. |
gwlms/germeval2018 | 2023-07-26T11:05:10.000Z | [
"task_categories:text-classification",
"language:de",
"license:cc-by-4.0",
"region:us"
] | gwlms | # Task Description
Participants were allowed to participate in one or
both tasks and submit at most three runs per task.
## Task 1: Coarse-grained Binary Classification
Task 1 was to decide whether a tweet includes some
form of offensive language or not. The tweets had
to be classified into the two classes OFFENSE and
OTHER. The OFFENSE category covered abusive
language, insults, as well as merely profane statements.
## Task 2: Fine-grained 4-way Classification
The second task involved four categories: the non-offensive OTHER class and three sub-categories of what counts as OFFENSE in
Task 1. In the case of PROFANITY, profane words are used, however, the tweet does not want to insult anyone. This
typically concerns the usage of swearwords (Scheiße, Fuck etc.) and cursing (Zur Hölle! Verdammt! etc.). This can be
often found in youth language. Swearwords and cursing may, but need not, co-occur with insults or abusive speech.
Profane language may in fact be used in tweets with positive sentiment to express emphasis. Whenever profane words are
not directed towards a specific person or group of persons and there are no separate cues of INSULT or ABUSE, then
tweets are labeled as simple cases of PROFANITY.
In the case of INSULT, unlike PROFANITY, the tweet clearly wants to offend someone. INSULT is the ascription of
negatively evaluated qualities or deficiencies or the labeling of persons as unworthy (in some sense) or unvalued.
Insults convey disrespect and contempt. Whether an utterance is an insult usually depends on the community in which it
is made, on the social context (ongoing activity etc.) in which it is made, and on the linguistic means that are used
(which have to be found to be conventional means whose assessment as insulting are intersubjectively reasonably
stable).
And finally, in the case of ABUSE, the tweet does not just insult a person but represents the stronger form of abusive
language. By abuse we define a special type of degradation. This type of degrading consists in ascribing a social
identity to a person that is judged negatively by a (perceived) majority of society. The identity in question is seen
as a shameful, unworthy, morally objectionable or marginal identity. In contrast to insults, instances of abusive
language require that the target of judgment is seen as a representative of a group and it is ascribed negative
qualities that are taken to be universal, omnipresent and unchangeable characteristics of the group. (This part of the
definition largely coincides with what is referred to as abusive speech in other research.) Aside from the cases where
people are degraded based on their membership in some group, we also classify it as abusive language when
dehumanization is employed even just towards an individual (i.e. describing a person as scum or vermin etc.). | @incollection{WiegandSiegelRuppenhofer2019,
author = {Michael Wiegand and Melanie Siegel and Josef Ruppenhofer},
title = {Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language},
series = {Proceedings of GermEval 2018, 14th Conference on Natural Language Processing (KONVENS 2018), Vienna, Austria – September 21, 2018},
editor = {Josef Ruppenhofer and Melanie Siegel and Michael Wiegand},
publisher = {Austrian Academy of Sciences},
address = {Vienna, Austria},
isbn = {978-3-7001-8435-5},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-84935},
pages = {1 -- 10},
year = {2019},
abstract = {We present the pilot edition of the GermEval Shared Task on the Identification of Offensive Language. This shared task deals with the classification of German tweets from Twitter. It comprises two tasks, a coarse-grained binary classification task and a fine-grained multi-class classification task. The shared task had 20 participants submitting 51 runs for the coarse-grained task and 25 runs for the fine-grained task. Since this is a pilot task, we describe the process of extracting the raw-data for the data collection and the annotation schema. We evaluate the results of the systems submitted to the shared task. The shared task homepage can be found at https://projects.cai.fbi.h-da.de/iggsa/},
language = {en}
} | null | 0 | 22 | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: coarse-grained
dtype: string
- name: fine-grained
dtype: string
config_name: germeval2018
splits:
- name: train
num_bytes: 840593
num_examples: 5009
- name: test
num_bytes: 519146
num_examples: 3532
download_size: 1282870
dataset_size: 1359739
task_categories:
- text-classification
language:
- de
--- |
zhengxuanzenwu/ms-macro-wellformed_only | 2023-07-26T22:50:25.000Z | [
"region:us"
] | zhengxuanzenwu | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: answers
sequence: string
- name: passages
sequence:
- name: is_selected
dtype: int32
- name: passage_text
dtype: string
- name: url
dtype: string
- name: query
dtype: string
- name: query_id
dtype: int32
- name: query_type
dtype: string
- name: wellFormedAnswers
sequence: string
splits:
- name: train
num_bytes: 658216533.1439316
num_examples: 153725
- name: test
num_bytes: 51026409.399810076
num_examples: 12467
download_size: 355892442
dataset_size: 709242942.5437417
---
# Dataset Card for "ms-macro-wellformed_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_01 | 2023-08-01T17:59:28.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 15176523.9300594
num_examples: 28224
download_size: 9592708
dataset_size: 15176523.9300594
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PL-MTEB/polemo2_in | 2023-08-11T12:40:43.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 22 | ---
license: cc-by-nc-sa-4.0
---
|
PL-MTEB/polemo2_out | 2023-08-11T12:42:58.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 22 | ---
license: cc-by-nc-sa-4.0
---
|
PL-MTEB/allegro-reviews | 2023-08-11T13:11:54.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 22 | ---
license: cc-by-sa-4.0
---
|
imvladikon/QAmeleon | 2023-08-13T19:36:48.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:ar",
"language:bn",
"language:fi",
"language:id",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"license:cc-by-4.0",
"arxiv:2211.08264",
"region:us"
] | imvladikon | null | null | null | 0 | 22 | ---
language:
- ar
- bn
- fi
- id
- ko
- ru
- sw
- te
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
dataset_info:
- config_name: ar
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 4773335
num_examples: 6966
download_size: 0
dataset_size: 4773335
- config_name: bn
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 6458441
num_examples: 6084
download_size: 0
dataset_size: 6458441
- config_name: default
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 32190633
num_examples: 47173
download_size: 16811173
dataset_size: 32190633
- config_name: fi
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2158030
num_examples: 5028
download_size: 0
dataset_size: 2158030
- config_name: id
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2635540
num_examples: 6797
download_size: 0
dataset_size: 2635540
- config_name: ko
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5074624
num_examples: 6471
download_size: 0
dataset_size: 5074624
- config_name: ru
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 3952632
num_examples: 5557
download_size: 0
dataset_size: 3952632
- config_name: sw
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2113909
num_examples: 5597
download_size: 0
dataset_size: 2113909
- config_name: te
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5024122
num_examples: 4673
download_size: 0
dataset_size: 5024122
configs:
- config_name: ar
data_files:
- split: train
path: ar/train-*
- config_name: bn
data_files:
- split: train
path: bn/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: fi
data_files:
- split: train
path: fi/train-*
- config_name: id
data_files:
- split: train
path: id/train-*
- config_name: ko
data_files:
- split: train
path: ko/train-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- config_name: sw
data_files:
- split: train
path: sw/train-*
- config_name: te
data_files:
- split: train
path: te/train-*
---
# Dataset Card for "QAmeleon"
QAmeleon introduces synthetic multilingual QA data in 8 languages, generated using PaLM-540B, a large language model. The dataset was produced by prompt tuning PaLM with only five examples per language. The synthetic data is used to fine-tune downstream QA models, leading to improved accuracy compared to English-only and translation-based baselines.
Data available at https://storage.googleapis.com/qameleon/qamelon_pt_accepted.csv
More details can be found in the paper [QAmeleon: Multilingual QA with Only 5 Examples](https://arxiv.org/abs/2211.08264), which can be cited as follows:
```
@misc{agrawal2022qameleon,
title={QAmeleon: Multilingual QA with Only 5 Examples},
author={Priyanka Agrawal and Chris Alberti and Fantine Huot and Joshua Maynez and Ji Ma and Sebastian Ruder and Kuzman Ganchev and Dipanjan Das and Mirella Lapata},
year={2022},
eprint={2211.08264},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This dataset contains a total of 47173 question-answer instances across 8 languages; the per-language counts are as follows.
|Language | Count |
|---------|------:|
|ar |6966 |
|bn |6084 |
|fi |5028 |
|id |6797 |
|ko |6471 |
|ru |5557 |
|sw |5597 |
|te |4673 |
|**Total** |**47173**|
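The per-language counts in the table above can be sanity-checked against the reported total; a minimal sketch (the counts are copied from the table, not read from the dataset itself):

```python
# Per-language example counts as reported in the QAmeleon card.
counts = {"ar": 6966, "bn": 6084, "fi": 5028, "id": 6797,
          "ko": 6471, "ru": 5557, "sw": 5597, "te": 4673}

# The individual language splits should sum to the stated total of 47173.
total = sum(counts.values())
print(total)  # 47173
```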
The QAmeleon dataset is released under the [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitness/logits-maltese-512 | 2023-09-21T19:30:04.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: teacher_logits
sequence:
sequence: float64
- name: teacher_indices
sequence:
sequence: int64
- name: teacher_mask_indices
sequence: int64
splits:
- name: train
num_bytes: 230200052
num_examples: 12655
download_size: 84312982
dataset_size: 230200052
---
# Dataset Card for "logits-maltese-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
realzdlegend/breast_cancer_xray | 2023-08-15T20:13:18.000Z | [
"task_categories:image-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"medical",
"region:us"
] | realzdlegend | null | null | null | 0 | 22 | ---
license: cc
task_categories:
- image-classification
language:
- en
tags:
- medical
pretty_name: breast_xray
size_categories:
- n<1K
configs:
- config_name: realzdlegend--breast_cancer_xray
--- |
SamiA1234/datasetEdited.txt | 2023-09-02T15:42:22.000Z | [
"license:wtfpl",
"region:us"
] | SamiA1234 | null | null | null | 0 | 22 | ---
license: wtfpl
---
|
Kris8an/Llama_2_wring | 2023-09-24T21:16:09.000Z | [
"task_categories:question-answering",
"language:en",
"code",
"region:us"
] | Kris8an | null | null | null | 0 | 22 | ---
task_categories:
- question-answering
language:
- en
tags:
- code
--- |
Suchinthana/Databricks-Dolly-15k-si-en-mix | 2023-10-02T14:30:04.000Z | [
"language:si",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | Suchinthana | null | null | null | 0 | 22 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 41110595
num_examples: 30022
download_size: 20098720
dataset_size: 41110595
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- si
- en
--- |
dongyoung4091/shp-generated_flan_t5_large_external_rm1_large | 2023-09-10T04:50:46.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: external_rm1
dtype: float64
splits:
- name: train
num_bytes: 27036265
num_examples: 25600
download_size: 1846172
dataset_size: 27036265
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "shp-generated_flan_t5_large_external_rm1_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pietrolesci/eurlex-57k | 2023-09-11T14:32:11.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: validation
path: embedding_all-MiniLM-L12-v2/validation-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: validation
path: embedding_all-mpnet-base-v2/validation-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: validation
path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
- config_name: eurovoc_concepts
data_files:
- split: train
path: eurovoc_concepts/train-*
dataset_info:
- config_name: default
features:
- name: celex_id
dtype: string
- name: document_type
dtype: string
- name: title
dtype: string
- name: header
dtype: string
- name: recitals
dtype: string
- name: main_body
sequence: string
- name: eurovoc_concepts
sequence: string
- name: text
dtype: string
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 269684150
num_examples: 45000
- name: validation
num_bytes: 35266624
num_examples: 6000
- name: test
num_bytes: 35621361
num_examples: 6000
download_size: 0
dataset_size: 340572135
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 69660000
num_examples: 45000
- name: validation
num_bytes: 9288000
num_examples: 6000
- name: test
num_bytes: 9288000
num_examples: 6000
download_size: 123441408
dataset_size: 88236000
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 138780000
num_examples: 45000
- name: validation
num_bytes: 18504000
num_examples: 6000
- name: test
num_bytes: 18504000
num_examples: 6000
download_size: 211031101
dataset_size: 175788000
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 138780000
num_examples: 45000
- name: validation
num_bytes: 18504000
num_examples: 6000
- name: test
num_bytes: 18504000
num_examples: 6000
download_size: 211029593
dataset_size: 175788000
- config_name: eurovoc_concepts
features:
- name: concept_id
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 205049
num_examples: 7201
download_size: 157326
dataset_size: 205049
---
# Dataset Card for "eurlex-57k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Photolens/DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText | 2023-09-11T16:03:39.000Z | [
"region:us"
] | Photolens | null | null | null | 2 | 22 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23407332
num_examples: 51332
download_size: 9565869
dataset_size: 23407332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted-merged-with-MedText"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/704dc3cf | 2023-09-12T11:24:46.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "704dc3cf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/SkunkData-002-2-convid-cluster | 2023-09-15T02:05:53.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: unique_conversation_id
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 89257780
num_examples: 1472917
download_size: 17951475
dataset_size: 89257780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SkunkData-002-2-convid-cluster"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Otter-AI/MMBench | 2023-10-08T14:23:37.000Z | [
"license:apache-2.0",
"region:us"
] | Otter-AI | MMBench is collected from multiple sources, including public datasets and the Internet, and currently contains 2974 multiple-choice questions covering 20 ability dimensions. We structure the existing 20 ability dimensions into 3 ability dimension levels, from L-1 to L-3. We incorporate Perception and Reasoning as the top-level ability dimensions in our ability taxonomy, referred to as the L-1 ability dimensions. For L-2 abilities, we derive: 1. Coarse Perception, 2. Fine-grained Single-instance Perception, 3. Fine-grained Cross-instance Perception from L-1 Perception; and 1. Attribute Reasoning, 2. Relation Reasoning, 3. Logic Reasoning from L-1 Reasoning. To make our benchmark as fine-grained as possible and to produce informative feedback for developing multimodal models, we further derive L-3 ability dimensions from the L-2 ones. To the best of our knowledge, MMBench is the first large-scale multimodal evaluation dataset covering so many ability dimensions. | @article{MMBench,
author = {Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin},
journal = {arXiv:2307.06281},
title = {MMBench: Is Your Multi-modal Model an All-around Player?},
year = {2023},
} | null | 1 | 22 | ---
license: apache-2.0
---
|
Solaren/midj-clean | 2023-09-15T15:48:21.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | Solaren | null | null | null | 0 | 22 | ---
license: creativeml-openrail-m
---
|
YL95/FXFpML | 2023-09-15T15:35:08.000Z | [
"region:us"
] | YL95 | null | null | null | 0 | 22 | Entry not found |
dominguesm/CC-MAIN-2023-23 | 2023-09-17T00:02:06.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:10B<n<100B",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | dominguesm | null | null | null | 1 | 22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_timestamp
dtype: timestamp[ns, tz=UTC]
splits:
- name: train
num_bytes: 97584560119
num_examples: 16899389
download_size: 18490153155
dataset_size: 97584560119
license: cc-by-4.0
task_categories:
- text-generation
- fill-mask
language:
- pt
pretty_name: CC-MAIN-2023-23-PT
size_categories:
- 10B<n<100B
---
# Dataset Card for "CC-MAIN-2023-23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jmelsbach/real-estate-instructions-small | 2023-09-17T17:57:59.000Z | [
"region:us"
] | jmelsbach | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 951120
num_examples: 500
download_size: 469994
dataset_size: 951120
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "real-estate-instructions-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Coconuty/FairyTale003 | 2023-09-18T15:37:33.000Z | [
"region:us"
] | Coconuty | null | null | null | 0 | 22 | Entry not found |
TinyPixel/elm | 2023-10-06T04:29:39.000Z | [
"region:us"
] | TinyPixel | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2577268
num_examples: 1073
download_size: 1393303
dataset_size: 2577268
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "elm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/c4_counterfactual_3 | 2023-09-19T03:38:25.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 22 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3490614.435
num_examples: 985
download_size: 2246810
dataset_size: 3490614.435
---
# Dataset Card for "c4_counterfactual_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jeanai4/legoset | 2023-09-20T09:21:27.000Z | [
"region:us"
] | jeanai4 | null | null | null | 0 | 22 | Entry not found |
liyucheng/allsides | 2023-09-21T22:01:54.000Z | [
"region:us"
] | liyucheng | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: topic
dtype: string
- name: camp
dtype: string
- name: full_stories
dtype: string
- name: articles
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4499065
num_examples: 987
download_size: 2363071
dataset_size: 4499065
---
# Dataset Card for "allsides"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/family_lifestyle_photography | 2023-09-21T07:22:22.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1039539
num_examples: 10000
download_size: 22749
dataset_size: 1039539
---
# Dataset Card for "family_lifestyle_photography"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rohdimp24/bizomData | 2023-09-25T04:36:34.000Z | [
"region:us"
] | rohdimp24 | null | null | null | 0 | 22 | Entry not found |
sankettgorey/donut_4 | 2023-09-24T17:17:44.000Z | [
"region:us"
] | sankettgorey | null | null | null | 0 | 22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 168398938.91680533
num_examples: 540
- name: validation
num_bytes: 8579406.106489185
num_examples: 30
- name: test
num_bytes: 9378162.976705492
num_examples: 31
download_size: 13900408
dataset_size: 186356508.0
---
# Dataset Card for "donut_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/bugurt_completion_prompts_8k | 2023-09-25T15:39:49.000Z | [
"region:us"
] | dim | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: bugurt
dtype: string
splits:
- name: train
num_bytes: 9139097
num_examples: 8360
download_size: 4667499
dataset_size: 9139097
---
# Dataset Card for "bugurt_completion_prompts_8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/bus_few4_50x_pvi | 2023-09-26T20:31:19.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 431503
num_examples: 1750
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 54596
dataset_size: 509021
---
# Dataset Card for "bus_few4_50x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abdullahmeda/bond | 2023-09-26T07:40:48.000Z | [
"region:us"
] | abdullahmeda | null | null | null | 0 | 22 | Entry not found |
DanArnin/Hinglish2 | 2023-09-27T05:24:38.000Z | [
"region:us"
] | DanArnin | null | null | null | 0 | 22 | Entry not found |
Photolens/airoboros-2.1-no-code | 2023-09-30T19:41:56.000Z | [
"license:apache-2.0",
"region:us"
] | Photolens | null | null | null | 1 | 22 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68529633
num_examples: 36306
download_size: 33187086
dataset_size: 68529633
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
--- |
ashiyakatuka11/corpusGen_dataset | 2023-10-03T12:01:25.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | null | 0 | 22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: ' Session ID '
dtype: int64
- name: ' Speaker '
dtype: string
- name: ' Utterance_clean'
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: Utterance
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 251021
num_examples: 1017
- name: test
num_bytes: 64519
num_examples: 255
download_size: 143048
dataset_size: 315540
---
# Dataset Card for "corpusGen_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Akash092003/ABSA-alpaca-SemEval2014Task4 | 2023-09-28T18:25:38.000Z | [
"size_categories:1K<n<10K",
"language:en",
"absa",
"region:us"
] | Akash092003 | null | null | null | 0 | 22 | ---
language:
- en
pretty_name: ABSA
size_categories:
- 1K<n<10K
configs:
- config_name: laptops
data_files:
- split: train
path: laptops/train.json
- split: test
path: laptops/test.json
- split: trial
path: laptops/trial.json
- config_name: restaurants
data_files:
- split: train
path: restaurants/train.json
- split: test
path: restaurants/test.json
- split: trial
path: restaurants/trial.json
tags:
- absa
--- |
japanese-denim/naga-eng | 2023-09-29T01:36:09.000Z | [
"license:mit",
"region:us"
] | japanese-denim | null | null | null | 0 | 22 | ---
license: mit
---
|
AlekseyKorshuk/rl-bench-test | 2023-10-03T18:14:07.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: user_name
dtype: string
- name: bot_name
dtype: string
- name: memory
dtype: string
- name: prompt
dtype: string
- name: chat_history
list:
- name: message
dtype: string
- name: sender
dtype: string
splits:
- name: train
num_bytes: 1657185
num_examples: 240
download_size: 491605
dataset_size: 1657185
---
# Dataset Card for "rl-bench-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Rithik28/TM_Dataset | 2023-10-05T11:22:36.000Z | [
"region:us"
] | Rithik28 | null | null | null | 0 | 22 | Entry not found |
Yang-hugging-face-2023/llama2-refining-1 | 2023-10-02T14:57:26.000Z | [
"region:us"
] | Yang-hugging-face-2023 | null | null | null | 0 | 22 | |
sasakits/dhoi | 2023-10-02T02:31:54.000Z | [
"license:mit",
"region:us"
] | sasakits | null | null | null | 0 | 22 | ---
license: mit
---
|
cmalaviya/expertqa | 2023-10-07T05:07:10.000Z | [
"task_categories:question-answering",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2309.07852",
"region:us"
] | cmalaviya | null | null | null | 7 | 22 | ---
configs:
- config_name: main
data_files: r2_compiled_anon_fixed.jsonl
- config_name: lfqa_random
data_files:
- split: train
path: rand_lfqa_train.json
- split: test
path: rand_lfqa_test.json
- split: validation
path: rand_lfqa_val.json
- config_name: lfqa_domain
data_files:
- split: train
path: domain_lfqa_train.json
- split: test
path: domain_lfqa_test.json
- split: validation
path: domain_lfqa_val.json
license: mit
task_categories:
- question-answering
language:
- en
source_datasets:
- original
pretty_name: ExpertQA
annotations_creators:
- expert-generated
size_categories:
- 1K<n<10K
---
# Dataset Card for ExpertQA
## Dataset Description
- **Repository: https://github.com/chaitanyamalaviya/ExpertQA**
- **Paper: https://arxiv.org/pdf/2309.07852**
- **Point of Contact: chaitanyamalaviya@gmail.com**
### Dataset Summary
We provide here the data accompanying the paper: [ExpertQA: Expert-Curated Questions and Attributed Answers](https://arxiv.org/pdf/2309.07852). The ExpertQA dataset contains 2177 examples from 32 different fields.
### Supported Tasks
The `main` data contains 2177 examples that can be used to evaluate new methods for estimating factuality and attribution, while the `lfqa_domain` and `lfqa_random` data can be used to evaluate long-form question answering systems.
## Dataset Creation
### Curation Rationale
ExpertQA was created to evaluate factuality & attribution in language model responses to domain-specific questions, as well as evaluate long-form question answering in domain-specific settings.
### Annotation Process
Questions in ExpertQA were formulated by experts spanning 32 fields. The answers to these questions are model-generated and expert-verified. Each claim-evidence pair in an answer is judged by experts for various properties such as the claim’s informativeness, factuality, citeworthiness, whether the claim is supported by the evidence, and reliability of the evidence source. Further, experts revise the original claims to ensure they are factual and supported by trustworthy sources.
## Dataset Structure
### Data Instances
We provide the main data, with judgements of factuality and attribution, under the `main` subset.
The long-form QA data splits are provided under `lfqa_domain` (domain split) and `lfqa_random` (random split).
Additional files are provided in our [GitHub repo](https://github.com/chaitanyamalaviya/ExpertQA).
### Data Fields
The main data file contains newline-separated json dictionaries with the following fields:
* `question` - Question written by an expert.
* `annotator_id` - Anonymized annotator ID of the author of the question.
* `answers` - Dict mapping model names to an Answer object. The model names can be one of `{gpt4, bing_chat, rr_sphere_gpt4, rr_gs_gpt4, post_hoc_sphere_gpt4, post_hoc_gs_gpt4}`.
* `metadata` - A dictionary with the following fields:
* `question_type` - The question type(s) separated by "|".
* `field` - The field to which the annotator belonged.
* `specific_field` - More specific field name within the broader field.
Each Answer object contains the following fields:
* `answer_string`: The answer string.
* `attribution`: List of evidences for the answer (not linked to specific claims). Note that these are only URLs, the evidence passages are stored in the Claim object -- see below.
* `claims`: List of Claim objects for the answer.
* `revised_answer_string`: Revised answer by annotator.
* `usefulness`: Usefulness of original answer marked by annotator.
* `annotation_time`: Time taken for annotating this answer.
* `annotator_id`: Anonymized annotator ID of the person who validated this answer.
Each Claim object contains the following fields:
* `claim_string`: Original claim string.
* `evidence`: List of evidences for the claim (URL+passage or URL).
* `support`: Attribution marked by annotator.
* `reason_missing_support`: Reason for missing support specified by annotator.
* `informativeness`: Informativeness of claim for the question, marked by annotator.
* `worthiness`: Worthiness of citing claim marked by annotator.
* `correctness`: Factual correctness of claim marked by annotator.
* `reliability`: Reliability of source evidence marked by annotator.
* `revised_claim`: Revised claim by annotator.
* `revised_evidence`: Revised evidence by annotator.
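Since the main file is newline-separated JSON, one record per line, the Answer and Claim structures above can be walked with the standard library alone. A minimal sketch — the record below is a hypothetical illustration that only mirrors the field names documented above, not real ExpertQA content:

```python
import json

# Hypothetical record following the documented schema (illustrative values only).
record_line = json.dumps({
    "question": "What are the long-term effects of X?",
    "annotator_id": "anon_01",
    "metadata": {"question_type": "advice", "field": "Healthcare",
                 "specific_field": "Cardiology"},
    "answers": {
        "gpt4": {
            "answer_string": "X has several documented effects ...",
            "attribution": ["https://example.org/evidence"],
            "claims": [
                {"claim_string": "X is associated with effect Y.",
                 "support": "Complete"},
            ],
        }
    },
})

# Parse one line and collect (model, claim_string, support) triples
# from every Claim object nested under each Answer object.
example = json.loads(record_line)
claims = [
    (model, claim["claim_string"], claim.get("support"))
    for model, answer in example["answers"].items()
    for claim in answer.get("claims", [])
]
print(claims)
```

In practice the same loop would run over each line of `r2_compiled_anon_fixed.jsonl`.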
### Citation Information
```
@inproceedings{malaviya23expertqa,
title = {ExpertQA: Expert-Curated Questions and Attributed Answers},
author = {Chaitanya Malaviya and Subin Lee and Sihao Chen and Elizabeth Sieber and Mark Yatskar and Dan Roth},
booktitle = {arXiv},
month = {September},
year = {2023},
url = "https://arxiv.org/abs/2309.07852"
}
```
|
AayushShah/SQL_Merged_IDs_and_Text | 2023-10-05T06:26:42.000Z | [
"region:us"
] | AayushShah | null | null | null | 1 | 22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: NATURAL_LANG
dtype: string
- name: SCHEMA
dtype: string
- name: SQL
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1089459820.9581463
num_examples: 270986
- name: test
num_bytes: 121052878.04185376
num_examples: 30110
download_size: 101851785
dataset_size: 1210512699.0
---
# Dataset Card for "SQL_Merged_IDs_and_Text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
librarian-bots/dataset_abstracts | 2023-10-05T11:17:37.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"arxiv ",
"region:us"
] | librarian-bots | null | null | null | 0 | 22 | ---
dataset_info:
- config_name: annotated
features:
- name: text
dtype: string
- name: inputs
struct:
- name: abstract
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: vectors
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: metrics
struct:
- name: text_length
dtype: int64
- name: label
dtype:
class_label:
names:
'0': new_dataset
'1': no_new_dataset
splits:
- name: train
num_bytes: 302943.5751633987
num_examples: 107
- name: test
num_bytes: 130237.4248366013
num_examples: 46
download_size: 287816
dataset_size: 433181
- config_name: unlabelled
features:
- name: text
dtype: string
- name: inputs
struct:
- name: abstract
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: vectors
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: metrics
struct:
- name: text_length
dtype: int64
- name: label
dtype: string
splits:
- name: train
num_bytes: 2336582.479
num_examples: 847
download_size: 1341049
dataset_size: 2336582.479
configs:
- config_name: annotated
data_files:
- split: train
path: annotated/train-*
- split: test
path: annotated/test-*
- config_name: unlabelled
data_files:
- split: train
path: unlabelled/train-*
task_categories:
- text-classification
language:
- en
tags:
- 'arxiv '
size_categories:
- n<1K
---
# Dataset Card for "dataset_abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChaiML/seasonIII_chatAI_configurations | 2023-10-08T01:12:33.000Z | [
"region:us"
] | ChaiML | null | null | null | 0 | 22 | ---
dataset_info:
features:
- name: bot_id
dtype: string
- name: bot_label
dtype: string
- name: prompt
dtype: string
- name: memory
dtype: string
- name: first_message
dtype: string
splits:
- name: train
num_bytes: 35131193
num_examples: 35321
download_size: 23268076
dataset_size: 35131193
---
# Dataset Card for "seasonIII_chatAI_configurations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cdsc | 2023-01-25T14:27:43.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-nc-sa-4.0",
"sentences entailment and relatedness",
"region:us"
] | null | Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. | @inproceedings{wroblewska2017polish,
title={Polish evaluation dataset for compositional distributional semantics models},
author={Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna},
booktitle={Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={784--792},
year={2017}
} | null | 0 | 21 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: polish-cdscorpus
pretty_name: Polish CDSCorpus
tags:
- sentences entailment and relatedness
dataset_info:
- config_name: cdsc-e
features:
- name: pair_ID
dtype: int32
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: entailment_judgment
dtype:
class_label:
names:
'0': NEUTRAL
'1': CONTRADICTION
'2': ENTAILMENT
splits:
- name: train
num_bytes: 1381902
num_examples: 8000
- name: test
num_bytes: 179400
num_examples: 1000
- name: validation
num_bytes: 174662
num_examples: 1000
download_size: 376079
dataset_size: 1735964
- config_name: cdsc-r
features:
- name: pair_ID
dtype: int32
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: relatedness_score
dtype: float32
splits:
- name: train
num_bytes: 1349902
num_examples: 8000
- name: test
num_bytes: 175400
num_examples: 1000
- name: validation
num_bytes: 170662
num_examples: 1000
download_size: 381525
dataset_size: 1695964
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://zil.ipipan.waw.pl/Scwad/CDSCorpus
- **Repository:**
- **Paper:**
@inproceedings{wroblewska2017polish,
title={Polish evaluation dataset for compositional distributional semantics models},
author={Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna},
booktitle={Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={784--792},
year={2017}
}
- **Leaderboard:**
https://klejbenchmark.com/leaderboard/
- **Point of Contact:**
alina@ipipan.waw.pl
### Dataset Summary
Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- pair_ID: id of the sentence pair
- sentence_A: first sentence
- sentence_B: second sentence
for the cdsc-e config:
- entailment_judgment: one of 'NEUTRAL', 'CONTRADICTION', 'ENTAILMENT'
for the cdsc-r config:
- relatedness_score: float representing the semantic relatedness of the sentence pair
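The integer-to-name encoding of `entailment_judgment` follows the ClassLabel mapping declared in the YAML metadata above ('0': NEUTRAL, '1': CONTRADICTION, '2': ENTAILMENT). A minimal sketch of decoding a record in the cdsc-e schema — the example sentences are hypothetical, and the commented-out `load_dataset` call is the standard Hugging Face way to fetch the data (it requires network access):

```python
# ClassLabel names for cdsc-e's entailment_judgment field,
# taken from the dataset metadata above.
ENTAILMENT_NAMES = ["NEUTRAL", "CONTRADICTION", "ENTAILMENT"]

def decode_label(label_id: int) -> str:
    """Map an integer entailment_judgment value to its string name."""
    return ENTAILMENT_NAMES[label_id]

# A hypothetical record in the cdsc-e schema (fields as documented above).
# With the `datasets` library this would come from:
#   from datasets import load_dataset
#   ds = load_dataset("cdsc", "cdsc-e")
example = {
    "pair_ID": 1,
    "sentence_A": "Chłopiec gra na gitarze.",
    "sentence_B": "Dziecko gra na instrumencie.",
    "entailment_judgment": 2,
}

print(decode_label(example["entailment_judgment"]))  # ENTAILMENT
```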
### Data Splits
Data is split into train/validation/test sets.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. |