id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
sanskrit_classic | 2022-11-03T16:07:56.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:sa",... | null | This dataset combines some of the classical Sanskrit texts. | @Misc{johnsonetal2014,
author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook},
title = {CLTK: The Classical Language Toolkit},
url = {https://github.com/cltk/cltk},
year = {2014--2020},
} | 2 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- sa
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: SanskritClassic
dataset_info:
features:
- name: text
dtype: string
config_name: combined
splits:
- name: train
num_bytes: 40299787
num_examples: 342033
download_size: 7258904
dataset_size: 40299787
---
# Dataset Card for SanskritClassic
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [sanskrit_classic](https://github.com/parmarsuraj99/hf_datasets/tree/master/sanskrit_classic)
- **Repository:** [GitHub](https://github.com/parmarsuraj99/hf_datasets/tree/master/sanskrit_classic)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [parmarsuraj99](mailto:parmarsuraj99@gmail.com)
### Dataset Summary
A collection of classical Sanskrit texts.
### Supported Tasks and Leaderboards
Language modeling
### Languages
Sanskrit
## Dataset Structure
### Data Instances
{'text': 'मा कर्मफलहेतुर्भूर्मा ते सङ्गोऽस्त्वकर्मणि॥'}
### Data Fields
`text`: a single line of Sanskrit text
### Data Splits
| | Train |
|-------------------|--------|
| n_instances | 342033 |
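For a quick sanity check, the split can be loaded with the Hugging Face `datasets` library (a minimal sketch, assuming the dataset resolves under the `sanskrit_classic` identifier):
```python
from datasets import load_dataset

# Only a single "combined" configuration with a train split exists.
ds = load_dataset("sanskrit_classic", "combined", split="train")

print(ds.num_rows)    # expected: 342033
print(ds[0]["text"])  # one line of classical Sanskrit text
```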
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@Misc{johnsonetal2014,
author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook},
title = {CLTK: The Classical Language Toolkit},
url = {https://github.com/cltk/cltk},
year = {2014--2020},
}
```
### Contributions
Thanks to [@parmarsuraj99](https://github.com/parmarsuraj99) for adding this dataset. | 3,487 | [
[
-0.0219879150390625,
-0.040313720703125,
-0.0060272216796875,
0.0156707763671875,
-0.032867431640625,
0.01480865478515625,
-0.0447998046875,
-0.0208740234375,
0.04156494140625,
0.0221099853515625,
-0.05010986328125,
-0.07830810546875,
-0.038818359375,
0.0117... |
sede | 2022-11-18T21:44:41.000Z | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2106.05006",
"arxiv:2005.02539",
"re... | null | SEDE (Stack Exchange Data Explorer) is new dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their
natural language description. It's based on a real usage of users from the Stack Exchange Data Explorer platform,
which brings complexities and challenges never seen before in any other semantic parsing dataset like
including complex nesting, dates manipulation, numeric and text manipulation, parameters, and most
importantly: under-specification and hidden-assumptions.
Paper (NLP4Prog workshop at ACL2021): https://arxiv.org/abs/2106.05006 | @misc{hazoom2021texttosql,
title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data},
author={Moshe Hazoom and Vibhor Malik and Ben Bogin},
year={2021},
eprint={2106.05006},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 3 | 85 | 2022-03-02T23:29:22 | ---
pretty_name: SEDE (Stack Exchange Data Explorer)
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
paperswithcode_id: sede
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
dataset_info:
features:
- name: QuerySetId
dtype: uint32
- name: Title
dtype: string
- name: Description
dtype: string
- name: QueryBody
dtype: string
- name: CreationDate
dtype: string
- name: validated
dtype: bool
config_name: sede
splits:
- name: train
num_bytes: 4410584
num_examples: 10309
- name: validation
num_bytes: 380942
num_examples: 857
- name: test
num_bytes: 386599
num_examples: 857
download_size: 6318959
dataset_size: 5178125
---
# Dataset Card for SEDE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/hirupert/sede
- **Paper:** https://arxiv.org/abs/2106.05006
- **Leaderboard:** https://paperswithcode.com/sota/text-to-sql-on-sede
- **Point of Contact:** [email](moshe@hirupert.com)
### Dataset Summary
SEDE (Stack Exchange Data Explorer) is a dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their natural language descriptions. It is based on real usage by users of the Stack Exchange Data Explorer platform, which brings complexities and challenges never seen before in any other semantic parsing dataset, including complex nesting, date manipulation, numeric and text manipulation, parameters, and most importantly: under-specification and hidden assumptions.
### Supported Tasks and Leaderboards
- `parsing`: The dataset can be used to train a model for the Text-to-SQL task. A Seq2Seq model (e.g. T5) can be used to solve the task. A model with more inductive bias (e.g. a model with a grammar-based decoder) or an interactive setting for Text-to-SQL (https://arxiv.org/abs/2005.02539) can improve the results further. Model performance is measured by the [PCM-F1](https://arxiv.org/abs/2106.05006) score. A [t5-large](https://huggingface.co/t5-large) model achieves a [PCM-F1 of 50.6](https://arxiv.org/abs/2106.05006).
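As an illustration of the plain Seq2Seq framing (a sketch of one possible serialization, not necessarily the paper's own preprocessing), each sample can be turned into an input/target text pair using the schema fields `Title`, `Description` and `QueryBody`:
```python
def to_seq2seq_pair(sample: dict) -> tuple:
    """Serialize a SEDE sample into a (source, target) text pair."""
    source = sample["Title"]
    if sample.get("Description"):       # the description may be empty
        source += " | " + sample["Description"]
    target = sample["QueryBody"]        # the gold SQL query
    return source, target
```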
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a question title, (optionally) a description and its underlying SQL query. In addition, each sample has a unique ID (from the Stack Exchange Data Explorer), its creation date, and a boolean flag named `validated` that indicates whether the sample was validated to be of gold quality by humans; see the paper for full details on the `validated` flag.
An example instance:
```
{
'QuerySetId':1233,
'Title':'Top 500 Askers on the site',
'Description':'A list of the top 500 askers of questions ordered by average answer score excluding community wiki closed posts.',
'QueryBody':'SELECT * FROM (\nSELECT \n TOP 500\n OwnerUserId as [User Link],\n Count(Posts.Id) AS Questions,\n CAST(AVG(CAST(Score AS float)) as numeric(6,2)) AS [Average Question Score]\nFROM\n Posts\nWHERE \n PostTypeId = 1 and CommunityOwnedDate is null and ClosedDate is null\nGROUP BY\n OwnerUserId\nORDER BY\n Count(Posts.Id) DESC\n)ORDER BY\n [Average Question Score] DESC',
'CreationDate':'2010-05-27 20:08:16',
'validated':true
}
```
### Data Fields
- QuerySetId: a unique ID coming from the Stack Exchange Data Explorer.
- Title: utterance title.
- Description: utterance description (might be empty).
- QueryBody: the underlying SQL query.
- CreationDate: when this sample was created.
- validated: `true` if this sample was validated to be of gold quality by humans.
### Data Splits
The data is split into a training, validation and test set. The validation and test set contain only samples that were validated by humans to be in gold quality.
| Train | Validation | Test |
|-------|------------|------|
| 10309 | 857        | 857  |
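A small sketch (assuming the dataset loads under the `sede` identifier) to verify the split sizes and that the validation and test splits contain only human-validated samples:
```python
from datasets import load_dataset

sede = load_dataset("sede")  # splits: train / validation / test

for split in ("validation", "test"):
    assert all(sede[split]["validated"]), f"{split} should contain only gold samples"

print({name: sede[name].num_rows for name in sede})
# {'train': 10309, 'validation': 857, 'test': 857}
```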
## Dataset Creation
### Curation Rationale
Most available semantic parsing datasets, comprising pairs of natural utterances and logical forms, were collected solely for training and evaluating natural language understanding systems. As a result, they do not contain any of the richness and variety of naturally-occurring utterances, where humans ask about data they need or are curious about. SEDE contains a variety of real-world challenges that have rarely been reflected in other semantic parsing datasets. There is a large gap between performance on SEDE and on other common datasets, which leaves room for future research on the generalisation of Text-to-SQL models.
### Source Data
#### Initial Data Collection and Normalization
To introduce a realistic Text-to-SQL benchmark, we gather SQL queries together with their titles and descriptions from a naturally occurring source: the Stack Exchange Data Explorer. Stack Exchange is an online question-and-answer community with over 3 million questions asked. However, in its raw form many of the rows are duplicated or contain unusable queries or titles. The reason for this large difference between the original data size and the cleaned version is that every time the author of a query executes it, an entry is saved to the log. To alleviate these issues, we write rule-based filters that remove bad query/description pairs with high precision. For example, we filter out examples with numbers in the description if these numbers do not appear in the query (refer to the preprocessing script in the repository for the complete list of filters and the number of examples each of them filters). Whenever a query has multiple versions due to multiple executions, we take the last executed query that passed all filters. After this filtering step, we are left with 12,309 examples. Using these filters cleans most of the noise, but not all of it. To complete the cleaning process, we manually go over the examples in the validation and test sets, and either filter out wrong examples or perform minimal changes to either the utterances or the queries (for example, fixing a wrong textual value) to ensure that models are evaluated with correct data. The final number of all training, validation and test examples is 12,023.
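For instance, the number-consistency filter described above might look roughly like this (a hedged sketch; see the repository's preprocessing script for the actual implementation):
```python
import re

def numbers_consistent(description: str, query: str) -> bool:
    """Keep a pair only if every number in the description also appears
    somewhere in the SQL query, a high-precision noise filter."""
    description_numbers = set(re.findall(r"\d+", description or ""))
    query_numbers = set(re.findall(r"\d+", query))
    return description_numbers <= query_numbers
```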
#### Who are the source language producers?
The language producers are Stack Exchange Data Explorer (https://data.stackexchange.com/) users.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
All the data in the dataset is for public use.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that the release of this challenging dataset will encourage research on improving generalisation for real-world SQL prediction, helping non-technical business users acquire the data they need from their company's database.
### Discussion of Biases
[N/A]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Moshe Hazoom, Vibhor Malik and Ben Bogin, during work done at Rupert.
### Licensing Information
Apache-2.0 License
### Citation Information
```
@misc{hazoom2021texttosql,
title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data},
author={Moshe Hazoom and Vibhor Malik and Ben Bogin},
year={2021},
eprint={2106.05006},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Hazoom](https://github.com/Hazoom) for adding this dataset. | 8,620 | [
[
-0.025634765625,
-0.06640625,
0.027099609375,
0.01201629638671875,
-0.0195770263671875,
-0.0244903564453125,
-0.01509857177734375,
-0.0263824462890625,
0.0234832763671875,
0.06658935546875,
-0.06439208984375,
-0.07305908203125,
-0.037750244140625,
0.02478027... |
sesotho_ner_corpus | 2023-01-25T14:44:09.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:st",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{sesotho_ner_corpus,
author = {M. Setaka and
Roald Eiselen},
title = {NCHLT Sesotho Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/334},
} | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- st
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Sesotho NER Corpus
license_details: Creative Commons Attribution 2.5 South Africa License
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: sesotho_ner_corpus
splits:
- name: train
num_bytes: 4502576
num_examples: 9472
download_size: 30421109
dataset_size: 4502576
---
# Dataset Card for Sesotho NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sesotho NER Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/334)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The Sesotho NER Corpus is a Sesotho dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Sesotho language. The dataset uses CoNLL shared-task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Sesotho.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by empty lines, with tab-separated tokens and tags.
```
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0],
'tokens': ['Morero', 'wa', 'weposaete', 'ya', 'Ditshebeletso']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). OUT is used for tokens not considered part of any named entity.
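The integer tags can be decoded back into these strings through the class-label feature (a minimal sketch using the Hugging Face `datasets` API, assuming the canonical `sesotho_ner_corpus` loader):
```python
from datasets import load_dataset

ds = load_dataset("sesotho_ner_corpus", split="train")
tag_names = ds.features["ner_tags"].feature.names  # ['OUT', 'B-PERS', ...]

sample = ds[0]
for token, tag_id in zip(sample["tokens"], sample["ner_tags"]):
    print(token, tag_names[tag_id])
```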
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources for a new language, Sesotho.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on the South African government domain and was crawled from gov.za websites.
#### Who are the source language producers?
The data was produced by writers of South African government websites (gov.za).
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode)
### Citation Information
```
@inproceedings{sesotho_ner_corpus,
author = {M. Setaka and
Roald Eiselen},
title = {NCHLT Sesotho Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/334},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. | 5,542 | [
[
-0.0272064208984375,
-0.03826904296875,
0.0037822723388671875,
0.02545166015625,
-0.03057861328125,
-0.012420654296875,
-0.0260162353515625,
-0.0299072265625,
0.056671142578125,
0.046539306640625,
-0.032073974609375,
-0.053192138671875,
-0.060821533203125,
0... |
setswana_ner_corpus | 2023-01-25T14:44:12.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tn",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{sepedi_ner_corpus,
author = {S.S.B.M. Phakedi and
Roald Eiselen},
title = {NCHLT Setswana Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/341},
} | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- tn
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Setswana NER Corpus
license_details: Creative Commons Attribution 2.5 South Africa License
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: setswana_ner_corpus
splits:
- name: train
num_bytes: 3874793
num_examples: 7944
download_size: 25905236
dataset_size: 3874793
---
# Dataset Card for Setswana NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Setswana NER Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/319)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The Setswana NER Corpus is a Setswana dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Setswana language. The dataset uses CoNLL shared-task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Setswana.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by empty lines, with tab-separated tokens and tags.
```
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0],
'tokens': ['Ka', 'dinako', 'dingwe', ',', 'go']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). OUT is used for tokens not considered part of any named entity.
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources for a new language, Setswana.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on the South African government domain and was crawled from gov.za websites.
[More Information Needed]
#### Who are the source language producers?
The data was produced by writers of South African government websites (gov.za).
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode)
### Citation Information
```
@inproceedings{sepedi_ner_corpus,
author = {S.S.B.M. Phakedi and
Roald Eiselen},
title = {NCHLT Setswana Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/341},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. | 5,552 | [
[
-0.035552978515625,
-0.03326416015625,
-0.01300048828125,
0.03265380859375,
-0.0217437744140625,
-0.0018444061279296875,
-0.03973388671875,
-0.0318603515625,
0.039520263671875,
0.05096435546875,
-0.0418701171875,
-0.05908203125,
-0.0631103515625,
0.033233642... |
siswati_ner_corpus | 2023-01-25T14:44:23.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ss",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{siswati_ner_corpus,
author = {B.B. Malangwane and
M.N. Kekana and
S.S. Sedibe and
B.C. Ndhlovu and
Roald Eiselen},
title = {NCHLT Siswati Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/346},
} | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ss
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Siswati NER Corpus
license_details: Creative Commons Attribution 2.5 South Africa License
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: siswati_ner_corpus
splits:
- name: train
num_bytes: 3517151
num_examples: 10798
download_size: 21882224
dataset_size: 3517151
---
# Dataset Card for Siswati NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Siswati NER Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/346)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The Siswati NER Corpus is a Siswati dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Siswati language. The dataset uses CoNLL shared-task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Siswati.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by empty lines, with tab-separated tokens and tags.
```
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0],
'tokens': ['Tinsita', 'tebantfu', ':', 'tinsita', 'tetakhamiti']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). OUT is used for tokens not considered part of any named entity.
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources for a new language, Siswati.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on the South African government domain and was crawled from gov.za websites.
#### Who are the source language producers?
The data was produced by writers of South African government websites (gov.za).
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode)
### Citation Information
```
@inproceedings{siswati_ner_corpus,
author = {B.B. Malangwane and
M.N. Kekana and
S.S. Sedibe and
B.C. Ndhlovu and
Roald Eiselen},
title = {NCHLT Siswati Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/346},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. | 5,632 | [
[
-0.031402587890625,
-0.0280914306640625,
-0.00665283203125,
0.0259857177734375,
-0.01480865478515625,
-0.0085906982421875,
-0.0293731689453125,
-0.0267486572265625,
0.0462646484375,
0.032379150390625,
-0.048797607421875,
-0.05120849609375,
-0.06463623046875,
... |
smartdata | 2023-01-25T14:44:26.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"region:us"
] | null | DFKI SmartData Corpus is a dataset of 2598 German-language documents
which has been annotated with fine-grained geo-entities, such as streets,
stops and routes, as well as standard named entity types. It has also
been annotated with a set of 15 traffic- and industry-related n-ary
relations and events, such as Accidents, Traffic jams, Acquisitions,
and Strikes. The corpus consists of newswire texts, Twitter messages,
and traffic reports from radio stations, police and railway companies.
It allows for training and evaluating both named entity recognition
algorithms that aim for fine-grained typing of geo-entities, as well
as n-ary relation extraction systems. | @InProceedings{SCHIERSCH18.85,
author = {Martin Schiersch and Veselina Mironova and Maximilian Schmitt and Philippe Thomas and Aleksandra Gabryszak and Leonhard Hennig},
title = "{A German Corpus for Fine-Grained Named Entity Recognition and Relation Extraction of Traffic and Industry Events}",
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {May 7-12, 2018},
address = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
isbn = {979-10-95546-00-9},
language = {english}
} | 1 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: SmartData
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-DATE
'2': I-DATE
'3': B-DISASTER_TYPE
'4': I-DISASTER_TYPE
'5': B-DISTANCE
'6': I-DISTANCE
'7': B-DURATION
'8': I-DURATION
'9': B-LOCATION
'10': I-LOCATION
'11': B-LOCATION_CITY
'12': I-LOCATION_CITY
'13': B-LOCATION_ROUTE
'14': I-LOCATION_ROUTE
'15': B-LOCATION_STOP
'16': I-LOCATION_STOP
'17': B-LOCATION_STREET
'18': I-LOCATION_STREET
'19': B-NUMBER
'20': I-NUMBER
'21': B-ORGANIZATION
'22': I-ORGANIZATION
'23': B-ORGANIZATION_COMPANY
'24': I-ORGANIZATION_COMPANY
'25': B-ORG_POSITION
'26': I-ORG_POSITION
'27': B-PERSON
'28': I-PERSON
'29': B-TIME
'30': I-TIME
'31': B-TRIGGER
'32': I-TRIGGER
config_name: smartdata-v3_20200302
splits:
- name: train
num_bytes: 2124312
num_examples: 1861
- name: test
num_bytes: 266529
num_examples: 230
- name: validation
num_bytes: 258681
num_examples: 228
download_size: 18880782
dataset_size: 2649522
---
# Dataset Card for SmartData
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.dfki.de/web/forschung/projekte-publikationen/publikationen-uebersicht/publikation/9427/
- **Repository:** https://github.com/DFKI-NLP/smartdata-corpus
- **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DFKI SmartData Corpus is a dataset of 2598 German-language documents
which has been annotated with fine-grained geo-entities, such as streets,
stops and routes, as well as standard named entity types. It has also
been annotated with a set of 15 traffic- and industry-related n-ary
relations and events, such as Accidents, Traffic jams, Acquisitions,
and Strikes. The corpus consists of newswire texts, Twitter messages,
and traffic reports from radio stations, police and railway companies.
It allows for training and evaluating both named entity recognition
algorithms that aim for fine-grained typing of geo-entities, as well
as n-ary relation extraction systems.
### Supported Tasks and Leaderboards
Named entity recognition (NER)
### Languages
German
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- id: an identifier for the article the text came from
- tokens: a list of string tokens for the text of the article
- ner_tags: a corresponding list of NER tags in the BIO format
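A small sketch (assuming the canonical `smartdata` loader) that tallies how often each fine-grained entity type opens a span, which makes the geo-entity focus of the corpus visible:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("smartdata", split="train")
tag_names = ds.features["ner_tags"].feature.names

span_counts = Counter(
    tag_names[tag][2:]              # strip the "B-" prefix to keep the type
    for tags in ds["ner_tags"]
    for tag in tags
    if tag_names[tag].startswith("B-")
)
print(span_counts.most_common(5))
```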
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC-BY 4.0
### Citation Information
```
@InProceedings{SCHIERSCH18.85,
author = {Martin Schiersch and Veselina Mironova and Maximilian Schmitt and Philippe Thomas and Aleksandra Gabryszak and Leonhard Hennig},
title = "{A German Corpus for Fine-Grained Named Entity Recognition and Relation Extraction of Traffic and Industry Events}",
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {May 7-12, 2018},
address = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
isbn = {979-10-95546-00-9},
language = {english}
}
```
### Contributions
Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset. | 5,978 | [
[
-0.05047607421875,
-0.04766845703125,
0.030731201171875,
0.007572174072265625,
-0.018463134765625,
0.008453369140625,
-0.02642822265625,
-0.042327880859375,
0.040924072265625,
0.0288238525390625,
-0.049957275390625,
-0.0615234375,
-0.0391845703125,
0.0233612... |
ttc4900 | 2023-01-25T14:54:33.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tr",
"license:unknown",
"news-category-classification",
"region:us"
] | null | The data set is taken from the Kemik group:
http://www.kemik.yildiz.edu.tr/
The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth.
We named it TTC4900, mimicking the naming convention of the TTC-3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551
If you use the dataset in a paper, please refer to https://www.kaggle.com/savasy/ttc4900 in a footnote and cite one of the following papers:
- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018
- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014. | @article{doi:10.5505/pajes.2018.15931,
author = {Yıldırım, Savaş and Yıldız, Tuğba},
title = {A comparative analysis of text classification for Turkish language},
journal = {Pamukkale Univ Muh Bilim Derg},
volume = {24},
number = {5},
pages = {879-886},
year = {2018},
doi = {10.5505/pajes.2018.15931},
note ={doi: 10.5505/pajes.2018.15931},
URL = {https://dx.doi.org/10.5505/pajes.2018.15931},
eprint = {https://dx.doi.org/10.5505/pajes.2018.15931}
} | 2 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: TTC4900 - A Benchmark Data for Turkish Text Categorization
tags:
- news-category-classification
dataset_info:
features:
- name: category
dtype:
class_label:
names:
'0': siyaset
'1': dunya
'2': ekonomi
'3': kultur
'4': saglik
'5': spor
'6': teknoloji
- name: text
dtype: string
config_name: ttc4900
splits:
- name: train
num_bytes: 10640831
num_examples: 4900
download_size: 10627541
dataset_size: 10640831
---
# Dataset Card for TTC4900: A Benchmark Data for Turkish Text Categorization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TTC4900 Homepage](https://www.kaggle.com/savasy/ttc4900)
- **Repository:** [TTC4900 Repository](https://github.com/savasy/TurkishTextClassification)
- **Paper:** [A Comparison of Different Approaches to Document Representation in Turkish Language](https://dergipark.org.tr/en/pub/sdufenbed/issue/38975/456349)
- **Point of Contact:** [Savaş Yıldırım](mailto:savasy@gmail.com)
### Dataset Summary
The data set is taken from the [Kemik group](http://www.kemik.yildiz.edu.tr/).
The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth.
We named it TTC4900, mimicking the naming convention of the TTC-3600 dataset shared by the study ["A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014"](https://link.springer.com/chapter/10.1007/978-3-642-54903-8_36)
If you use the dataset in a paper, please refer to https://www.kaggle.com/savasy/ttc4900 in a footnote and cite one of the following papers:
- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018
- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Turkish.
## Dataset Structure
### Data Instances
A text classification dataset with 7 different news categories.
Here is an example from the dataset:
```
{
"category": 0, # politics/siyaset
"text": "paris teki infaz imralı ile başlayan sürece bir darbe mi elif_çakır ın sunduğu söz_bitmeden in bugünkü konuğu gazeteci melih altınok oldu programdan satıbaşları imralı ile görüşmeler hangi aşamada bundan sonra ne olacak hangi kesimler sürece engel oluyor psikolojik mayınlar neler türk solu bu dönemde evrensel sorumluluğunu yerine getirebiliyor mu elif_çakır sordu melih altınok söz_bitmeden de yanıtladı elif_çakır pkk nın silahsızlandırılmasına yönelik olarak öcalan ile görüşme sonrası 3 kadının infazı enteresan çünkü kurucu isimlerden birisi sen nasıl okudun bu infazı melih altınok herkesin ciddi anlamda şüpheleri var şu an yürüttüğümüz herşey bir delile dayanmadığı için komple teorisinden ibaret kalacak ama şöyle bir durum var imralı görüşmelerin ilk defa bir siyasi iktidar tarafından açıkça söylendiği bir dönem ardından geliyor bu sürecin gerçekleşmemesini isteyen kesimler yaptırmıştır dedi"
}
```
### Data Fields
- **category**: the category the news text belongs to, one of "politics" (siyaset), "world" (dunya), "economy" (ekonomi), "culture" (kultur), "health" (saglik), "sports" (spor) or "technology" (teknoloji).
- **text**: the text of the news article.
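The integer `category` maps back to its Turkish label via the class-label feature (a minimal sketch, assuming the canonical `ttc4900` loader):
```python
from datasets import load_dataset

ds = load_dataset("ttc4900", split="train")
labels = ds.features["category"].names
# ['siyaset', 'dunya', 'ekonomi', 'kultur', 'saglik', 'spor', 'teknoloji']

example = ds[0]
print(labels[example["category"]], example["text"][:80])
```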
### Data Splits
The data is not split; all 4,900 examples form a single train split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth.
#### Who are the source language producers?
Turkish online news sites.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by [Savaş Yıldırım](https://github.com/savasy)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{doi:10.5505/pajes.2018.15931,
author = {Yıldırım, Savaş and Yıldız, Tuğba},
title = {A comparative analysis of text classification for Turkish language},
journal = {Pamukkale Univ Muh Bilim Derg},
volume = {24},
number = {5},
pages = {879-886},
year = {2018},
doi = {10.5505/pajes.2018.15931},
note ={doi: 10.5505/pajes.2018.15931},
URL = {https://dx.doi.org/10.5505/pajes.2018.15931},
eprint = {https://dx.doi.org/10.5505/pajes.2018.15931}
}
```
### Contributions
Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset. | 6,494 | [
[
-0.03106689453125,
-0.03887939453125,
0.00753021240234375,
0.00922393798828125,
-0.03704833984375,
0.0013561248779296875,
-0.023834228515625,
-0.0168609619140625,
0.0181732177734375,
0.0221405029296875,
-0.01922607421875,
-0.07061767578125,
-0.058319091796875,
... |
Annabelleabbott/real-fake-news-workshop | 2022-01-07T00:45:18.000Z | [
"region:us"
] | Annabelleabbott | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
ApiInferenceTest/asr_dummy | 2022-02-14T11:18:56.000Z | [
"region:us"
] | ApiInferenceTest | Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.
Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .flac format and is not converted to a float32 array. To
convert the audio file to a float32 array, please make use of the `.map()`
function as follows:
```python
import soundfile as sf
def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Check/region_9 | 2021-09-04T11:09:23.000Z | [
"region:us"
] | Check | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
Crives/haha | 2022-02-22T09:40:35.000Z | [
"region:us"
] | Crives | null | null | 1 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
DSCI511G1/COP26_Energy_Transition_Tweets | 2021-12-06T17:53:41.000Z | [
"region:us"
] | DSCI511G1 | null | null | 2 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
Doohae/modern_music_re | 2021-12-06T05:58:20.000Z | [
"region:us"
] | Doohae | null | null | 0 | 85 | 2022-03-02T23:29:22 | Dataset for the Relation Extraction task
Sourced from Wikipedia (CC-BY-2.0)
Contributors: Doohae Jung, Hyesu Kim, Bosung Kim, Isaac Park, Miwon Jeon, Dagon Lee, Jihoo Kim | 173 | [
[
-0.02716064453125,
-0.0236968994140625,
0.04779052734375,
0.005401611328125,
-0.0028076171875,
-0.048248291015625,
-0.0170135498046875,
-0.030609130859375,
-0.006954193115234375,
0.056915283203125,
-0.05535888671875,
-0.026580810546875,
-0.043609619140625,
0... |
DrishtiSharma/mr_opus100_processed | 2022-02-09T14:28:39.000Z | [
"region:us"
] | DrishtiSharma | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DrishtiSharma/or_opus100_processed | 2022-02-10T03:12:01.000Z | [
"region:us"
] | DrishtiSharma | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Dumiiii/common-voice-romaniarss | 2022-01-11T11:29:09.000Z | [
"region:us"
] | Dumiiii | null | null | 0 | 85 | 2022-03-02T23:29:22 | This dataset consists of the latest version of the Common Voice dataset for the Romanian language.
It also contains data from RSS (Romanian Speech Synthesis Dataset) from this site: http://romaniantts.com/ | 197 | [
[
-0.021087646484375,
-0.0166778564453125,
0.0148468017578125,
0.011749267578125,
-0.0199737548828125,
0.0173797607421875,
-0.0232086181640625,
-0.0088958740234375,
0.038482666015625,
0.041656494140625,
-0.06842041015625,
-0.056976318359375,
-0.007389068603515625,... |
Fraser/mnist-text-no-spaces | 2021-02-05T16:03:35.000Z | [
"region:us"
] | Fraser | MNIST dataset adapted to a text-based representation.
This allows testing interpolation quality for Transformer-VAEs.
The system is heavily inspired by Matthew Rayfield's work: https://youtu.be/Z9K3cwSL6uM
Works by quantising each MNIST pixel into one of 64 characters.
Every sample has an up & down version to encourage the model to learn rotation-invariant features.
Use the `.array_to_text()` and `.text_to_array()` methods to test your generated data.
Removed spaces to get better BPE compression on sequences.
**Should only be used with a trained tokenizer.**
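Below is a minimal sketch of the quantisation described above, assuming 8-bit pixels and that the 64 characters are the ASCII range starting at `!` (the background character in the sample digit); the function names echo the methods mentioned above but are illustrative re-implementations, not the package's actual code.
```python
import re
import numpy as np

# Assumed character set: 64 printable ASCII characters starting at '!' (33).
CHARS = [chr(33 + i) for i in range(64)]

def array_to_text(pixels: np.ndarray, direction: str = "down") -> str:
    """Quantise a (28, 28) uint8 image into one character per pixel."""
    levels = (pixels.astype(np.int32) * 64) // 256  # map 0..255 -> 0..63
    rows = ["".join(CHARS[v] for v in row) for row in levels]
    # Each row carries a two-digit index plus an orientation tag, as in the
    # sample digit shown below.
    return "\n".join(f"{i:02d}{direction}{row}" for i, row in enumerate(rows))

def text_to_array(text: str) -> np.ndarray:
    """Invert the mapping back to approximate 8-bit pixel values."""
    rows = [re.sub(r"^\d{2}(down|up)", "", line) for line in text.splitlines()]
    values = [[(ord(c) - 33) * 256 // 64 for c in row] for row in rows]
    return np.array(values, dtype=np.uint8)

# Round-trip check: the inverse recovers pixels up to quantisation error.
img = (np.random.default_rng(0).random((28, 28)) * 255).astype(np.uint8)
assert (text_to_array(array_to_text(img)) // 4 == img // 4).all()
```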
Data format:
- text (30 x 28 tokens, 840 tokens total): textual representation of an MNIST digit, for example:
```
00down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
01down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
02down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
03down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
04down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
05down!!!!!!!!!!!!!%%%@CL'Ja^@!!!!
06down!!!!!!!!(*8GK`````YL`]Q1!!!!
07down!!!!!!!-\\````````_855/*!!!!!
08down!!!!!!!%W`````RN^]!!!!!!!!!!
09down!!!!!!!!5H;``T#!+G!!!!!!!!!!
10down!!!!!!!!!$!G`7!!!!!!!!!!!!!!
11down!!!!!!!!!!!C`P!!!!!!!!!!!!!!
12down!!!!!!!!!!!#P`2!!!!!!!!!!!!!
13down!!!!!!!!!!!!)]YI<!!!!!!!!!!!
14down!!!!!!!!!!!!!5]``>'!!!!!!!!!
15down!!!!!!!!!!!!!!,O``F'!!!!!!!!
16down!!!!!!!!!!!!!!!%8``O!!!!!!!!
17down!!!!!!!!!!!!!!!!!_`_1!!!!!!!
18down!!!!!!!!!!!!!!,AN``T!!!!!!!!
19down!!!!!!!!!!!!*FZ```_N!!!!!!!!
20down!!!!!!!!!!'=X````S4!!!!!!!!!
21down!!!!!!!!&1V````R5!!!!!!!!!!!
22down!!!!!!%KW````Q5#!!!!!!!!!!!!
23down!!!!.LY````^B#!!!!!!!!!!!!!!
24down!!!!C```VBB%!!!!!!!!!!!!!!!!
25down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
26down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
27down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
```
- label: the digit that the text representation depicts. | @dataset{dataset,
author = {Fraser Greenlee},
year = {2021},
month = {2},
pages = {},
title = {MNIST text dataset (no spaces).},
doi = {}
} | 1 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Fraser/python-lines | 2021-02-22T10:20:34.000Z | [
"region:us"
] | Fraser | Dataset of single lines of Python code taken from the [CodeSearchNet](https://github.com/github/CodeSearchNet) dataset.
Context
This dataset allows checking the validity of Variational-Autoencoder latent spaces by testing what percentage of random/intermediate latent points can be greedily decoded into valid Python code.
Content
Each row has a parsable line of source code.
{'text': '{python source code line}'}
Most lines are < 100 characters while all are under 125 characters.
Contains 2.6 million lines.
All code is parsable into a python3 ast. | @dataset{dataset,
author = {Fraser Greenlee},
year = {2020},
month = {12},
pages = {},
title = {Python single line dataset.},
doi = {}
} | 1 | 85 | 2022-03-02T23:29:22 | Dataset of single lines of Python code taken from the [CodeSearchNet](https://github.com/github/CodeSearchNet) dataset.
Context
This dataset allows checking the validity of Variational-Autoencoder latent spaces by testing what percentage of random/intermediate latent points can be greedily decoded into valid Python code.
Content
Each row has a parsable line of source code.
{'text': '{python source code line}'}
Most lines are < 100 characters while all are under 125 characters.
Contains 2.6 million lines.
All code is parsable into a python3 ast.
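As a usage sketch (not part of the original card), the parsability claim can be spot-checked with the standard `ast` module; the split-slicing syntax is ordinary `datasets` usage:
```python
import ast
from datasets import load_dataset

def parses(src: str) -> bool:
    """True iff `src` is valid Python 3 according to the ast module."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

# A small sample is enough for illustration.
ds = load_dataset("Fraser/python-lines", split="train[:1000]")
valid = sum(parses(row["text"]) for row in ds)
print(f"{valid}/{len(ds)} sampled lines parse as valid Python 3")
```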
| 561 | [
[
-0.0244140625,
-0.036590576171875,
-0.0055694580078125,
0.029937744140625,
0.0039825439453125,
-0.033966064453125,
0.00691986083984375,
0.0100555419921875,
0.00930023193359375,
0.05706787109375,
-0.040985107421875,
-0.034912109375,
0.0022144317626953125,
0.0... |
Fraser/wiki_sentences | 2021-07-21T07:43:08.000Z | [
"region:us"
] | Fraser | null | null | 0 | 85 | 2022-03-02T23:29:22 | # Wiki Sentences
A dataset of all English sentences in Wikipedia.
Taken from the OPTIMUS project. https://github.com/ChunyuanLI/Optimus/blob/master/download_datasets.md
The dataset is 11.8GB, so it is best loaded with streaming:
```python
from datasets import load_dataset
dataset = load_dataset("Fraser/wiki_sentences", split='train', streaming=True)
```
| 358 | [
[
-0.0258026123046875,
-0.0340576171875,
0.0165557861328125,
-0.00341796875,
-0.0237884521484375,
-0.0229644775390625,
-0.02984619140625,
-0.00829315185546875,
0.04534912109375,
0.040557861328125,
-0.049346923828125,
-0.007007598876953125,
-0.0124969482421875,
... |
GEM/Taskmaster | 2022-10-24T15:30:09.000Z | [
"task_categories:conversational",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"dialog-response-generation",
"arxiv:2012.12458",
"region:us"
] | GEM | The Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs
(located in Taskmaster/TM-3-2020/data/). By "movie ticketing" we mean conversations
where the customer's goal is to purchase tickets after deciding on theater, time,
movie name, number of tickets, and date, or opt out of the transaction.
The columns are gem_id; 0 and 1 for serial numbering; 2 for the dialog text; and id
for the default id assigned by the authors. | @article{byrne2020tickettalk,
title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
author={Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir Sanjay},
journal={arXiv preprint arXiv:2012.12458},
year={2020}
} | 1 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conversational
task_ids: []
pretty_name: Taskmaster
tags:
- dialog-response-generation
---
# Dataset Card for GEM/Taskmaster
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- **Paper:** https://arxiv.org/abs/2012.12458
- **Leaderboard:** N/A
- **Point of Contact:** Karthik Krishnamoorthi
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/Taskmaster).
### Dataset Summary
This is a large task-oriented dialog dataset in which a model has to produce the response. The input contains the context and a structured representation of what the model is supposed to generate. The input is already pre-formatted as string, turning this into a pure text-to-text problem.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/Taskmaster')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/Taskmaster).
#### website
[Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
#### paper
[Arxiv](https://arxiv.org/abs/2012.12458)
#### authors
Google researchers
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Arxiv](https://arxiv.org/abs/2012.12458)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@article{byrne2020tickettalk,
title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
author={Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir Sanjay},
journal={arXiv preprint arXiv:2012.12458},
year={2020}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Karthik Krishnamoorthi
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
krishnamoorthi@google.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
NA
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
NA
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-4.0: Creative Commons Attribution 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Dialogues
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Dialog Response Generation
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
a movie ticketing dialog dataset with 23,789 annotated conversations.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`other`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
NA
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Google researchers
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Tosin Adewumi (Luleå University of Technology)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: The unique example id
- `context`: The context of the conversation
- `target`: A string representing the target
- `references`: A List representing the target(s)
- `conversation_id`: A unique ID of the conversation
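A minimal usage sketch of these fields, assuming the loader shown earlier in this card:
```python
import datasets

# Load the GEM version and inspect the fields listed above.
data = datasets.load_dataset("GEM/Taskmaster")
ex = data["train"][0]

print(ex["gem_id"])        # unique example id, e.g. 'Taskmaster-train-0'
print(ex["context"][:80])  # serialized dialog context (truncated here)
print(ex["target"])        # the reference response string
print(ex["references"])    # list form of the target(s)
```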
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
NA
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
NA
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{'context': "<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated R<C><U>I wanna see a movie<A>where are you?<U>spring hills kansas<PN>find_theaters<PAN>location<PAV>spring hills kansas<PR>find_theaters<PRAN>name.theater<PRAV>AMC Holiday Theater<PRAV>Cinemark Downtown<A>there are 2 theaters near you, the AMC Holiday Theater and Cinemark Downtown. Did you know which movie you'd like to see?<U>funny one please<PN>find_movies<PAN>location<PAV>spring hills kansas<PR>find_movies<PRAN>name.movie<PRAV>Not My Problem<PRAV>Family Jewels<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Matt Damon<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Noah Schnapp<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>romantic comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Melissa McCarthy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Ryan Reynolds<A>There's the comedy film called Not My Problem starring Matt Damon and Noah Schnapp. There's also a romantic comedy called Family Jewels starring Melissa McCarthy and Ryan Reynolds.<U>what ratings are there?<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>rating.movie<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated PG-13<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>rating.movie",
'conversation_id': 'dlg-d1f52e7e-c34c-4e85-b406-85ed138b5068',
'gem_id': 'Taskmaster-train-0',
'references': ['Not My Problem is rated PG-13 and Family Jewels is rated R.'],
'target': 'Not My Problem is rated PG-13 and Family Jewels is rated R.'}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `train`: 187182 examples
- `dev`: 23406 examples
- `test`: 23316 examples
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
NA
####
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
NA
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
Dialogue generation that makes sense
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
NA
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
NA
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
gem_id field was added to the 3 data splits
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
NA
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
BLEU: 60
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
automatic evaluation
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
NA
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
NA
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
NA
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
a movie ticketing dialog dataset with 23,789 annotated conversations.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Participatory experiment`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
NA
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Ticketing
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
NA
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
It's based on ticketing without personal information
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
NA
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
NA
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
NA
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
NA
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
NA
| 17,520 | [
[
-0.0258026123046875,
-0.056121826171875,
0.026519775390625,
-0.01398468017578125,
-0.0026092529296875,
0.00569915771484375,
-0.009521484375,
-0.0122528076171875,
0.023529052734375,
0.0372314453125,
-0.0633544921875,
-0.04931640625,
-0.042327880859375,
0.0052... |
GEM/dstc10_track2_task2 | 2022-10-24T15:30:17.000Z | [
"task_categories:conversational",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"dialog-response-generation",
"region:us"
] | GEM | \ | @article{kim2020domain,
title={Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access},
author={Seokhwan Kim and Mihail Eric and Karthik Gopalakrishnan and Behnam Hedayatnia and Yang Liu and Dilek Hakkani-Tur},
journal={arXiv preprint arXiv:2006.03533},
year={2020}
} | 4 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- apache-2.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conversational
task_ids: []
pretty_name: dstc10_track2_task2
tags:
- dialog-response-generation
---
# Dataset Card for GEM/dstc10_track2_task2
## Dataset Description
- **Homepage:** https://github.com/alexa/alexa-with-dstc10-track2-dataset
- **Repository:** https://github.com/alexa/alexa-with-dstc10-track2-dataset
- **Paper:** https://assets.amazon.science/54/a1/5282d47044179737b4289622c824/how-robust-are-you-evaluating-task-oriented-dialogue-systems-on-spoken-conversations.pdf
- **Leaderboard:** https://eval.ai/challenge/1663/overview
- **Point of Contact:** Seokhwan Kim
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/dstc10_track2_task2).
### Dataset Summary
The DSTC10 Track2 Task 2 follows the DSTC9 Track1 task, where participants have to implement knowledge-grounded dialog systems.
The training dataset is inherited from the DSTC9 challenge and is in the written domain, while the test set is newly collected and consists of noisy ASR transcripts.
Hence, the dataset facilitates building models for grounded dialog response generation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/dstc10_track2_task2')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/dstc10_track2_task2).
#### website
https://github.com/alexa/alexa-with-dstc10-track2-dataset
#### paper
https://assets.amazon.science/54/a1/5282d47044179737b4289622c824/how-robust-are-you-evaluating-task-oriented-dialogue-systems-on-spoken-conversations.pdf
#### authors
Seokhwan Kim, Yang Liu, Di Jin, Alexandros Papangelis, Karthik Gopalakrishnan, Behnam Hedayatnia, Dilek Hakkani-Tur (Amazon Alexa AI)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
https://github.com/alexa/alexa-with-dstc10-track2-dataset
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/alexa/alexa-with-dstc10-track2-dataset
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://assets.amazon.science/54/a1/5282d47044179737b4289622c824/how-robust-are-you-evaluating-task-oriented-dialogue-systems-on-spoken-conversations.pdf
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{kim2021robust,
title={" How Robust ru?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations},
author={Kim, Seokhwan and Liu, Yang and Jin, Di and Papangelis, Alexandros and Gopalakrishnan, Karthik and Hedayatnia, Behnam and Hakkani-Tur, Dilek},
journal={IEEE Automatic Speech Recognition and Understanding Workshop},
year={2021}
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Seokhwan Kim
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
seokhwk@amazon.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
https://eval.ai/challenge/1663/overview
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
It evaluates the models based on the automatic metrics defined in the task paper for the three tasks of detection, selection and generation.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`En`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
apache-2.0: Apache License 2.0
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
To conduct research on dialogue state tracking and knowledge-grounded response generation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Dialog Response Generation
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
This dataset aims to explore the robustness of conversational models when trained on spoken data. It has two aspects, multi-domain dialogue state tracking and conversation modeling with access to unstructured knowledge.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Amazon
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Seokhwan Kim, Yang Liu, Di Jin, Alexandros Papangelis, Karthik Gopalakrishnan, Behnam Hedayatnia, Dilek Hakkani-Tur (Amazon Alexa AI)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Amazon
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Alexandros Papangelis (Amazon Alexa AI), Di Jin (Amazon Alexa AI), Nico Daheim (RWTH Aachen University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
features = datasets.Features(
{
"id": datasets.Value("string"),
"gem_id": datasets.Value("string"),
"turns": [
{
"speaker": datasets.Value("string"),
"text": datasets.Value("string"),
"nbest": [
{
"hyp": datasets.Value("string"),
"score": datasets.Value("float"),
}
],
}
],
"knowledge": {
"domain": datasets.Value("string"),
"entity_name": datasets.Value("string"),
"title": datasets.Value("string"),
"body": datasets.Value("string"),
},
"response": datasets.Value("string"),
"source": datasets.Value("string"),
"linearized_input": datasets.Value("string"),
"target": datasets.Value("string"),
"references": [datasets.Value("string")],
}
)
`nbest` contains an n-best list of hypotheses generated by an ASR system for a turn, along with their scores.
`knowledge` defines the annotated grounding snippet as well as its metadata.
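A minimal usage sketch of these fields, assuming the loader shown earlier in this card: pick the highest-scoring ASR hypothesis for each user turn of one dialog.
```python
import datasets

# Load the GEM version and walk the turns of the first test dialog.
data = datasets.load_dataset("GEM/dstc10_track2_task2")
ex = data["test"][0]

for turn in ex["turns"]:
    # System turns carry no ASR output, so their nbest list is empty.
    if turn["speaker"] == "U" and turn["nbest"]:
        best = max(turn["nbest"], key=lambda h: h["score"])
        print(f"{best['score']:.2f}  {best['hyp']}")
```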
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
It was kept compatible with MultiWOZ 2.X data.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{'id': '0',
'gem_id': 'GEM-dstc10_track2_task2-test-0',
'turns': [{'speaker': 'U',
'text': "hi uh i'm looking for restaurant in lower ha",
'nbest': [{'hyp': "hi uh i'm looking for restaurant in lower ha",
'score': -25.625450134277344},
{'hyp': "hi uh i'm looking for restaurant in lower hai",
'score': -25.969446182250977},
{'hyp': "hi uh i'm looking for restaurant in lower haig",
'score': -32.816890716552734},
{'hyp': "hi uh i'm looking for restaurant in lower haigh",
'score': -32.84316635131836},
{'hyp': "hi uh i'm looking for restaurant in lower hag",
'score': -32.8637580871582},
{'hyp': "hi uh i'm looking for restaurant in lower hah",
'score': -33.1048698425293},
{'hyp': "hi uh i'm looking for restaurant in lower hait",
'score': -33.96509552001953},
{'hyp': "hi um i'm looking for restaurant in lower hai",
'score': -33.97885513305664},
{'hyp': "hi um i'm looking for restaurant in lower haig",
'score': -34.56083679199219},
{'hyp': "hi um i'm looking for restaurant in lower haigh",
'score': -34.58711242675781}]},
{'speaker': 'S',
'text': 'yeah definitely i can go ahead and help you with that ummm what kind of option in a restaurant are you looking for',
'nbest': []},
{'speaker': 'U',
'text': 'yeah umm am looking for an expensive restaurant',
'nbest': [{'hyp': 'yeah umm am looking for an expensive restaurant',
'score': -21.272899627685547},
{'hyp': 'yeah umm m looking for an expensive restaurant',
'score': -21.444047927856445},
{'hyp': 'yeah umm a m looking for an expensive restaurant',
'score': -21.565458297729492},
{'hyp': 'yeah ummm am looking for an expensive restaurant',
'score': -21.68832778930664},
{'hyp': 'yeah ummm m looking for an expensive restaurant',
'score': -21.85947608947754},
{'hyp': 'yeah ummm a m looking for an expensive restaurant',
'score': -21.980886459350586},
{'hyp': "yeah umm a'm looking for an expensive restaurant",
'score': -22.613924026489258},
{'hyp': "yeah ummm a'm looking for an expensive restaurant",
'score': -23.02935218811035},
{'hyp': 'yeah um am looking for an expensive restaurant',
'score': -23.11180305480957},
{'hyp': 'yeah um m looking for an expensive restaurant',
'score': -23.28295135498047}]},
{'speaker': 'S',
'text': "lemme go ahead and see what i can find for you ok great so i do ummm actually no i'm sorry is there something else i can help you find i don't see anything expensive",
'nbest': []},
{'speaker': 'U',
'text': "sure ummm maybe if you don't have anything expensive how about something in the moderate price range",
'nbest': [{'hyp': "sure ummm maybe if you don't have anything expensive how about something in the moderate price range",
'score': -27.492507934570312},
{'hyp': "sure umm maybe if you don't have anything expensive how about something in the moderate price range",
'score': -27.75853729248047},
{'hyp': "sure ummm maybe if you don't have anything expensive how about something in the moderate price rang",
'score': -29.44410514831543},
{'hyp': "sure umm maybe if you don't have anything expensive how about something in the moderate price rang",
'score': -29.710134506225586},
{'hyp': "sure um maybe if you don't have anything expensive how about something in the moderate price range",
'score': -31.136560440063477},
{'hyp': "sure um maybe if you don't have anything expensive how about something in the moderate price rang",
'score': -33.088157653808594},
{'hyp': "sure ummm maybe i you don't have anything expensive how about something in the moderate price range",
'score': -36.127620697021484},
{'hyp': "sure umm maybe i you don't have anything expensive how about something in the moderate price range",
'score': -36.39365005493164},
{'hyp': "sure ummm maybe if yo don't have anything expensive how about something in the moderate price range",
'score': -36.43605041503906},
{'hyp': "sure umm maybe if yo don't have anything expensive how about something in the moderate price range",
'score': -36.70207977294922}]},
{'speaker': 'S',
'text': 'ok moderate lemme go ahead and check to see what i can find for moderate ok great i do have several options coming up how does the view lounge sound',
'nbest': []},
{'speaker': 'U',
'text': 'that sounds good ummm do they have any sort of happy hour special',
'nbest': [{'hyp': 'that sounds good ummm do they have any sort of happy hour special',
'score': -30.316478729248047},
{'hyp': 'that sounds good umm do they have any sort of happy hour special',
'score': -30.958009719848633},
{'hyp': 'that sounds good um do they have any sort of happy hour special',
'score': -34.463165283203125},
{'hyp': 'that sounds good ummm do they have any sirt of happy hour special',
'score': -34.48350143432617},
{'hyp': 'that sounds good umm do they have any sirt of happy hour special',
'score': -35.12503433227539},
{'hyp': 'that sounds good ummm do they have any sord of happy hour special',
'score': -35.61939239501953},
{'hyp': 'that sounds good umm do they have any sord of happy hour special',
'score': -36.26092529296875},
{'hyp': 'that sounds good ummm do they have any sont of happy hour special',
'score': -37.697105407714844},
{'hyp': 'that sounds good umm do they have any sont of happy hour special',
'score': -38.33863830566406},
{'hyp': 'that sounds good um do they have any sirt of happy hour special',
'score': -38.630191802978516}]}],
'knowledge': {'domain': 'restaurant',
'entity_name': 'The View Lounge',
'title': 'Does The View Lounge offer happy hour?',
'body': 'The View Lounge offers happy hour.'},
'response': 'uhhh great question lemme go ahead and check that out for you ok fantastic so it looks like they do offer happy hour',
'source': 'sf_spoken',
'linearized_input': "<U> hi uh i'm looking for restaurant in lower ha <S> yeah definitely i can go ahead and help you with that ummm what kind of option in a restaurant are you looking for <U> yeah umm am looking for an expensive restaurant <S> lemme go ahead and see what i can find for you ok great so i do ummm actually no i'm sorry is there something else i can help you find i don't see anything expensive <U> sure ummm maybe if you don't have anything expensive how about something in the moderate price range <S> ok moderate lemme go ahead and check to see what i can find for moderate ok great i do have several options coming up how does the view lounge sound <U> that sounds good ummm do they have any sort of happy hour special || knowledge domain: restaurant, entity: The View Lounge, title: Does The View Lounge offer happy hour?, information: The View Lounge offers happy hour.",
'target': 'uhhh great question lemme go ahead and check that out for you ok fantastic so it looks like they do offer happy hour',
'references': ['uhhh great question lemme go ahead and check that out for you ok fantastic so it looks like they do offer happy hour']}
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
train: training set, val: validation set, test: test set
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The track dataset originally only consists of a validation and test set in the spoken domain with noisy ASR transcripts.
The training set is taken from the predecessor task DSTC9 Track 1 and contains written conversations.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset can be used to evaluate conversational models on spoken inputs (using ASR hypotheses). In particular, we can evaluate the models’ ability to understand language by tracking the dialogue state, and their ability to generate knowledge-grounded responses.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
This dataset contains transcribed spoken interactions.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
We can measure the model’s ability to understand language and to generate knowledge-grounded responses.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
This dataset can be used to evaluate conversational models on spoken inputs (using ASR hypotheses). In particular, we can evaluate the models’ ability to generate knowledge-grounded responses.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
We want to explore how conversational models perform on spoken data.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
This dataset aims to explore the robustness of conversational models when evaluated on spoken data. It has two aspects, multi-domain dialogue state tracking and conversation modeling with access to unstructured knowledge.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Other`
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The conversations revolve around 5 domains (or topics): hotels, restaurants, attractions, taxi, train.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The subjects were instructed to conduct fictional conversations about booking restaurants or requesting fictional information.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
There should be no risk related to PII as the subjects conduct fictional conversations.
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
| 23,017 | [
[
-0.0300140380859375,
-0.0689697265625,
0.02825927734375,
-0.0150604248046875,
0.0008697509765625,
0.005931854248046875,
-0.021026611328125,
-0.01568603515625,
0.00428009033203125,
0.027313232421875,
-0.05242919921875,
-0.050323486328125,
-0.042388916015625,
... |
GEM/surface_realisation_st_2020 | 2022-10-24T15:30:30.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:ar",
"language:zh",
"language:en",
"language:fr",
"language:hi",
"language:id",
"language:ja",
"language:ko... | GEM | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- zh
- en
- fr
- hi
- id
- ja
- ko
- pt
- ru
- es
license:
- cc-by-2.5
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: surface_realisation_st_2020
tags:
- data-to-text
---
# Dataset Card for GEM/surface_realisation_st_2020
## Dataset Description
- **Homepage:** http://taln.upf.edu/pages/msr2020-ws/SRST.html#data
- **Repository:** https://sites.google.com/site/genchalrepository/surface-realisation/sr-20-multilingual
- **Paper:** https://aclanthology.org/2020.msr-1.1/
- **Leaderboard:** N/A
- **Point of Contact:** Simon Mille
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/surface_realisation_st_2020).
### Dataset Summary
This dataset was used as part of the multilingual surface realization shared task in which a model gets full or partial universal dependency structures and has to reconstruct the natural language sentence. This dataset supports 11 languages.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/surface_realisation_st_2020')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/surface_realisation_st_2020).
#### website
[Website](http://taln.upf.edu/pages/msr2020-ws/SRST.html#data)
#### paper
[ACL Anthology](https://aclanthology.org/2020.msr-1.1/)
#### authors
Simon Mille (Pompeu Fabra University); Leo Wanner (Pompeu Fabra University); Anya Belz (Brighton University); Bernd Bohnet (Google Inc.); Thiago Castro Ferreira (Federal University of Minas Gerais); Yvette Graham (ADAPT/Trinity College Dublin)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Website](http://taln.upf.edu/pages/msr2020-ws/SRST.html#data)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Website](https://sites.google.com/site/genchalrepository/surface-realisation/sr-20-multilingual)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2020.msr-1.1/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{mille-etal-2020-third,
title = "The Third Multilingual Surface Realisation Shared Task ({SR}{'}20): Overview and Evaluation Results",
author = "Mille, Simon and
Belz, Anya and
Bohnet, Bernd and
Castro Ferreira, Thiago and
Graham, Yvette and
Wanner, Leo",
booktitle = "Proceedings of the Third Workshop on Multilingual Surface Realisation",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.msr-1.1",
pages = "1--20",
abstract = "This paper presents results from the Third Shared Task on Multilingual Surface Realisation (SR{'}20) which was organised as part of the COLING{'}20 Workshop on Multilingual Surface Realisation. As in SR{'}18 and SR{'}19, the shared task comprised two tracks: (1) a Shallow Track where the inputs were full UD structures with word order information removed and tokens lemmatised; and (2) a Deep Track where additionally, functional words and morphological information were removed. Moreover, each track had two subtracks: (a) restricted-resource, where only the data provided or approved as part of a track could be used for training models, and (b) open-resource, where any data could be used. The Shallow Track was offered in 11 languages, whereas the Deep Track in 3 ones. Systems were evaluated using both automatic metrics and direct assessment by human evaluators in terms of Readability and Meaning Similarity to reference outputs. We present the evaluation results, along with descriptions of the SR{'}19 tracks, data and evaluation methods, as well as brief summaries of the participating systems. For full descriptions of the participating systems, please see the separate system reports elsewhere in this volume.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Simon Mille
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
sfmille@gmail.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
No multiple dialects.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`Arabic`, `Chinese`, `English`, `French`, `Hindi`, `Indonesian`, `Japanese`, `Korean`, `Portuguese`, `Russian`, `Spanish, Castilian`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
Unknown
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-2.5: Creative Commons Attribution 2.5 Generic
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset is intended to be used for training models to solve several NLG subtasks, such as function word introduction, morphological agreement resolution, word order determination and inflection generation.
Comment about the license: the dataset has multiple licences, since each original dataset comes with its own. All datasets but one are CC-BY or subclasses of it; the remaining one (the French Sequoia corpus) is GPL.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The models are able to introduce surface features (syntax, morphology, topology) from more or less abstract inputs in different languages, the most abstract being predicate-argument structures. The datasets cover a large variety of domains (news, blogs, forums, Wikipedia pages, etc.).
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`, `academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Pompeu Fabra University, Google Inc., University of Brighton, Federal University of Minas Gerais, ADAPT/Trinity College Dublin
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Simon Mille (Pompeu Fabra University); Leo Wanner (Pompeu Fabra University); Anya Belz (Brighton University); Bernd Bohnet (Google Inc.); Thiago Castro Ferreira (Federal University of Minas Gerais); Yvette Graham (ADAPT/Trinity College Dublin)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Mostly EU funds via H2020 projects
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Simon Mille (Pompeu Fabra University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
`input` (string): this field contains an input tree in CoNLL-U format; the CoNLL-U format is a one-word-per-line format with the following tab-separated 10 columns (see [here](http://universaldependencies.org/format.html)): [1] Position, [2] Lemma, [3] Wordform, [4] Part of Speech, [5] Fine-grained Part of Speech (if available), [6] Features (FEATS), [7] governor, [8] dependency relation, [9] additional dependency information, and [10] metadata. For the surface task, the input is a Universal Dependency tree of a given language in which the word order was scrambled and the surface forms removed (only lemmas are available); for the deep task, the input is a tree derived from the surface input, with predicate-argument relations between content words only (function words were removed) and without any morphological agreement information.
`target_tokenized` (string): this field contains the target sentence to generate, in which every non-initial and non-final token is surrounded by two spaces. This output is usually used for automatic evaluations.
`target` (string): this field contains the detokenised target sentence to generate. This output is usually used for human evaluations.
`gem_id` (string): a unique ID.
`sentence_id` (string): the original ID of a sentence in the UD dataset.
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure of the input (CoNLL-U) was chosen according to the standards in parsing, and because the original UD datasets were provided in this format.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
The input labels for the surface track are the original labels in the UD treebanks; see [here](https://universaldependencies.org/u/dep/index.html) for the dependencies, [here](https://universaldependencies.org/u/feat/index.html) for the features, and [here](https://universaldependencies.org/u/pos/index.html) for the PoS tags.
The input labels for the deep track are a subset of the PoS tags and features of the surface track, and for the relations, universal predicate-argument relations augmented with a few specific relations to capture coordinations and named entity relations for instance.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{"input": "1\tGoogle\t_\tPROPN\tNNP\tNumber=Sing\t5\tnsubj\t_\t_\n2\t\t_\tPUNCT\t.\tlin=+1\t5\tpunct\t_\t_\n3\tinto\t_\tADP\tIN\t_\t6\tcase\t_\t_\n4\tif\t_\tSCONJ\tIN\t_\t5\tmark\t_\t_\n5\tmorph\t_\tVERB\tVBD\tMood=Ind|Tense=Past|VerbForm=Fin\t7\tadvcl\t_\t_\n6\tGoogleOS\t_\tPROPN\tNNP\tNumber=Sing\t5\tobl\t_\t_\n7\twhat\t_\tPRON\tWP\tPronType=Int\t0\troot\t_\t_", "target_tokenized": "What if Google Morphed Into GoogleOS ?", "target": "What if Google Morphed Into GoogleOS?", "gem_id": "GEM-surface_realisation_st_2020-T1-test-en_ewt-ud-test-0", "sentence_id": ""}
```
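Assuming the dataset id on the Hub matches the `gem_id` prefix above (an assumption, not stated in this card), the data could be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Dataset id assumed from the gem_id prefix above; adjust if it differs.
dataset = load_dataset("GEM/surface_realisation_st_2020")
```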
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
There are 119 splits in the dataset:
- 29 training sets, which correspond to 20 UD datasets (11 languages), 9 of which have both surface and deep inputs (3 languages);
- 29 development sets, which correspond to the 29 training sets above;
- 29 test sets for the data described above;
- 4 out-of-domain test sets, 3 surface inputs and 1 deep one (3 languages for which PUD out-of-domain datasets were available);
- 9 automatically parsed in-domain test sets, 6 surface inputs and 3 deep inputs (6 languages for which good UD parsers were available);
- 9 automatically parsed out-of-domain test sets, 6 surface inputs and 3 deep inputs (6 languages for which we were able to create clean Wikipedia text and that had a good UD parser).
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The criteria are described together with the splits in the Data Splits section above for clarity.
#### Outliers
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
An outlier would usually be an input that corresponds to a very long sentence (e.g. 159 words in English, when the average number of words per sentence is around 25).
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The dataset includes languages from different families and some languages not often used in NLG (e.g. Arabic, Indonesian, Korean, Hindi). It proposes two tasks, which can be tackled both separately and in one shot, with different levels of difficulty: the most superficial task (T1) consists in ordering and inflecting some trees, and the deeper task (T2) includes extra subtasks such as defining the syntactic structure and introducing function words and morphological agreement information. Both tasks allow for developing modules for pipeline NLG architectures. T1 is rather straightforward to evaluate: BLEU works quite well for some languages, since all the words are present in the input and only a few word orders are possible for a given syntactic tree. T2 is more challenging to evaluate, since many more outputs can be correct for one particular input.
There is a large variety of sizes in the datasets, both clean and noisy data, parallel data in different languages, and many already available system outputs to use as baselines.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
This is possibly the only dataset that starts the generation process from predicate-argument structures and from syntactic structures. It also has parallel datasets in a few languages (coming from the PUD parallel annotations).
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Syntacticisation, functional word introduction, word order resolution, agreement resolution, morphological inflection
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
[Website](http://taln.upf.edu/pages/msr2020-ws/SRST.html)
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
Syntacticisation: prediction of the syntactic structure of a sentence (see the description of the deep task above).
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Syntacticisation, functional word introduction, word order resolution, morphological agreement resolution, morphological inflection
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `BERT-Score`, `Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
NIST: n-gram similarity metric weighted in favour of less frequent n-grams which are taken to be more informative.
Normalised edit distance (DIST): inverse, normalised, character-based string-edit distance that starts by computing the minimum number of character inserts, deletes and substitutions (all at cost 1) required to turn the system output into the (single) reference text.
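For illustration, here is a small sketch of how DIST could be computed, assuming plain Levenshtein distance with unit costs and normalisation by the longer string (the exact normalisation used by the organisers may differ):

```python
# Sketch of the DIST metric described above: inverse, normalised,
# character-based string-edit distance with unit costs.
def dist_score(system: str, reference: str) -> float:
    m, n = len(system), len(reference)
    # dp[i][j] = edit distance between system[:i] and reference[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete everything
    for j in range(n + 1):
        dp[0][j] = j          # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if system[i - 1] == reference[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 1.0 - dp[m][n] / max(m, n, 1)
```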
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
BLEU, NIST, BERTScore and DIST simply aim at calculating, in different ways, the similarity between a predicted sentence and a reference sentence.
Two additional criteria were used for human evaluation: Readability and Meaning Similarity. The statement to be assessed in the Readability evaluation was: "The text reads well and is free from grammatical errors and awkward constructions." The corresponding statement in the Meaning Similarity evaluation, in which system outputs (‘the black text’) were compared to reference sentences (‘the gray text’), was: "The meaning of the gray text is adequately expressed by the black text."
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
Same as above.
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
- [Fast and Accurate Non-Projective Dependency Tree Linearization](https://aclanthology.org/2020.acl-main.134/)
- [Shape of Synth to Come: Why We Should Use Synthetic Data for English Surface Realization](https://aclanthology.org/2020.acl-main.665/)
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The datasets were created in the context of the Surface Realisation Shared Task series.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The dataset's objective was to allow for training systems to perform tasks related to surface realisation (introduction of function words, syntacticisation, resolution of morphological agreements, word order resolution, and inflection generation).
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
Each of the 20 UD datasets used comes from various sources, all listed on the individual page of each UD treebank (https://universaldependencies.org/).
Additional test sets were created for the task, and were obtained from Wikipedia pages for 6 languages.
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Multiple websites`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
There are numerous sources of language in the multiple datasets.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
There is a large variety of topics in the multiple datasets.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
The text data, which is tokenised in the original UD treebanks, was detokenised to create natural references, while tokenised references were kept for automatic evaluations (several languages don't use spaces to separate words, and running metrics like BLEU would not make sense without separating all the tokens in a sentence).
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
hybrid
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
For the Wikipedia test sets created for the shared task, extensive filtering was applied to achieve reasonably good text quality. Sentences that include special characters, contain unusual tokens (e.g. ISBN), or have unbalanced quotation marks or brackets were skipped. Furthermore, only sentences with more than 5 tokens and shorter than 50 tokens were selected. Since quite a few malformed sentences remained after this initial filtering, the sentences were scored with BERT and only the top-scoring half was kept. Finally, patterns and expressions identified via manual inspection were used to further reduce the number of malformed sentences.
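The surface checks above can be illustrated with a short sketch (not the organisers' actual script; the BERT-scoring step is omitted):

```python
# Illustrative sketch of the length and balance filters described above.
def keep_sentence(tokens: list) -> bool:
    if not (5 < len(tokens) < 50):              # more than 5, shorter than 50
        return False
    text = " ".join(tokens)
    if "ISBN" in text:                          # unusual-token heuristic
        return False
    if text.count('"') % 2 != 0:                # unbalanced quotation marks
        return False
    if text.count("(") != text.count(")"):      # unbalanced brackets
        return False
    return True
```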
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The Universal Dependency data had been previously used for shared tasks on parsing, so it made sense to reuse it for generation.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
unlikely
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
Thanks to the original work of the UD dataset creators, the surface realisation dataset addresses a few languages which are possibly under-served in NLG: e.g. Arabic, Hindi, Indonesian, Korean.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
It is very likely that the distribution of language producers is not fully represented in the datasets of each language.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
No risks foreseen.
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`multiple licenses`, `open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`multiple licenses`, `open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The deep track inputs (predicate-argument structures) are not of perfect quality: they were derived automatically from gold or predicted syntactic parses using handcrafted grammars.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The datasets are probably not suited to training tools that produce "unusual" language (e.g. poetry, children's writing, etc.).
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public. -->
<!-- scope: microscope -->
To be thought of :)
| 26,959 | […] |
Intel/WEC-Eng | 2021-10-04T11:21:48.000Z | [
"region:us"
] | Intel | null | null | 0 | 85 | 2022-03-02T23:29:22 | # WEC-Eng
A large-scale dataset for cross-document event coreference extracted from English Wikipedia.
- **Repository (Code for generating WEC):** https://github.com/AlonEirew/extract-wec
- **Paper:** https://aclanthology.org/2021.naacl-main.198/
### Languages
English
## Load Dataset
You can read in WEC-Eng files as follows (using the **huggingface_hub** library):
```python
from huggingface_hub import hf_hub_url, cached_download
import json

REPO_ID = "datasets/Intel/WEC-Eng"
splits_files = ["Dev_Event_gold_mentions_validated.json",
                "Test_Event_gold_mentions_validated.json",
                "Train_Event_gold_mentions.json"]

# Download each split file from the Hub and parse it as JSON
wec_eng = list()
for split_file in splits_files:
    wec_eng.append(json.load(open(cached_download(
        hf_hub_url(REPO_ID, split_file)), "r")))
```
## Dataset Structure
### Data Splits
- **Final version of the English CD event coreference dataset**<br>
- Train - Train_Event_gold_mentions.json
- Dev - Dev_Event_gold_mentions_validated.json
- Test - Test_Event_gold_mentions_validated.json
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Clusters | 7,042 | 233 | 322 |
| Event Mentions | 40,529 | 1,250 | 1,893 |
- **The non (within clusters) controlled version of the dataset (lexical diversity)**<br>
- All (experimental) - All_Event_gold_mentions_unfiltered.json
### Data Instances
```json
{
"coref_chain": 2293469,
"coref_link": "Family Values Tour 1998",
"doc_id": "House of Pain",
"mention_context": [
"From",
"then",
"on",
",",
"the",
"members",
"continued",
"their"
],
"mention_head": "Tour",
"mention_head_lemma": "Tour",
"mention_head_pos": "PROPN",
"mention_id": "108172",
"mention_index": 1,
"mention_ner": "UNK",
"mention_type": 8,
"predicted_coref_chain": null,
"sent_id": 2,
"tokens_number": [
50,
51,
52,
53
],
"tokens_str": "Family Values Tour 1998",
"topic_id": -1
}
```
### Data Fields
|Field|Value Type|Value|
|---|:---:|---|
|coref_chain|Numeric|Coreference chain/cluster ID|
|coref_link|String|Coreference link Wikipedia page/article title|
|doc_id|String|Mention page/article title|
|mention_context|List[String]|Tokenized mention paragraph (including mention)|
|mention_head|String|Mention span head token|
|mention_head_lemma|String|Mention span head token lemma|
|mention_head_pos|String|Mention span head token POS|
|mention_id|String|Mention id|
|mention_index|Numeric|Mention index in json file|
|mention_ner|String|Mention NER|
|tokens_number|List[Numeric]|Mention token ids within the context|
|tokens_str|String|Mention span text|
|topic_id|Ignore|Ignore|
|mention_type|Ignore|Ignore|
|predicted_coref_chain|Ignore|Ignore|
|sent_id|Ignore|Ignore|
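As a usage sketch (assuming each split file is a JSON array of mention objects like the instance shown above), mentions can be grouped into clusters with the `coref_chain` field; `wec_eng` is the list built in the loading snippet above, where index 2 holds the train split:

```python
from collections import defaultdict

# Group train mentions (index 2 in the load order above) by cluster id.
clusters = defaultdict(list)
for mention in wec_eng[2]:
    clusters[mention["coref_chain"]].append(mention["tokens_str"])
print(len(clusters), "clusters in train")
```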
## Citation
```
@inproceedings{eirew-etal-2021-wec,
title = "{WEC}: Deriving a Large-scale Cross-document Event Coreference dataset from {W}ikipedia",
author = "Eirew, Alon and
Cattan, Arie and
Dagan, Ido",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.198",
doi = "10.18653/v1/2021.naacl-main.198",
pages = "2498--2510",
abstract = "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic. To complement these resources and enhance future research, we present Wikipedia Event Coreference (WEC), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics. We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages. To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting. Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.",
}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
## Contact
If you have any questions please create a Github issue at https://github.com/AlonEirew/extract-wec. | 5,118 | [
[…] |
Ishwar/Senti | 2021-10-31T10:03:41.000Z | [
"region:us"
] | Ishwar | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
JIsanan/war-ceb-wikipedia | 2021-10-24T01:48:04.000Z | [
"region:us"
] | JIsanan | null | null | 0 | 85 | 2022-03-02T23:29:22 | annotations_creators: []
language_creators:
- found
languages:
- war
- ceb
licenses: []
multilinguality:
- multilingual
pretty_name: Waray Cebu Wikipedia
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: [] | 232 | [
[…] |
LysandreJik/push-to-hub | 2021-10-07T22:59:15.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/pushe-to-hub | 2021-10-07T23:23:34.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/pushedd-to-hub | 2021-10-07T23:25:03.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/random_repo | 2021-11-23T13:04:30.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16336477963335 | 2021-10-07T23:03:17.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16336478042515 | 2021-10-07T23:03:25.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16336479967338 | 2021-10-07T23:06:38.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16336480189315 | 2021-10-07T23:07:00.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16340052901609 | 2021-10-12T02:21:31.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344349332219 | 2021-10-17T01:42:14.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344349440339 | 2021-10-17T01:42:25.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344360501144 | 2021-10-17T02:00:51.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344361893586 | 2021-10-17T02:03:11.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344362261113 | 2021-10-17T02:03:47.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344362895458 | 2021-10-17T02:04:51.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344364230608 | 2021-10-17T02:07:04.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344364547167 | 2021-10-17T02:07:36.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344367190179 | 2021-10-17T02:12:00.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
LysandreJik/test-16344368182003 | 2021-10-17T02:13:40.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
JonathanSum/en_corpora_parliament_processed | 2022-02-22T17:18:03.000Z | [
"region:us"
] | JonathanSum | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
Karavet/ILUR-news-text-classification-corpus | 2022-10-21T16:06:12.000Z | [
"task_categories:text-classification",
"multilinguality:monolingual",
"language:hy",
"license:apache-2.0",
"region:us"
] | Karavet | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
language:
- hy
task_categories: [news-classification, text-classification]
multilinguality: [monolingual]
task_ids: [news-classification, text-classification]
license:
- apache-2.0
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [News Texts Dataset](#news-texts-dataset)
## News Texts Dataset
We release a dataset of over 12000 news articles from [iLur.am](http://www.ilur.am/), categorized into 7 classes: sport, politics, weather, economy, accidents, art, society. The articles are split into train (2242k tokens) and test sets (425k tokens).
For more details, refer to the [paper](https://arxiv.org/ftp/arxiv/papers/1906/1906.03134.pdf). | 669 | [
[…] |
Mcy/random_uselesstestsequence | 2021-12-09T08:54:02.000Z | [
"region:us"
] | Mcy | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
Mulin/my_third_dataset | 2021-09-19T01:36:15.000Z | [
"region:us"
] | Mulin | null | null | 0 | 85 | 2022-03-02T23:29:22 | My third Dataset
- for wolf classification | 42 | [
[…] |
Narsil/conversational_dummy | 2021-08-19T08:51:34.000Z | [
"region:us"
] | Narsil | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
NbAiLab/NPSC_test | 2022-11-07T12:37:31.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:nb",
"language:no",
"language:nn",
"license:cc0... | NbAiLab | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nb
- 'no'
- nn
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speech-modeling
pretty_name: NPSC
tags:
- speech-modeling
---
# Dataset Card for NBAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fiels)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
  - [Publish Period](#publish-period)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway.
NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed into either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, there is a significant amount of metadata in the original corpus. Through the speaker id there is additional information about the speaker, such as gender, age, and place of birth (i.e. dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings.
The corpus comprises sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences, or 1.2 million words.
This corpus is an adaptation of the original corpus made for efficient ASR training. For simplicity and portability, a few of the original dataset's features, like the token transcription, are omitted. You can find the complete dataset at [the Resource Catalogue at Språkbanken](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/).
## How to Use (This needs to be edited of course)
```python
from datasets import load_dataset
data = load_dataset("nb/NPSC", streaming=True)
```
## Data Fields
Currently there are two versions included in this repo.
### Version A
This version has a short list of the metadata and includes the audio (48k mp3) encoded as a float32 array in the dataset itself.
The current dataloader script is associated with this version.
One line in train.json looks like this:
```json
{
"sentence_id": 7309,
"sentence_order": 0,
"speaker_id": 1,
"speaker_name": "Marit Nybakk",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 302650,
"end_time": 306000,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {
"path": "audio/20170207-095506_302650_306000.wav",
"array": [
24,
25,
50,
(...)
],
"sampling_rate": 48000
}
}
```
### Version B
This version does not contain the audio encoded in the dataset. Instead, the audio files are placed in sub-directories. There are currently samples in both clips_48k_wav and clips_16k_mp3. Only the base filename is referenced in the dataset. Please note that there are both sentence-based audio clips and meeting-based audio clips. The dataset contains referrals to both; the latter referral has start and stop times as well.
One line in the train/metadata.json looks like this:
```json
{
"meeting_date": "20170207",
"full_audio_file": "20170207-095506",
"proceedings_file": "20170207-095506.ref",
"duration": 4442474,
"transcriber_id": 1,
"reviewer_id": 2,
"data_split": "test",
"speaker_name": "Marit Nybakk",
"speaker_id": 1,
"sentence_id": 7309,
"sentence_language_code": "nb-NO",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_order": 0,
"audio_file": "20170207-095506_302650_306000",
"start_time": 302650,
"end_time": 306000,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1
}
```
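As a usage sketch (not part of the official tooling; the file path is an assumption based on the directory names above), a sentence clip can be cut out of the full meeting recording with the millisecond `start_time`/`end_time` fields:

```python
import soundfile as sf

SR = 48000                          # sampling rate of the clips_48k_wav files
start_ms, end_ms = 302650, 306000   # from the metadata entry above

# Read only the frames belonging to the sentence from the meeting wav
# (path assumed from the fields above).
audio, sr = sf.read("clips_48k_wav/20170207-095506.wav",
                    start=start_ms * SR // 1000,
                    stop=end_ms * SR // 1000)
```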
### Dataset Creation
We are providing a **train**, **dev** and **test** split. These are the same as in the original corpus.
Build date: 20012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in the paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours |
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8% |
| | Bokmål: 87.2% |
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data and is allowed to be used outside the National Library of Norway for speech recognition technology purposes.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
[Per Erik Solberg](mailto:per.solberg@nb.no)
[Freddy Wetjen](mailto:Freddy.wetjen@nb.no), [Andre Kaasen](mailto:andre.kasen@nb.no) and [Per Egil Kummervold](mailto:per.kummervold@nb.no) have contributed to porting it to the Hugging Face Dataset format.
### Licensing Information
Licensed for use outside the National Library of Norway.
## License
CC-ZERO(https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
ANDRE: TO BE DONE
```
| 6,338 | […] |
PaulLerner/triviaqa_for_viquae | 2022-08-02T08:24:26.000Z | [
"region:us"
] | PaulLerner | null | null | 0 | 85 | 2022-03-02T23:29:22 | See https://github.com/PaulLerner/ViQuAE
Get the original dataset there: http://nlp.cs.washington.edu/triviaqa/ (or via HF: https://huggingface.co/datasets/trivia_qa) | 166 | [
[…] |
Recognai/ag_news_corrected_labels | 2021-12-29T17:00:24.000Z | [
"region:us"
] | Recognai | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
Recognai/corrected_labels_ag_news | 2021-12-29T16:57:56.000Z | [
"region:us"
] | Recognai | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
Recognai/veganuary | 2022-02-04T10:07:21.000Z | [
"region:us"
] | Recognai | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
Sabokou/qg_squad_modified_dev | 2021-12-30T10:35:48.000Z | [
"region:us"
] | Sabokou | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
SaulLu/toy_struc_dataset | 2021-09-22T12:26:40.000Z | [
"region:us"
] | SaulLu | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
Tahsin-Mayeesha/Bengali-SQuAD | 2022-10-25T09:06:50.000Z | [
"task_categories:question-answering",
"multilinguality:monolingual",
"language:bn",
"region:us"
] | Tahsin-Mayeesha | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
language:
- bn
multilinguality:
- monolingual
task_categories:
- question-answering
---
# Overview
This dataset contains the data for the paper [Deep learning based question answering system in Bengali](https://www.tandfonline.com/doi/full/10.1080/24751839.2020.1833136). It is a version of the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset translated into the Bengali language. Preprocessing details can be found in the paper. | 442 | [
[…] |
Zoe10/ner_dataset | 2021-12-14T11:13:54.000Z | [
"region:us"
] | Zoe10 | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/crowdsourced-speech-demo | 2022-04-28T08:13:52.000Z | [
"region:us"
] | abidlabs | null | null | 1 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/crowdsourced-speech2 | 2022-01-21T15:44:22.000Z | [
"region:us"
] | abidlabs | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/crowdsourced-speech5 | 2022-01-21T16:38:34.000Z | [
"region:us"
] | abidlabs | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/crowdsourced-speech7 | 2022-01-21T17:21:47.000Z | [
"region:us"
] | abidlabs | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/test-audio-1 | 2022-01-19T16:26:19.000Z | [
"region:us"
] | abidlabs | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/test-audio-13 | 2022-01-21T16:42:41.000Z | [
"region:us"
] | abidlabs | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/test-image-13 | 2022-01-19T19:33:09.000Z | [
"region:us"
] | abidlabs | null | null | 1 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
abidlabs/test-image-classifier-dataset | 2021-12-23T19:41:31.000Z | [
"region:us"
] | abidlabs | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
addy88/sanskrit-asr-84 | 2021-12-14T13:39:37.000Z | [
"region:us"
] | addy88 | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[…] |
akumar33/manufacturing | 2021-10-14T04:51:48.000Z | [
"region:us"
] | akumar33 | null | null | 1 | 85 | 2022-03-02T23:29:22 | This dataset is associated with FabNER paper. https://link.springer.com/article/10.1007/s10845-021-01807-x
Kindly cite if you use it. | 133 | [
[…] |
alperbayram/Tweet_Siniflandirma | 2022-10-25T10:02:12.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"size_categories:unknown",
"language:tr",
"region:us"
] | alperbayram | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
language:
- tr
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Turkish Sentiment Dataset
---
# References
- [alper bayram](https://github.com/alperbayram)
| 339 | […] |
bhavnicksm/sentihood | 2022-10-25T09:07:23.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:multi-class-classification",
"task_ids:natural-language-inference",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:1610.03771",
... | bhavnicksm | null | null | 3 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: SentiHood Dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- multi-class-classification
- natural-language-inference
---
# Dataset Card for SentiHood
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** https://arxiv.org/abs/1610.03771
- **Leaderboard:** https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-sentihood
### Dataset Summary
Created as a part of the paper "SentiHood: Targeted Aspect Based Sentiment Analysis Dataset for Urban Neighbourhoods" by Saeidi et al.
#### Abstract
In this paper, we introduce the task of targeted aspect-based sentiment analysis. The goal is to extract fine-grained information with respect to entities mentioned in user comments. This work extends both aspect-based sentiment analysis that assumes a single entity per document and targeted sentiment analysis that assumes a single sentiment towards a target entity. In particular, we identify the sentiment towards each aspect of one or more entities. As a testbed for this task, we introduce the SentiHood dataset, extracted from a question answering (QA) platform where urban neighborhoods are discussed by users. In this context units of text often mention several aspects of one or more neighborhoods. This is the first time that a generic social media platform in this case a QA platform, is used for fine-grained opinion mining. Text coming from QA platforms is far less constrained compared to text from review-specific platforms on which current datasets are based. We develop several strong baselines, relying on logistic regression and state-of-the-art recurrent neural networks.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Monolingual (only English)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@Bhavnicksm](https://github.com/Bhavnicksm) for adding this dataset. | 4,085 | [
[…] |
castorini/msmarco_v1_doc_segmented_doc2query-t5_expansions | 2021-11-10T04:51:35.000Z | [
"language:English",
"license:Apache License 2.0",
"region:us"
] | castorini | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
language:
- English
license: "Apache License 2.0"
---
# Dataset Summary
The repo provides queries generated for the MS MARCO V1 document segmented corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model.
# Dataset Structure
All three folds (train, dev and test) share the same corpus.
An example data entry looks as follows:
```
{
"id": "D1555982#0", "predicted_queries": ["when find radius of star r", "what is r radius", "how to find out radius of star", "what is radius r", "what is radius of r", "how do you find radius of star igel", "which law states that radiation is proportional to radiation?", "what is the radius of a spherical star", "what is the radius of the star", "what is radius of star"]
}
```
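As a sketch of the expansion step described above (the helper below is ours, not part of the repo), each segment's text is simply concatenated with its predicted queries before indexing:

```python
def expand_document(doc_text: str, entry: dict) -> str:
    # Append the predicted queries to the original segment text,
    # as in the doc2query/docTTTTTquery indexing recipe.
    return doc_text + " " + " ".join(entry["predicted_queries"])
```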
# Load Dataset
An example to load the dataset:
```python
from datasets import load_dataset

dataset = load_dataset('castorini/msmarco_v1_doc_segmented_doc2query-t5_expansions')
```
# Citation Information
```
@article{docTTTTTquery,
title={From doc2query to {docTTTTTquery}},
author={Nogueira, Rodrigo and Lin, Jimmy},
year={2019}
}
@article{emdt5,
author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin",
title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models",
journal = "arXiv:2101.05667",
year = 2021,
}
```
| 1,749 | […] |
chenghao/scielo_books | 2022-07-01T18:34:59.000Z | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:pt",
"language:es",
"license:cc-by-nc-sa-3.0",
"region:us"
] | chenghao | null | null | 0 | 85 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- pt
- es
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
paperswithcode_id: null
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
## Dataset Description
- **Homepage:** [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f)
### Dataset Summary
This dataset contains all text from open-access PDFs on [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f). As of Dec. 5, 2021, 962 books are available in total. Note, however, that some of them are not native PDFs (e.g., they are scanned images).
### Supported Tasks and Leaderboards
- `sequence-modeling` or `language-modeling`: The dataset can be used to train a language model.
### Languages
As of Dec. 5 2021, there are 902 books in Portuguese, 55 in Spanish, and 5 in English.
## Dataset Structure
### Data Instances
A JSON-formatted example of a typical instance in the dataset:
```
{
"sbid":"23pcw",
"id":"23pcw",
"shortname":"",
"title":"Educa\u00e7\u00e3o, sa\u00fade e esporte: novos\tdesafios \u00e0 Educa\u00e7\u00e3o F\u00edsica",
"eisbn":"9788574554907",
"isbn":"9788574554273",
"author":"Farias, Gelcemar Oliveira; Nascimento, Juarez Vieira do",
"corporate_authors":"",
"translators":"",
"coordinators":"",
"editors":"",
"others":"",
"organizers":"",
"collaborators":"",
"publisher":"Editus",
"language":"pt",
"year": 2016,
"synopsis":"\"A colet\u00e2nea contempla cap\u00edtulos que discutem a Educa\u00e7\u00e3o F\u00edsica a partir dos pressupostos da Educa\u00e7\u00e3o, da Sa\u00fade e do Esporte, enquanto importante desafio do momento atual e diante dos avan\u00e7os e das mudan\u00e7as que se consolidaram na forma\u00e7\u00e3o inicial em Educa\u00e7\u00e3o F\u00edsica. A obra convida a todos para a realiza\u00e7\u00e3o de futuras investiga\u00e7\u00f5es, no sentido de concentrar esfor\u00e7os para o fortalecimento de n\u00facleos de estudos e a sistematiza\u00e7\u00e3o de linhas de pesquisa.\"",
"format":"",
"type":"book",
"is_public":"true",
"is_comercial":"false",
"publication_date":"2018-11-07",
"_version_":"1718206093473087488",
"pdf_url":"http://books.scielo.org//id/23pcw/pdf/farias-9788574554907.pdf",
"pdf_filename":"farias-9788574554907.pdf",
"metadata_filename":"farias-9788574554907.json",
"text":"..."
}
```
### Data Fields
All fields are of string type except `year`.
### Data Splits
All records are in the default `train` split.
## Dataset Creation
### Curation Rationale
Part of the BigScience effort to create language modeling datasets.
### Source Data
[scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f)
#### Initial Data Collection and Normalization
All PDFs are downloaded directly from the website, and text is extracted with the [pdftotext](https://pypi.org/project/pdftotext/) library, as sketched below.
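A minimal extraction sketch (the filename comes from the example instance above; joining pages with blank lines is an assumption, and the curator's exact settings are not documented):
```python
import pdftotext

# Read one downloaded PDF and extract its text, page by page.
with open("farias-9788574554907.pdf", "rb") as f:
    pdf = pdftotext.PDF(f)

text = "\n\n".join(pdf)  # pages joined into the `text` field
```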
#### Who are the source language producers?
NA
### Annotations
No annotation is available.
#### Annotation process
NA
#### Who are the annotators?
NA
### Personal and Sensitive Information
NA
## Considerations for Using the Data
### Social Impact of Dataset
NA
### Discussion of Biases
NA
### Other Known Limitations
NA
## Additional Information
### Dataset Curators
[@chenghao](https://huggingface.co/chenghao)
### Licensing Information
[CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/)
### Contributions
NA | 3,824 | [
[
-0.01541900634765625,
-0.0242919921875,
0.01409149169921875,
0.0195770263671875,
-0.00910186767578125,
-0.00917816162109375,
-0.00560760498046875,
-0.01541900634765625,
0.0141448974609375,
0.045654296875,
-0.03594970703125,
-0.06744384765625,
-0.038055419921875,... |
dataset/wikipedia_bn | 2021-06-04T16:22:44.000Z | [
"region:us"
] | dataset | Bengali Wikipedia from the dump of 03/20/2021.
The data was processed using the Hugging Face datasets wikipedia script in early April 2021.
The dataset was built from the Wikipedia dump (https://dumps.wikimedia.org/).
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | 1 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.057159423828125,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.0379638... |
davanstrien/beyond_test | 2022-02-20T14:20:55.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.057159423828125,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.0379638... |
davanstrien/embellishments | 2022-01-10T16:59:02.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
davanstrien/test_iiif | 2022-01-08T14:52:27.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
davanstrien/testpush | 2022-01-05T20:30:22.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
dvilasuero/test-dataset | 2021-12-29T14:53:03.000Z | [
"region:us"
] | dvilasuero | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
flxclxc/encoded_drug_reviews | 2022-02-04T14:25:31.000Z | [
"region:us"
] | flxclxc | null | null | 3 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
gcaillaut/pubmed | 2021-10-21T15:39:47.000Z | [
"region:us"
] | gcaillaut | The Pubmed Diabetes dataset consists of 19717 scientific publications from PubMed database pertaining to diabetes classified into one of three classes. The citation network consists of 44338 links. Each publication in the dataset is described by a TF/IDF weighted word vector from a dictionary which consists of 500 unique words. The README file in the dataset provides more details. | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
gfigueroa/wikitext_processed | 2022-01-19T18:16:40.000Z | [
"region:us"
] | gfigueroa | null | null | 0 | 85 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
DDSC/dagw_reddit_filtered_v1.0.0 | 2022-11-06T15:30:56.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:DDSC/partial-danish-gigaword-no-twitter",
"source_datasets:DDSC/reddit-da",
"language:da... | DDSC | null | null | 1 | 85 | 2022-05-11T13:46:39 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- DDSC/partial-danish-gigaword-no-twitter
- DDSC/reddit-da
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Gigaword Corpus, Reddit (filtered)
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
# Danish Gigaword Corpus, Reddit (filtered)
*Version*: 1.0.0
*License*: See the respective dataset
This dataset is a variant of the Danish Gigaword [3] that excludes the sections containing tweets and the modified news contained in danavis20.
Twitter was excluded because it was a sample of a dataset available only to the authors.
DanAvis20 (or danavis) was excluded due to the preprocessing described in [3] (version 1 on [arxiv](https://arxiv.org/pdf/2005.03521v1.pdf)), which includes shuffling of sentences, pseudonymization of proper nouns, and the replacement of infrequent content words with statistical cognates, and which could lead to sentences such as *"Der er skilsmissesager i forsikringsselskabet"*.
Additionally, this dataset includes the [reddit-da](https://huggingface.co/datasets/DDSC/reddit-da) dataset, which contributes 1,908,887 documents. Low-quality text was removed using a series of heuristic filters, and following filtering, DAGW$_{DFM}$ is deduplicated to remove exact and near-duplicates. For more on data cleaning, see the Processing section below.
The dataset included 1,310,789,818 tokens before filtering and 833,664,528 (about 64%) after.
# Dataset information
This is a composite dataset consisting of Danish Gigaword and
[reddit-da](https://huggingface.co/datasets/DDSC/reddit-da). Thus it does not contain its own documentation. For more information, we recommend checking the documentation of the
respective datasets.
### Motivation:
**For what purpose was the dataset created? Who created the dataset? Who funded the
creation of the dataset?**
This dataset was created with the purpose of pre-training Danish language models. It was created by a team of
researchers at the Center for Humanities Computing Aarhus (CHCAA) using a codebase jointly
developed with partners from industry and academia, e.g. KMD, Ekstra Bladet, deepdivr,
and Bristol University. For more on collaborators on this project see
the [GitHub repository](https://github.com/centre-for-humanities-computing/danish-foundation-models).
## Processing
### Quality Filter:
DAGW$_{DFM}$ applies a filter akin to [2] (a minimal sketch follows the list). It keeps documents that:
- Contain at least 2 Danish stopwords. For the stopword list, we use the one used in
SpaCy v.3.1.4.
- Have a mean word length between 3 and 10.
- Have a token length between 50 and 100,000.
- Contain fewer than 5,000,000 characters.
- Among all words, at least 60% have at least one alphabetic character.
- Have a symbol-to-word ratio lower than 10% for hashtags and ellipsis.
- Have fewer than 90% of lines starting with a bullet point.
- Have fewer than 30% of lines ending with an ellipsis.
- Have a low degree of repetitious text:
- Fewer than 30% duplicate lines.
- Fewer than 30% duplicate paragraphs.
- Fewer than 30% of characters are contained within duplicate lines.
- The top 2-4 grams constitute less than 20%, 18%, and 16% of characters, respectively.
- For each document, 5-10 grams occurring more than once constitute less than 15%, 14%, 13%, 12%, 11%, and 10% of the characters, respectively.
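A minimal sketch of a few of the checks above (an assumed re-implementation, not the actual filter code; the stopword set is abbreviated and whitespace-split words stand in for tokens):
```python
DANISH_STOPWORDS = {"og", "i", "jeg", "det", "at", "en", "den", "til"}  # abbreviated

def passes_quality_filter(text: str) -> bool:
    words = text.split()
    if not 50 <= len(words) <= 100_000:        # token-length bounds
        return False
    if len(text) >= 5_000_000:                 # character cap
        return False
    mean_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_len <= 10:                # mean word length
        return False
    if sum(w.lower() in DANISH_STOPWORDS for w in words) < 2:
        return False                           # too few Danish stopwords
    with_alpha = sum(any(c.isalpha() for c in w) for w in words)
    return with_alpha / len(words) >= 0.6      # alphabetic-word ratio
```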
### Deduplication
The deduplication removed all documents with a 13-gram similarity higher than 80%
following the MinHash algorithm [1] using 128 permutations. The MinHash algorithm is a
probabilistic data structure for approximating the Jaccard similarity between two sets.
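A sketch of this step using the `datasketch` library (the library choice and the whitespace shingling are assumptions; the 13-gram shingles, 0.8 threshold, and 128 permutations match the description above):
```python
from datasketch import MinHash, MinHashLSH

def minhash_13grams(text: str, num_perm: int = 128) -> MinHash:
    tokens = text.split()
    m = MinHash(num_perm=num_perm)
    # Hash each 13-gram shingle (short documents yield a single shingle).
    for i in range(max(len(tokens) - 12, 1)):
        m.update(" ".join(tokens[i:i + 13]).encode("utf8"))
    return m

corpus = {"doc1": "...", "doc2": "..."}  # id -> document text
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for doc_id, text in corpus.items():
    m = minhash_13grams(text)
    if not lsh.query(m):        # no near-duplicate kept so far
        lsh.insert(doc_id, m)
        kept.append(doc_id)
```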
# References:
- [1] Broder, Andrei Z. "On the resemblance and containment of documents."
Proceedings. Compression and Complexity of SEQUENCES 1997
(Cat. No. 97TB100171). IEEE, 1997.
- [2] Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F.,
Aslanides, J., Henderson, S., Ring, R., Young, S., Rutherford, E., Hennigan,
T., Menick, J., Cassirer, A., Powell, R., Driessche, G. van den, Hendricks,
L. A., Rauh, M., Huang, P.-S., … Irving, G. (2021).
Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
https://arxiv.org/abs/2112.11446v2
- [3] Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H.,
Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A.,
Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L.,
Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword corpus. Proceedings of the
23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421.
https://aclanthology.org/2021.nodalida-main.46
### Citation
If you wish to cite this work, please see the GitHub page for an up-to-date citation:
https://github.com/centre-for-humanities-computing/danish-foundation-models
| 5,056 | [
[
-0.041900634765625,
-0.056488037109375,
0.0291595458984375,
0.020263671875,
-0.033477783203125,
-0.0021533966064453125,
-0.0243682861328125,
-0.0295867919921875,
0.036712646484375,
0.043243408203125,
-0.0274658203125,
-0.059295654296875,
-0.05108642578125,
0... |
WorkInTheDark/FairytaleQA | 2023-08-22T18:49:30.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"education",
"children education",
"region:us"
] | WorkInTheDark | FairytaleQA dataset, an open-source dataset focusing on comprehension of narratives, targeting students from kindergarten to eighth grade. The FairytaleQA dataset is annotated by education experts based on an evidence-based theoretical framework. It consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations. | @inproceedings{xu-etal-2022-fantastic,
title = "Fantastic Questions and Where to Find Them: {F}airytale{QA} {--} An Authentic Dataset for Narrative Comprehension",
author = "Xu, Ying and
Wang, Dakuo and
Yu, Mo and
Ritchie, Daniel and
Yao, Bingsheng and
Wu, Tongshuang and
Zhang, Zheng and
Li, Toby and
Bradford, Nora and
Sun, Branda and
Hoang, Tran and
Sang, Yisi and
Hou, Yufang and
Ma, Xiaojuan and
Yang, Diyi and
Peng, Nanyun and
Yu, Zhou and
Warschauer, Mark",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.34",
doi = "10.18653/v1/2022.acl-long.34",
pages = "447--460",
abstract = "Question answering (QA) is a fundamental means to facilitate assessment and training of narrative comprehension skills for both machines and young children, yet there is scarcity of high-quality QA datasets carefully designed to serve this purpose. In particular, existing datasets rarely distinguish fine-grained reading skills, such as the understanding of varying narrative elements. Drawing on the reading education research, we introduce FairytaleQA, a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations. Our dataset is valuable in two folds: First, we ran existing QA models on our dataset and confirmed that this annotation helps assess models{'} fine-grained learning skills. Second, the dataset supports question generation (QG) task in the education domain. Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.",
} | 1 | 85 | 2022-05-18T19:11:00 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- education
- children education
---
# Dataset Card for FairytaleQA
## Dataset Description
- **Homepage:**
- **Repository:**
https://github.com/uci-soe/FairytaleQAData
https://github.com/WorkInTheDark/FairytaleQA_Dataset
- **Paper:**
https://aclanthology.org/2022.acl-long.34/
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the repository for the FairytaleQA dataset, an open-source dataset focusing on comprehension of narratives, targeting students from kindergarten to eighth grade. The FairytaleQA dataset is annotated by education experts based on an evidence-based theoretical framework. It consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations.
### Supported Tasks and Leaderboards
Question-Answering, Question-Generation, Question-Answer Pair Generation
### Languages
English
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```
{
'story_name': 'three-dogs',
'story_section': 'once upon a time there was a king who went forth into the world and
... ...
guards to watch over the little princess so that she would not get out under the open sky .',
'question': 'why was there great rejoicing in the city and throughout the country ?',
'answer1': 'the people wished their king all that was good .',
'answer2': '',
'local-or-sum': 'local',
'attribute': 'causal relationship',
'ex-or-im': 'explicit',
'ex-or-im2': '',
}
```
### Data Fields
- **'story_name'**: story name
- **'story_section'**: story section related to the QA-pair
- **'question'**: the question content
- **'answer1'**: the 1st answer (available in all splits)
- **'answer2'**: the 2nd answer by another annotator (only available in test / val splits)
- **'local-or-sum'**: 'local' denotes the question is related to only one story section, while 'summary' denotes the question is related to multiple story sections
- **'attribute'**: categorized by education experts into seven narrative elements: character / setting / action / feeling / causal relationship / outcome resolution / prediction; detailed definitions are given in the paper
- **'ex-or-im'**: 'explicit' denotes the answer can be found in the story content, while 'implicit' denotes the answer require high-level summarization
- **'ex-or-im2'**: similar to 'ex-or-im', but annotated by another annotator (only available for stories in test / val splits)
### Data Splits
- train split: 232 books with 8548 QA-pairs
- val split: 23 books with 1025 QA-pairs
- test split: 23 books with 1007 QA-pairs
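A minimal loading sketch (assuming the standard 🤗 Datasets API and this repository id):
```python
from datasets import load_dataset

dataset = load_dataset("WorkInTheDark/FairytaleQA")
example = dataset["train"][0]
print(example["question"], example["answer1"])
```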
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Our Dataset Paper is accepted to ACL 2022, you may cite:
```
@inproceedings{xu2022fairytaleqa,
author={Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Bradford, Nora and Sun, Branda and Hoang, Tran Bao and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark},
title = {Fantastic Questions and Where to Find Them: Fairytale{QA} -- An Authentic Dataset for Narrative Comprehension},
publisher = {Association for Computational Linguistics},
year = {2022}
}
```
### Contributions
[More Information Needed] | 4,225 | [
[
-0.0328369140625,
-0.065185546875,
0.0118255615234375,
0.0229949951171875,
-0.005222320556640625,
0.0028629302978515625,
0.01396942138671875,
-0.038848876953125,
0.0200958251953125,
0.040771484375,
-0.0648193359375,
-0.0462646484375,
-0.020782470703125,
0.01... |
GroNLP/divemt | 2023-02-10T11:04:33.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:it",
"language:vi",
"language:nl",
"langu... | GroNLP | DivEMT is the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times, pauses, and perceived effort were logged, enabling an in-depth, cross-lingual evaluation of NMT quality and its post-editing process. | @inproceedings{sarti-etal-2022-divemt,
title = "{D}iv{EMT}: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages",
author = "Sarti, Gabriele and Bisazza, Arianna and Guerberof Arenas, Ana and Toral, Antonio",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.532",
pages = "7795--7816",
} | 2 | 85 | 2022-05-23T19:56:55 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
- it
- vi
- nl
- uk
- tr
- ar
license:
- gpl-3.0
multilinguality:
- translation
pretty_name: divemt
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
---
# Dataset Card for DivEMT
*For more details on DivEMT, see our [EMNLP 2022 Paper](https://arxiv.org/abs/2205.12215) and our [Github repository](https://github.com/gsarti/divemt)*
## Dataset Description
- **Source:** [Github](https://github.com/gsarti/divemt)
- **Paper:** [Arxiv](https://arxiv.org/abs/2205.12215)
- **Point of Contact:** [Gabriele Sarti](mailto:g.sarti@rug.nl)
[Gabriele Sarti](https://gsarti.com) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Antonio Toral](https://antoniotor.al/)
<img src="https://huggingface.co/datasets/GroNLP/divemt/resolve/main/divemt.png" alt="DivEMT annotation pipeline" width="600"/>
>We introduce DivEMT, the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times and pauses were recorded, enabling an in-depth, cross-lingual evaluation of NMT quality and post-editing effectiveness. Using this new dataset, we assess the impact of two state-of-the-art NMT systems, Google Translate and the multilingual mBART-50 model, on translation productivity. We find that post-editing is consistently faster than translation from scratch. However, the magnitude of productivity gains varies widely across systems and languages, highlighting major disparities in post-editing effectiveness for languages at different degrees of typological relatedness to English, even when controlling for system architecture and training data size. We publicly release the complete dataset including all collected behavioral data, to foster new research on the translation capabilities of NMT systems for typologically diverse languages.
### Dataset Summary
This dataset contains the processed `warmup` and `main` splits of the DivEMT dataset. A sample of documents extracted from the Flores-101 corpus was either translated from scratch or post-edited from an existing automatic translation by a total of 18 professional translators across six typologically diverse languages (Arabic, Dutch, Italian, Turkish, Ukrainian, Vietnamese). During the translation, behavioral data (keystrokes, pauses, editing times) were collected using the [PET](https://github.com/wilkeraziz/PET) platform.
We publicly release the processed dataset, including all collected behavioral data, to foster new research on the ability of state-of-the-art NMT systems to generate text in typologically diverse languages.
### News 🎉
**February, 2023**: The DivEMT dataset now contains linguistic annotations (`*_annotations` fields) computed with Stanza and word-level quality estimation tags (`src_wmt22_qe`, `mt_wmt22_qe`) obtained using the same scripts adopted for the WMT22 QE Task 2.
### Languages
The language data of DivEMT is in English (BCP-47 `en`), Italian (BCP-47 `it`), Dutch (BCP-47 `nl`), Arabic (BCP-47 `ar`), Turkish (BCP-47 `tr`), Ukrainian (BCP-47 `uk`) and Vietnamese (BCP-47 `vi`)
## Dataset Structure
### Data Instances
The dataset contains two configurations: `main` and `warmup`. `main` contains the full data collected during the main task and analyzed during our experiments. `warmup` contains the data collected in the verification phase, before the main task begins.
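A minimal loading sketch for the two configs (assuming the standard 🤗 Datasets API):
```python
from datasets import load_dataset

main = load_dataset("GroNLP/divemt", "main")
warmup = load_dataset("GroNLP/divemt", "warmup")
print(main["train"][0]["unit_id"])
```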
### Data Fields
The following fields are contained in the training set:
|Field|Description|
|-----|-----------|
|`unit_id` | The full entry identifier. Format: `flores101-{config}-{lang}-{doc_id}-{modality}-{sent_in_doc_num}` |
|`flores_id` | Index of the sentence in the original [Flores-101](https://huggingface.co/datasets/gsarti/flores_101) dataset |
|`item_id` | The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 contiguous sentences each. |
|`subject_id` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. |
|`lang_id` | Language identifier for the sentence, using Flores-101 three-letter format (e.g. `ara`, `nld`)|
|`doc_id` | Document identifier for the sentence |
|`task_type` | The modality of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART 1-to-50](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). |
|`translation_type` | Either `ht` for from scratch or `pe` for post-editing |
|`src_len_chr` | Length of the English source text in number of characters |
|`mt_len_chr` | Length of the machine translation in number of characters (NaN for ht) |
|`tgt_len_chr` | Length of the target text in number of characters |
|`src_len_wrd` | Length of the English source text in number of words |
|`mt_len_wrd` | Length of the machine translation in number of words (NaN for ht) |
|`tgt_len_wrd` | Length of the target text in number of words |
|`edit_time` | Total editing time for the translation in seconds. |
|`k_total` | Total number of keystrokes for the translation. |
|`k_letter` | Total number of letter keystrokes for the translation. |
|`k_digit` | Total number of digit keystrokes for the translation. |
|`k_white` | Total number of whitespace keystrokes for the translation. |
|`k_symbol` | Total number of symbol (punctuation, etc.) keystrokes for the translation. |
|`k_nav` | Total number of navigation keystrokes (left-right arrows, mouse clicks) for the translation. |
|`k_erase` | Total number of erase keystrokes (backspace, cancel) for the translation. |
|`k_copy` | Total number of copy (Ctrl + C) actions during the translation. |
|`k_cut` | Total number of cut (Ctrl + X) actions during the translation. |
|`k_paste` | Total number of paste (Ctrl + V) actions during the translation. |
|`k_do` | Total number of Enter actions during the translation. |
|`n_pause_geq_300` | Number of pauses of 300ms or more during the translation. |
|`len_pause_geq_300` | Total duration of pauses of 300ms or more, in milliseconds. |
|`n_pause_geq_1000` | Number of pauses of 1s or more during the translation. |
|`len_pause_geq_1000` | Total duration of pauses of 1000ms or more, in milliseconds. |
|`event_time` | Total time summed across all translation events, should be comparable to `edit_time` in most cases. |
|`num_annotations` | Number of times the translator focused the textbox for performing the translation of the sentence during the translation session. E.g. 1 means the translation was performed once and never revised. |
|`n_insert` | Number of post-editing insertions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`n_delete` | Number of post-editing deletions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`n_substitute` | Number of post-editing substitutions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`n_shift` | Number of post-editing shifts (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`tot_shifted_words` | Total amount of shifted words from all shifts present in the sentence. |
|`tot_edits` | Total of all edit types for the sentence. |
|`hter` | Human-mediated Translation Edit Rate score computed between MT and post-edited TGT (empty for modality `ht`) using the [tercom](https://github.com/jhclark/tercom) library. |
|`cer` | Character-level HTER score computed between MT and post-edited TGT (empty for modality `ht`) using [CharacTER](https://github.com/rwth-i6/CharacTER).
|`bleu` | Sentence-level BLEU score between MT and post-edited TGT (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
|`chrf` | Sentence-level chrF score between MT and post-edited TGT (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
|`time_s` | Edit time expressed in seconds. |
|`time_m` | Edit time expressed in minutes. |
|`time_h` | Edit time expressed in hours. |
|`time_per_char` | Edit time per source character, expressed in seconds. |
|`time_per_word` | Edit time per source word, expressed in seconds. |
|`key_per_char` | Proportion of keys per character needed to perform the translation. |
|`words_per_hour` | Amount of source words translated or post-edited per hour. |
|`words_per_minute` | Amount of source words translated or post-edited per minute. |
|`per_subject_visit_order` | Id denoting the order in which the translator accessed documents. 1 corresponds to the first accessed document. |
|`src_text` | The original source sentence extracted from Wikinews, wikibooks or wikivoyage. |
|`mt_text` | Missing if `task_type` is `ht`. Otherwise, contains the automatically-translated sentence before post-editing. |
|`tgt_text` | Final sentence produced by the translator (either via translation from scratch of `src_text` or post-editing of `mt_text`) |
|`aligned_edit` | Aligned visual representation of REF (`mt_text`), HYP (`tgt_text`) and edit operations (I = Insertion, D = Deletion, S = Substitution) performed on the field. Replace `\\n` with `\n` to show the three aligned rows.|
|`src_tokens` | List of tokens obtained tokenizing `src_text` with Stanza using default params. |
|`src_annotations` | List of lists (one per `src_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
|`mt_tokens` | List of tokens obtained tokenizing `mt_text` with Stanza using default params. |
|`mt_annotations` | List of lists (one per `mt_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
|`tgt_tokens` | List of tokens obtained tokenizing `tgt_text` with Stanza using default params. |
|`tgt_annotations` | List of lists (one per `tgt_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
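As an illustration, the `bleu` and `chrf` fields above can be reproduced along these lines with SacreBLEU defaults (a sketch; treating the post-edited target as the hypothesis and the MT output as the reference is an assumption):
```python
from sacrebleu.metrics import BLEU, CHRF

mt_text = "Bir örnek olarak, Orta Doğu'daki Amerikan vatandaşları ..."
tgt_text = "Örneğin, Orta Doğu'daki Amerikan vatandaşları ..."

bleu = BLEU(effective_order=True).sentence_score(tgt_text, [mt_text]).score
chrf = CHRF().sentence_score(tgt_text, [mt_text]).score
```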
### Data Splits
| config | train|
|-------:|-----:|
|`main` | 7740 (107 docs i.e. 430 sents x 18 translators) |
|`warmup`| 360 (5 docs i.e. 20 sents x 18 translators) |
#### Train Split
The `train` split contains all triplets (or pairs, when translation is performed from scratch), annotated with the behavioral data produced during translation.
The following is an example of subject `t1` post-editing a machine translation produced by Google Translate (task_type `pe1`), taken from the `train` split for Turkish. The field `aligned_edit` is shown over three lines to provide a visual understanding of its contents.
```json
{
'unit_id': 'flores101-main-tur-46-pe1-3',
'flores_id': 871,
'item_id': 'flores101-main-463',
'subject_id': 'tur_t1',
'task_type': 'pe1',
'translation_type': 'pe',
'src_len_chr': 109,
'mt_len_chr': 129.0,
'tgt_len_chr': 120,
'src_len_wrd': 17,
'mt_len_wrd': 15.0,
'tgt_len_wrd': 13,
'edit_time': 11.762999534606934,
'k_total': 31,
'k_letter': 9,
'k_digit': 0,
'k_white': 0,
'k_symbol': 0,
'k_nav': 20,
'k_erase': 2,
'k_copy': 0,
'k_cut': 0,
'k_paste': 0,
'k_do': 0,
'n_pause_geq_300': 2,
'len_pause_geq_300': 4986,
'n_pause_geq_1000': 1,
'len_pause_geq_1000': 4490,
'event_time': 11763,
'num_annotations': 2,
'last_modification_time': 1643569484,
'n_insert': 0.0,
'n_delete': 2.0,
'n_substitute': 1.0,
'n_shift': 0.0,
'tot_shifted_words': 0.0,
'tot_edits': 3.0,
'hter': 20.0,
'cer': 0.10,
'bleu': 0.0,
'chrf': 2.569999933242798,
'lang_id': 'tur',
'doc_id': 46,
'time_s': 11.762999534606934,
'time_m': 0.1960500031709671,
'time_h': 0.0032675000838935375,
'time_per_char': 0.1079174280166626,
'time_per_word': 0.6919412016868591,
'key_per_char': 0.2844036817550659,
'words_per_hour': 5202.75439453125,
'words_per_minute': 86.71257019042969,
'per_subject_visit_order': 201,
'src_text': 'As one example, American citizens in the Middle East might face different situations from Europeans or Arabs.',
'mt_text': "Bir örnek olarak, Orta Doğu'daki Amerikan vatandaşları, Avrupalılardan veya Araplardan farklı durumlarla karşı karşıya kalabilir.",
'tgt_text': "Örneğin, Orta Doğu'daki Amerikan vatandaşları, Avrupalılardan veya Araplardan farklı durumlarla karşı karşıya kalabilir.",
'aligned_edit': "REF: bir örnek olarak, orta doğu'daki amerikan vatandaşları, avrupalılardan veya araplardan farklı durumlarla karşı karşıya kalabilir.\\n
HYP: *** ***** örneğin, orta doğu'daki amerikan vatandaşları, avrupalılardan veya araplardan farklı durumlarla karşı karşıya kalabilir.\\n
EVAL: D D S"
}
```
The text is provided as-is, without further preprocessing or tokenization.
### Dataset Creation
The dataset was parsed from PET XML files into CSV format using the scripts available in the [DivEMT Github repository](https://github.com/gsarti/divemt).
These scripts are adapted from the ones by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral-ruiz) found at the following link: [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers).
## Additional Information
### Dataset Curators
For problems related to this 🤗 Datasets version, please contact me at [g.sarti@rug.nl](mailto:g.sarti@rug.nl).
### Citation Information
```bibtex
@inproceedings{sarti-etal-2022-divemt,
title = "{D}iv{EMT}: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages",
author = "Sarti, Gabriele and
Bisazza, Arianna and
Guerberof-Arenas, Ana and
Toral, Antonio",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.532",
pages = "7795--7816",
}
``` | 14,687 | [
[
-0.02337646484375,
-0.04840087890625,
0.033935546875,
0.0233001708984375,
-0.032745361328125,
-0.0079498291015625,
-0.033172607421875,
-0.01904296875,
0.0288848876953125,
0.0305023193359375,
-0.048614501953125,
-0.06756591796875,
-0.0467529296875,
0.03707885... |
yhavinga/xsum_dutch | 2022-08-21T20:50:08.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"language:nl",
"region:us"
] | yhavinga | Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article. | @article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
} | 0 | 85 | 2022-08-21T20:29:43 | ---
pretty_name: Extreme Summarization (XSum) in Dutch
language:
- nl
paperswithcode_id: xsum_dutch
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
# Dataset Card for "xsum_dutch" 🇳🇱🇧🇪 Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
The XSum Dutch 🇳🇱🇧🇪 dataset is a Dutch translation of the English-language XSum dataset.
*This dataset currently (Aug '22) has a single config, which is
config `default` of [xsum](https://huggingface.co/datasets/xsum) translated to Dutch
with [yhavinga/t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi).*
- **Homepage:** [https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
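A minimal loading sketch (assuming the standard 🤗 Datasets API and the default config):
```python
from datasets import load_dataset

dataset = load_dataset("yhavinga/xsum_dutch")
print(dataset["train"][0]["summary"])
```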
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding the English version of this dataset.
The dataset was translated on Cloud TPU compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
| 6,530 | [
[
-0.04608154296875,
-0.03131103515625,
0.0032100677490234375,
0.007808685302734375,
-0.024566650390625,
-0.0020904541015625,
-0.033203125,
-0.032562255859375,
0.058502197265625,
0.0318603515625,
-0.052825927734375,
-0.06622314453125,
-0.043670654296875,
0.001... |
proteinea/deeploc | 2023-01-16T14:59:58.000Z | [
"doi:10.57967/hf/1105",
"region:us"
] | proteinea | null | null | 0 | 85 | 2022-12-12T15:48:32 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
proteinea/remote_homology | 2022-12-12T16:20:18.000Z | [
"doi:10.57967/hf/1107",
"region:us"
] | proteinea | null | null | 2 | 85 | 2022-12-12T15:55:43 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
irds/trec-robust04 | 2023-01-05T03:52:55.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 1 | 85 | 2023-01-05T03:52:49 | ---
pretty_name: '`trec-robust04`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-robust04`
The `trec-robust04` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=528,155
- `queries` (i.e., topics); count=250
- `qrels` (relevance assessments); count=311,410
This dataset is used by: [`trec-robust04_fold1`](https://huggingface.co/datasets/irds/trec-robust04_fold1), [`trec-robust04_fold2`](https://huggingface.co/datasets/irds/trec-robust04_fold2), [`trec-robust04_fold3`](https://huggingface.co/datasets/irds/trec-robust04_fold3), [`trec-robust04_fold4`](https://huggingface.co/datasets/irds/trec-robust04_fold4), [`trec-robust04_fold5`](https://huggingface.co/datasets/irds/trec-robust04_fold5)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-robust04', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
queries = load_dataset('irds/trec-robust04', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/trec-robust04', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Voorhees2004Robust,
title={Overview of the TREC 2004 Robust Retrieval Track},
author={Ellen Voorhees},
booktitle={TREC},
year={2004}
}
```
| 1,844 | [
[
-0.0228271484375,
-0.0254974365234375,
0.0107879638671875,
0.0030269622802734375,
-0.01160430908203125,
0.0081787109375,
-0.0001112818717956543,
-0.0105438232421875,
0.0214080810546875,
0.02325439453125,
-0.042236328125,
-0.07257080078125,
-0.0277252197265625,
... |
Francesco/axial-mri | 2023-03-30T09:39:28.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 0 | 85 | 2023-03-30T09:39:10 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': axial-MRI
'1': negative
'2': positive
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: axial-mri
tags:
- rf100
---
# Dataset Card for axial-mri
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/axial-mri
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
axial-mri
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
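A minimal loading sketch for inspecting these fields (assuming the standard 🤗 Datasets API and a `train` split):
```python
from datasets import load_dataset

ds = load_dataset("Francesco/axial-mri", split="train")
example = ds[0]
image = example["image"]            # decoded PIL.Image.Image
boxes = example["objects"]["bbox"]  # COCO-format [x, y, width, height]
```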
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/axial-mri
### Citation Information
```
@misc{ axial-mri,
title = { axial mri Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/axial-mri } },
url = { https://universe.roboflow.com/object-detection/axial-mri },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,337 | [
[
-0.045013427734375,
-0.0296173095703125,
0.0087890625,
-0.003879547119140625,
-0.044281005859375,
-0.014495849609375,
-0.00250244140625,
-0.041351318359375,
0.0264892578125,
0.019989013671875,
-0.039520263671875,
-0.0694580078125,
-0.0330810546875,
0.0294036... |
MU-NLPC/Calc-ape210k | 2023-10-30T15:56:39.000Z | [
"license:mit",
"arxiv:2305.15017",
"arxiv:2009.11506",
"region:us"
] | MU-NLPC | null | null | 10 | 85 | 2023-05-22T14:20:16 | ---
license: mit
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_chinese
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
splits:
- name: train
num_bytes: 111988047
num_examples: 195179
- name: validation
num_bytes: 1172933
num_examples: 1783
- name: test
num_bytes: 1157061
num_examples: 1785
download_size: 50827709
dataset_size: 114318041
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_chinese
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
splits:
- name: train
num_bytes: 111988047
num_examples: 195179
- name: validation
num_bytes: 2798479
num_examples: 4867
- name: test
num_bytes: 2793355
num_examples: 4867
download_size: 52234086
dataset_size: 117579881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: train
path: original-splits/train-*
- split: validation
path: original-splits/validation-*
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-ape210k
## Summary
This dataset is an instance of the Ape210K dataset, converted to a simple HTML-like language that can be easily parsed, e.g. by BeautifulSoup (see the sketch after this list). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
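A minimal parsing sketch (the `chain` string below is a made-up illustration, and the `id` attribute on the gadget tag is an assumption):
```python
from bs4 import BeautifulSoup

chain = (
    '<gadget id="calculator">2 + 3</gadget>'
    "<output>5</output>"
    " Final answer:<result>5</result>"
)
soup = BeautifulSoup(chain, "html.parser")
expressions = [g.get_text() for g in soup.find_all("gadget")]  # ['2 + 3']
outputs = [o.get_text() for o in soup.find_all("output")]      # ['5']
result = soup.find("result").get_text()                        # '5'
```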
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction Process
First, we translated the questions into English using Google Translate. Next, we parsed the equations and the results. We linearized
the equations into a sequence of elementary steps and evaluated them using a sympy-based calculator. We numerically compared the output
with the result in the data and removed all examples where they did not match (less than 3% loss in each split). Finally, we saved the
chain of steps in the HTML-like language in the `chain` column. We kept the original columns in the dataset for convenience. We also performed
in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically for Ape210k, we removed parts of the validation and test splits, with around 1,700 examples remaining in each.
You can read more information about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017).
## Data splits
The default config contains filtered splits with data leaks removed.
You can load it using:
```python
datasets.load_dataset("MU-NLPC/calc-ape210k")
```
In the `original-splits` config, the data splits are unfiltered and correspond to the original Ape210K dataset. See [ape210k dataset github](https://github.com/Chenny0808/ape210k) and [the paper](https://arxiv.org/abs/2009.11506) for more info.
You can load it using:
```python
datasets.load_dataset("MU-NLPC/calc-ape210k", "original-splits")
```
## Attributes
- **id** - id of the example
- **question** - the description of the math problem. Automatically translated from the `question_chinese` column into English using Google Translate
- **question_chinese** - the original description of the math problem in Chinese
- **chain** - linearized `equation`, sequence of arithmetic steps in HTML-like language that can be evaluated using our sympy-based calculator
- **result** - result as a string (can be an integer, float, or a fraction)
- **result_float** - result, converted to a float
- **equation** - a nested expression that evaluates to the correct answer
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original Ape210k dataset and repo**](https://github.com/Chenny0808/ape210k)
- [**original Ape210k paper**](https://arxiv.org/abs/2009.11506)
## Licence
MIT, consistent with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original Ape210k paper](https://arxiv.org/abs/2009.11506), and the [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` | 6,194 | [
[ …embedding vector truncated… ] ] |
dmayhem93/agieval-aqua-rat | 2023-06-18T17:14:34.000Z | [
"license:apache-2.0",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 85 | 2023-06-18T03:50:28 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 93696
num_examples: 254
download_size: 0
dataset_size: 93696
license: apache-2.0
---
# Dataset Card for "agieval-aqua-rat"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
Raw dataset: https://github.com/deepmind/AQuA
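Based on the feature schema declared above (`query`, `choices`, and `gold` on a single `test` split), a minimal loading sketch:
```python
from datasets import load_dataset

# Minimal sketch: field names follow the schema in the YAML header above.
ds = load_dataset("dmayhem93/agieval-aqua-rat", split="test")
ex = ds[0]
print(ex["query"])    # question text
print(ex["choices"])  # list of answer options
print(ex["gold"])     # index (or indices) of the correct option
```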
Copyright 2017 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{ling-etal-2017-program,
title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
author = "Ling, Wang and
Yogatama, Dani and
Dyer, Chris and
Blunsom, Phil",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1015",
doi = "10.18653/v1/P17-1015",
pages = "158--167",
abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
} | 2,880 | [
[ …embedding vector truncated… ] ] |
dmayhem93/agieval-lsat-rc | 2023-06-18T17:27:15.000Z | [
"license:mit",
"arxiv:2304.06364",
"arxiv:2104.06598",
"region:us"
] | dmayhem93 | null | null | 0 | 85 | 2023-06-18T12:50:49 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 1136305
num_examples: 269
download_size: 322710
dataset_size: 1136305
license: mit
---
# Dataset Card for "agieval-lsat-rc"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
Raw dataset: https://github.com/zhongwanjun/AR-LSAT
MIT License
Copyright (c) 2022 Wanjun Zhong
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{zhong2021arlsat,
title={AR-LSAT: Investigating Analytical Reasoning of Text},
author={Wanjun Zhong and Siyuan Wang and Duyu Tang and Zenan Xu and Daya Guo and Jiahai Wang and Jian Yin and Ming Zhou and Nan Duan},
year={2021},
eprint={2104.06598},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{wang2022lsat,
  title={From LSAT: The Progress and Challenges of Complex Reasoning},
author={Wang, Siyuan and Liu, Zhongkun and Zhong, Wanjun and Zhou, Ming and Wei, Zhongyu and Chen, Zhumin and Duan, Nan},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
year={2022},
publisher={IEEE}
} | 2,550 | [
[ …embedding vector truncated… ] ] |
dmayhem93/agieval-sat-en | 2023-06-18T17:30:59.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 2 | 85 | 2023-06-18T12:50:59 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 1019350
num_examples: 206
download_size: 265465
dataset_size: 1019350
license: mit
---
# Dataset Card for "agieval-sat-en"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,833 | [
[ …embedding vector truncated… ] ] |
dmayhem93/agieval-sat-en-without-passage | 2023-06-18T17:31:43.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 85 | 2023-06-18T12:51:12 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 154762
num_examples: 206
download_size: 85136
dataset_size: 154762
license: mit
---
# Dataset Card for "agieval-sat-en-without-passage"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,846 | [
[ …embedding vector truncated… ] ] |
benjamin/compoundpiece | 2023-07-24T17:03:10.000Z | [
"license:mit",
"arxiv:2305.14214",
"region:us"
] | benjamin | null | null | 1 | 85 | 2023-07-23T13:50:23 | ---
configs:
- config_name: wiktionary
data_files:
- split: train
path: "wiktionary/train.csv"
- split: validation
path: "wiktionary/valid.csv"
- config_name: web
data_files:
- split: train
path: "web/train.csv"
- split: validation
path: "web/valid.csv"
license: mit
---
# CompoundPiece
Dataset of compound words for the paper [CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models](https://arxiv.org/abs/2305.14214).
Load the balanced dataset of hyphenated and non-hyphenated words scraped from the web (used as pretraining data):
```python
load_dataset("benjamin/compoundpiece", "web")
```
Load the dataset of compound and non-compound words (used for fine-tuning):
```python
load_dataset("benjamin/compoundpiece", "wiktionary")
```
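The exact column layout of the underlying CSV files is not documented here, so a safe way to explore either config is to print the columns rather than assume a schema:
```python
from datasets import load_dataset

# Minimal sketch: inspect the columns instead of assuming field names.
dataset = load_dataset("benjamin/compoundpiece", "wiktionary", split="train")
print(dataset.column_names)
print(dataset[0])
```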
# Citation
```
@article{minixhofer2023compoundpiece,
title={CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models},
author={Minixhofer, Benjamin and Pfeiffer, Jonas and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2305.14214},
year={2023}
}
```
# License
MIT | 1,117 | [
[ …embedding vector truncated… ] ] |
tanganke/EuroSAT | 2023-08-01T08:09:39.000Z | [
"task_categories:image-classification",
"region:us"
] | tanganke | null | null | 0 | 85 | 2023-08-01T07:29:45 | ---
task_categories:
- image-classification
---
# EuroSAT
EuroSAT: Downloaded from https://github.com/phelber/EuroSAT (direct link: https://madm.dfki.de/files/sentinel/EuroSAT.zip).
For this dataset we randomly split the downloaded data into train/validation/test (21,600/2,700/2,700 samples). | 295 | [
[ …embedding vector truncated… ] ] |
doanhieung/vi-stsbenchmark | 2023-08-28T01:26:09.000Z | [
"license:mit",
"region:us"
] | doanhieung | null | null | 2 | 85 | 2023-08-28T01:25:05 | ---
license: mit
---
The STSbenchmark dataset for Vietnamese | 60 | [
[ …embedding vector truncated… ] ] |
yzhuang/autotree_automl_10000_credit_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T02:24:10.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 85 | 2023-09-07T02:24:03 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 126879367
dataset_size: 472880000
---
# Dataset Card for "autotree_automl_10000_credit_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 844 | [
[ …embedding vector truncated… ] ] |
ai4bharat/IN22-Conv | 2023-09-12T11:11:17.000Z | [
"task_categories:translation",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
"language:as",
"language:bn",
"language:brx",
"language:doi",
"language:en",
"language:gom",
"language:gu",
"language:hi",
"langua... | ai4bharat | IN-22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages.
IN22-Conv is the conversation domain subset of IN22. It is designed to assess translation quality in typical day-to-day conversational-style applications.
Currently, we use it for sentence-level evaluation of MT systems, but it can be repurposed for document-level translation evaluation as well. | @article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
} | 2 | 85 | 2023-09-09T17:35:58 | ---
language:
- as
- bn
- brx
- doi
- en
- gom
- gu
- hi
- kn
- ks
- mai
- ml
- mr
- mni
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
language_details: >-
asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr,
hin_Deva, kan_Knda, kas_Arab, mai_Deva, mal_Mlym, mar_Deva, mni_Mtei,
npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck, snd_Deva, tam_Taml,
tel_Telu, urd_Arab
license: cc-by-4.0
language_creators:
- expert-generated
multilinguality:
- multilingual
- translation
pretty_name: in22-conv
size_categories:
- 1K<n<10K
task_categories:
- translation
---
# IN22-Conv
IN-22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages. IN22-Conv is the conversation domain subset of IN22. It is designed to assess translation quality in typical day-to-day conversational-style applications. The evaluation subset consists of 1503 sentences translated across 22 Indic languages, enabling evaluation of MT systems across 506 directions.
Currently, we use it for sentence-level evaluation of MT systems, but it can be repurposed for document-level translation evaluation as well.
Here is the domain distribution of our IN22-Conv evaluation subset.
<table style="width:25%">
<tr>
<td>domain</td>
<td>count</td>
</tr>
<tr>
<td>hobbies</td>
<td>120</td>
</tr>
<tr>
<td>daily_dialogue</td>
<td>117</td>
</tr>
<tr>
<td>government</td>
<td>116</td>
</tr>
<tr>
<td>geography</td>
<td>114</td>
</tr>
<tr>
<td>sports</td>
<td>100</td>
</tr>
<tr>
<td>entertainment</td>
<td>97</td>
</tr>
<tr>
<td>history</td>
<td>97</td>
</tr>
<tr>
<td>legal</td>
<td>96</td>
</tr>
<tr>
<td>arts</td>
<td>95</td>
</tr>
<tr>
<td>college_life</td>
<td>94</td>
</tr>
<tr>
<td>tourism</td>
<td>91</td>
</tr>
<tr>
<td>school_life</td>
<td>87</td>
</tr>
<tr>
<td>insurance</td>
<td>82</td>
</tr>
<tr>
<td>culture</td>
<td>73</td>
</tr>
<tr>
<td>healthcare</td>
<td>67</td>
</tr>
<tr>
<td>banking</td>
<td>57</td>
</tr>
<tr>
<td>total</td>
<td>1503</td>
</tr>
</table>
Please refer to `Appendix E: Dataset Card` of the [preprint](https://arxiv.org/abs/2305.16307) for a detailed description of the dataset curation, annotation, and quality-control process.
### Dataset Structure
#### Dataset Fields
- `id`: Row number for the data entry, starting at 1.
- `doc_id`: Unique identifier of the conversation.
- `sent_id`: Unique identifier of the sentence order in each conversation.
- `topic`: The specific topic of the conversation within the domain.
- `domain`: The domain of the conversation.
- `prompt`: The prompt provided to annotators to simulate the conversation.
- `scenario`: The scenario or context in which the conversation takes place.
- `speaker`: The speaker identifier in the conversation.
- `turn`: The turn within the conversation.
- `sentence`: The sentence text in the language of the selected config.
#### Data Instances
A sample from the `conv` split for the English language (`eng_Latn` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
"id": 1,
"doc_id": 0,
"sent_id": 1,
"topic": "Festivities",
"domain": "culture",
"prompt": "14th April a holiday",
"scenario": "Historical importance",
"speaker": 1,
"turn": 1,
"sentence": "Mom, let's go for a movie tomorrow."
}
```
When using a hyphenated language-pair config or the `all` config, data will be presented as follows:
```python
{
"id": 1,
"doc_id": 0,
"sent_id": 1,
"topic": "Festivities",
"domain": "culture",
"prompt": "14th April a holiday",
"scenario": "Historical importance",
"speaker": 1,
"turn": 1,
"sentence_eng_Latn": "Mom, let's go for a movie tomorrow.",
"sentence_hin_Deva": "माँ, चलो कल एक फिल्म देखने चलते हैं।"
}
```
#### Sample Conversation
<table>
<tr>
<td>Speaker</td>
<td>Turn</td>
</tr>
<tr>
<td>Speaker 1</td>
<td>Mom, let's go for a movie tomorrow. I don't have to go to school. It is a holiday.</td>
</tr>
<tr>
<td>Speaker 2</td>
<td>Oh, tomorrow is the 14th of April right? Your dad will also have the day off from work. We can make a movie plan!</td>
</tr>
<tr>
<td>Speaker 1</td>
<td>That's a good news! Why is it a holiday though? Are all schools, colleges and offices closed tomorrow?</td>
</tr>
<tr>
<td>Speaker 2</td>
<td>It is Ambedkar Jayanti tomorrow! This day is celebrated annually to mark the birth of Dr. B. R Ambedkar. Have you heard of him?</td>
</tr>
<tr>
<td>Speaker 1</td>
<td>I think I have seen him in my History and Civics book. Is he related to our Constitution?</td>
</tr>
<tr>
<td>Speaker 2</td>
<td>Absolutely! He is known as the father of the Indian Constitution. He was a civil rights activist who played a major role in formulating the Constitution. He played a crucial part in shaping the vibrant democratic structure that India prides itself upon.</td>
</tr>
<tr>
<td></td>
<td>...</td>
</tr>
</table>
### Usage Instructions
```python
from datasets import load_dataset
# download and load all the pairs
dataset = load_dataset("ai4bharat/IN22-Conv", "all")
# download and load specific pairs
dataset = load_dataset("ai4bharat/IN22-Conv", "eng_Latn-hin_Deva")
```
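For example, aligned (source, reference) sentence pairs for MT evaluation can be extracted from a specific pair config. The sketch below avoids hard-coding the split name by reading it from the loaded `DatasetDict`; the `sentence_*` field names follow the data instance shown above:
```python
from datasets import load_dataset

# Minimal sketch: extract aligned English-Hindi sentence pairs.
ds_dict = load_dataset("ai4bharat/IN22-Conv", "eng_Latn-hin_Deva")
split_name = next(iter(ds_dict))  # avoid hard-coding the split name
ds = ds_dict[split_name]
pairs = [(ex["sentence_eng_Latn"], ex["sentence_hin_Deva"]) for ex in ds]
print(len(pairs), pairs[0])
```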
### Languages Covered
<table style="width: 40%">
<tr>
<td>Assamese (asm_Beng)</td>
<td>Kashmiri (Arabic) (kas_Arab)</td>
<td>Punjabi (pan_Guru)</td>
</tr>
<tr>
<td>Bengali (ben_Beng)</td>
<td>Kashmiri (Devanagari) (kas_Deva)</td>
<td>Sanskrit (san_Deva)</td>
</tr>
<tr>
<td>Bodo (brx_Deva)</td>
<td>Maithili (mai_Deva)</td>
<td>Santali (sat_Olck)</td>
</tr>
<tr>
<td>Dogri (doi_Deva)</td>
<td>Malayalam (mal_Mlym)</td>
<td>Sindhi (Arabic) (snd_Arab)</td>
</tr>
<tr>
<td>English (eng_Latn)</td>
<td>Marathi (mar_Deva)</td>
<td>Sindhi (Devanagari) (snd_Deva)</td>
</tr>
<tr>
<td>Konkani (gom_Deva)</td>
<td>Manipuri (Bengali) (mni_Beng)</td>
<td>Tamil (tam_Taml)</td>
</tr>
<tr>
<td>Gujarati (guj_Gujr)</td>
<td>Manipuri (Meitei) (mni_Mtei)</td>
<td>Telugu (tel_Telu)</td>
</tr>
<tr>
<td>Hindi (hin_Deva)</td>
<td>Nepali (npi_Deva)</td>
<td>Urdu (urd_Arab)</td>
</tr>
<tr>
<td>Kannada (kan_Knda)</td>
<td>Odia (ory_Orya)</td>
</tr>
</table>
### Citation
If you use our work, please cite:
```
@article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
}
```
| 7,627 | [
[ …embedding vector truncated… ] ] |
mapama247/wikihow_es | 2023-09-19T12:48:50.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"license:cc-by-nc-sa-3.0",
"Spanish",
"WikiHow",
"Wiki Articles",
"Tutorials... | mapama247 | null | null | 0 | 85 | 2023-09-18T08:39:33 | ---
pretty_name: WikiHow-ES
license: cc-by-nc-sa-3.0
size_categories: 1K<n<10K
language: es
multilinguality: monolingual
task_categories:
- text-classification
- question-answering
- conversational
- summarization
tags:
- Spanish
- WikiHow
- Wiki Articles
- Tutorials
- Step-By-Step
- Instruction Tuning
---
### Dataset Summary
Articles retrieved from the [Spanish WikiHow website](https://es.wikihow.com) in September 2023.
Each article contains a tutorial about a specific topic. The format is always a "How to" question
followed by a detailed step-by-step explanation. In some cases, the response includes several methods.
The main idea is to use this data for instruction tuning of Spanish LLMs, but given its nature it
could also be used for other tasks such as text classification or summarization.
### Languages
- Spanish (ES)
### Usage
To load the full dataset:
```python
from datasets import load_dataset
all_articles = load_dataset("mapama247/wikihow_es")
print(all_articles.num_rows) # output: {'train': 7380}
```
To load only examples from a specific category:
```python
from datasets import load_dataset
sports_articles = load_dataset("mapama247/wikihow_es", "deportes")
print(sports_articles.num_rows) # output: {'train': 201}
```
List of available categories, with the respective number of examples:
```
computadoras-y-electrónica 821
salud 804
pasatiempos 729
cuidado-y-estilo-personal 724
carreras-y-educación 564
en-la-casa-y-el-jardín 496
finanzas-y-negocios 459
comida-y-diversión 454
relaciones 388
mascotas-y-animales 338
filosofía-y-religión 264
arte-y-entretenimiento 254
en-el-trabajo 211
adolescentes 201
deportes 201
vida-familiar 147
viajes 139
automóviles-y-otros-vehículos 100
días-de-fiesta-y-tradiciones 86
```
### Supported Tasks
This dataset can be used to train a model for...
- `instruction-tuning`
- `text-classification`
- `question-answering`
- `conversational`
- `summarization`
## Dataset Structure
### Data Instances
```python
{
'category': str,
'question': str,
'introduction': str,
'answers': List[str],
'short_answers': List[str],
'url': str,
'num_answers': int,
'num_refs': int,
'expert_author': bool,
}
```
### Data Fields
- `category`: The category (from [this list](https://es.wikihow.com/Especial:CategoryListing)) to which the example belongs to.
- `label`: Numerical representation of the category, for text classification purposes.
- `question`: The article's title, which always starts with "¿Cómo ...".
- `introduction`: Introductory text that precedes the step-by-step explanation.
- `answers`: List of complete answers, with the full explanation of each step.
- `short_answers`: List of shorter answers that only contain one-sentence steps.
- `num_answers`: The number of alternative answers provided (e.g. length of `answers`).
- `num_refs`: Number of references provided in the article.
- `expert_author`: Whether the article's author claims to be an expert on the topic or not.
- `url`: The URL address of the original article.
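Putting the fields together, a minimal sketch of reading one article (field names as documented above):
```python
from datasets import load_dataset

# Minimal sketch: inspect one article using the documented fields.
ds = load_dataset("mapama247/wikihow_es", split="train")
article = ds[0]
print(article["question"])          # "¿Cómo ...?" title
print(article["category"])          # one of the 19 categories listed above
print(article["num_answers"])       # number of alternative answers
print(article["answers"][0][:200])  # start of the first full answer
```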
### Data Splits
There is only one split (`train`) that contains a total of 7,380 examples.
## Dataset Creation
### Curation Rationale
This dataset was created to align language models with end tasks and user preferences.
### Source Data
How-To questions with detailed step-by-step answers, retrieved from the WikiHow website.
#### Data Collection and Normalization
All articles available in September 2023 were extracted; no filters were applied.
Along with the article's content, some metadata was retrieved as well.
#### Source language producers
WikiHow users. All the content is human-generated.
### Personal and Sensitive Information
The data does not include personal or sensitive information.
## Considerations
### Social Impact
The Spanish-speaking community can benefit from the high-quality data provided by this dataset.
### Bias
No post-processing steps have been applied to mitigate potential social biases.
## Additional Information
### Curators
Marc Pàmes @ Barcelona Supercomputing Center.
### License
This dataset is licensed under a **Creative Commons CC BY-NC-SA 3.0** license.
Quote from [WikiHow's Terms of Use](https://www.wikihow.com/wikiHow:Terms-of-Use):
> All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as
> provided herein. The Creative Commons license allows such user generated text content to be used freely for personal,
> non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of
> the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction
> on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants
> each User of the Service a license to all text content that Users contribute to the Service under the terms and
> conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully.
> You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as
> you wish, whether for commercial or non-commercial purposes.
| 5,544 | [
[ …embedding vector truncated… ] ] |