id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
XiangPan/waimai_10k | 2022-04-14T22:38:31.000Z | [
"region:us"
] | XiangPan | null | null | null | 1 | 3 | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
mwong/fever-claim-related | 2022-10-25T10:06:56.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
... | mwong | null | null | null | 2 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and evidence, predict whether the claim is related to the evidence. |
dl4phys/top_tagging | 2022-04-18T07:43:02.000Z | [
"license:cc-by-4.0",
"arxiv:1902.09914",
"region:us"
] | dl4phys | null | null | null | 0 | 3 | ---
license: cc-by-4.0
---
# Dataset Card for Top Quark Tagging
## Table of Contents
- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/2603256
- **Paper:** https://arxiv.org/abs/1902.09914
- **Point of Contact:** [Gregor Kasieczka](gregor.kasieczka@uni-hamburg.de)
### Dataset Summary
Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p_x, p_y, p_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.
### Supported Tasks and Leaderboards
- `tabular-classification`: The dataset can be used to train a model for tabular binary classification, i.e. predicting whether an event originates from a top-quark signal or from quark-gluon background. Success on this task is typically measured by achieving a *high* [accuracy](https://huggingface.co/metrics/accuracy) and AUC score.
## Dataset Structure
### Data Instances
Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the `is_signal_new` column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows:
```
{'E_0': 474.0711364746094,
'PX_0': -250.34703063964844,
'PY_0': -223.65196228027344,
'PZ_0': -334.73809814453125,
...
'E_199': 0.0,
'PX_199': 0.0,
'PY_199': 0.0,
'PZ_199': 0.0,
'truthE': 0.0,
'truthPX': 0.0,
'truthPY': 0.0,
'truthPZ': 0.0,
'ttv': 0,
'is_signal_new': 0}
```
### Data Fields
The fields in the dataset have the following meaning:
- `E_i`: the energy of jet constituent \\(i\\).
- `PX_i`: the \\(x\\) component of the jet constituent's momentum
- `PY_i`: the \\(y\\) component of the jet constituent's momentum
- `PZ_i`: the \\(z\\) component of the jet constituent's momentum
- `truthE`: the energy of the top-quark
- `truthPX`: the \\(x\\) component of the top quark's momentum
- `truthPY`: the \\(y\\) component of the top quark's momentum
- `truthPZ`: the \\(z\\) component of the top quark's momentum
- `ttv`: a flag that indicates which split (train, validation, or test) a jet belongs to. This field is redundant, since each split is provided as a separate dataset
- `is_signal_new`: the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.
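Because only four-momenta are stored, derived kinematic quantities such as the jet transverse momentum \\(p_T\\) and invariant mass must be computed from these fields. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
import math

def jet_kinematics(constituents):
    """Sum constituent four-momenta (E, px, py, pz) and derive the
    jet transverse momentum and invariant mass (illustrative helper,
    not part of the dataset)."""
    E = sum(c[0] for c in constituents)
    px = sum(c[1] for c in constituents)
    py = sum(c[2] for c in constituents)
    pz = sum(c[3] for c in constituents)
    pt = math.hypot(px, py)             # transverse momentum
    m2 = E**2 - px**2 - py**2 - pz**2   # invariant mass squared
    return pt, math.sqrt(max(m2, 0.0))  # clamp tiny negative rounding errors

# Zero-padded constituents contribute nothing to the sums,
# so jets with fewer than 200 constituents are handled for free.
pt, mass = jet_kinematics([(474.07, -250.35, -223.65, -334.74),
                           (0.0, 0.0, 0.0, 0.0)])
```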
### Data Splits
| | train | validation | test |
|------------------|--------:|-----------:|-------:|
| Number of events | 1211000 | 403000 | 404000 |
### Licensing Information
This dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
```
@dataset{kasieczka_gregor_2019_2603256,
author = {Kasieczka, Gregor and
Plehn, Tilman and
Thompson, Jennifer and
Russel, Michael},
title = {Top Quark Tagging Reference Dataset},
month = mar,
year = 2019,
publisher = {Zenodo},
version = {v0 (2018\_03\_27)},
doi = {10.5281/zenodo.2603256},
url = {https://doi.org/10.5281/zenodo.2603256}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
|
student/birds_400 | 2022-04-18T03:15:55.000Z | [
"region:us"
] | student | null | null | null | 0 | 3 | Birds 400 - species image classification.
58,388 training images, 2,000 test images, and 2,000 validation images, all 224x224x3 jpg format.
A dataset of 400 bird species: 58,388 training images, 2,000 test images (5 per species), and 2,000 validation images (5 per species). This is a very high-quality dataset: there is only one bird per image, and the bird typically occupies at least 50% of the pixels, so even a moderately complex model can achieve training and test accuracies in the 90% range.
All images are 224 x 224 x 3 color images in jpg format. The dataset includes a train set, a test set, and a validation set, each containing 400 subdirectories, one per bird species. This data structure is very convenient when using the Keras ImageDataGenerator.flow_from_directory to create the train, test, and valid data generators. The dataset also includes a Bird Species.csv file with three columns: the "filepaths" column contains the file path of an image file, and the "labels" column contains the class name, i.e. the bird species, associated with that image file. Reading the csv file with df = pandas.read_csv("Bird Species.csv") creates a pandas dataframe that can then be split into traindf, testdf, and validdf dataframes to create your own partition of the data into train, test, and validation sets.
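The CSV-based splitting described above can be sketched with pandas; the column names below follow the card's description, but the split-column name (`data set`) and the demo rows are assumptions and should be checked against the actual file:

```python
import pandas as pd

def split_by_set(df, column="data set"):
    """Split the species dataframe into train/test/valid frames.
    The split-column name 'data set' is an assumption; check the
    actual CSV header."""
    return (df[df[column] == s].reset_index(drop=True)
            for s in ("train", "test", "valid"))

# Typical use would start from: df = pd.read_csv("Bird Species.csv")
demo = pd.DataFrame({
    "filepaths": ["train/SOME BIRD/001.jpg", "test/SOME BIRD/1.jpg"],
    "labels": ["SOME BIRD", "SOME BIRD"],
    "data set": ["train", "test"],
})
train_df, test_df, valid_df = split_by_set(demo)
```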
Note: the test and validation images in this dataset were hand-picked as the "best" images, so your model will likely achieve its highest accuracy scores on these sets, compared with creating your own test and validation sets. The latter case, however, gives a more accurate measure of model performance on unseen images.
The images were gathered from internet searches by species name. After the image files for a species were downloaded, they were checked for duplicates using a python duplicate-image-detector program developed by the author. All detected duplicates were deleted to prevent images from being shared between the training, test, and validation sets.
After that, the images were cropped so that the bird occupies at least 50% of the pixels in the image, and then resized to 224 x 224 x 3 in jpg format. The cropping ensures that, when processed by a CNN, there is sufficient information in the images to create a highly accurate classifier; even a moderately robust model should achieve training, validation, and test accuracies in the high 90% range. Since the dataset is large, it is suggested to train with an image size of 150 x 150 x 3 to reduce training time. All files are also numbered sequentially, starting from one for each species, so the test images are named 1.jpg through 5.jpg, and likewise for the validation images. The training images are sequentially numbered with zero padding, e.g. 001.jpg, 002.jpg, ..., 010.jpg, 011.jpg, ..., 099.jpg, 100.jpg. The zero padding preserves the file order when used with python file functions and with Keras flow_from_directory.
The training set is not balanced, with a varying number of files per species, but every species has at least 120 training image files. This imbalance did not affect the author's kernel classifier, which achieved over 98% accuracy on the test set.
One notable imbalance in the dataset is the ratio of images of male birds to images of female birds: about 85% of the images are of males and 15% of females. Males are typically far more diverse in coloration, while the females of a species are usually plain, so male and female images may look entirely different. Almost all of the test and validation images are of males, so a classifier may not perform well on images of females. |
huggingface/image-classification-test-sample | 2022-04-19T08:02:02.000Z | [
"region:us"
] | huggingface | null | null | null | 1 | 3 | Entry not found |
taln-ls2n/wikinews-fr-100 | 2022-09-23T07:38:18.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | taln-ls2n | Wikinews-fr-100 benchmark dataset for keyphrase extraction and generation. | @inproceedings{bougouin-etal-2013-topicrank,
title = "{T}opic{R}ank: Graph-Based Topic Ranking for Keyphrase Extraction",
author = "Bougouin, Adrien and
Boudin, Florian and
Daille, B{\'e}atrice",
booktitle = "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
month = oct,
year = "2013",
address = "Nagoya, Japan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I13-1062",
pages = "543--551",
} | null | 1 | 3 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- n<1K
pretty_name: Wikinews-fr-100
---
# Wikinews-fr-100 Benchmark Dataset for Keyphrase Generation
## About
Wikinews-fr-100 is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 100 news articles in French collected from [wikinews](https://fr.wikinews.org/wiki/Accueil).
Keyphrases were annotated by readers (students in computer science) in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2013)][bougouin-2013].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
## Content and statistics
The dataset contains the following test split:
| Split | # documents | # words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Test | 100 | 306.9 | 9.64 | 95.91 | 1.40 | 0.85 | 1.84 |
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Bougouin et al., 2013) Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013.
[TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction][bougouin-2013].
In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[bougouin-2013]: https://aclanthology.org/I13-1062/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
mwong/climatetext-claim-related-evaluation | 2022-10-25T10:08:44.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a climate-related claim and evidence, predict whether the claim is related to the evidence. |
mwong/climatetext-evidence-related-evaluation | 2022-10-25T10:08:46.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a climate-related claim and evidence, predict whether the evidence is related to the claim. |
mwong/climatetext-climate_evidence-claim-related-evaluation | 2022-10-25T10:08:48.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the claim is related to the evidence. |
mwong/climatetext-claim-climate_evidence-related-evaluation | 2022-10-25T10:08:50.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the evidence is related to the claim. |
mwong/climatetext-evidence-claim-pair-related-evaluation | 2022-10-25T10:08:53.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given climate-related evidence and a claim, predict whether the pair is related. |
mwong/climatetext-claim-evidence-pair-related-evaluation | 2022-10-25T10:08:55.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the pair is related. |
h4iku/coconut_javascript2010_preprocessed | 2022-04-21T20:34:35.000Z | [
"region:us"
] | h4iku | null | null | null | 0 | 3 | Entry not found |
bigscience-data/roots_fr_uncorpus | 2022-12-12T10:29:02.000Z | [
"language:fr",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | null | 1 | 3 | ---
language: fr
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_fr_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
|
Fhrozen/tau_srir_db | 2022-12-03T03:27:05.000Z | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:n<1K",
"source_datasets:unknown",
"license:unknown",
"audio-slot-filling",
"region:us"
] | Fhrozen | null | null | null | 0 | 3 | ---
annotations_creators:
- unknown
language_creators:
- unknown
license: unknown
size_categories:
- n<1K
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids: []
tags:
- audio-slot-filling
---
# TAU Spatial Room Impulse Response Database (TAU-SRIR DB)
## Important
**This is a copy of the original Zenodo dataset.**
## Description
[Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)
AUTHORS
**Tampere University**
- Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
- Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in))
- Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/))
**Data Collection 2019-2020**
- Archontis Politis
- Aapo Hakala
- Ali Gohar
**Data Collection 2017-2018**
- Sharath Adavanne
- Aapo Hakala
- Eemi Fagerlund
- Aino Koskimies
The **TAU Spatial Room Impulse Response Database (TAU-SRIR DB)** database contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulation of spatial multichannel recordings for evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinct properties of the database compared to other databases of room impulse responses are:
- Capturing in a high resolution multichannel format (32 channels) from which multiple more limited application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural).
- Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving source scenarios.
- Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods.
- Multiple rooms, allowing evaluation of methods at various acoustic conditions, and training of methods with the aim of generalization on different rooms.
The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement [637422 EVERYSOUND](https://cordis.europa.eu/project/id/637422).
[](https://erc.europa.eu/)
> **NOTE**: This database is a work-in-progress. We intend to publish additional rooms, additional formats, and potentially higher-fidelity versions of the captured responses in the near future, as new versions of the database in this repository.
## Report and reference
A compact description of the dataset, recording setup, recording procedure, and extraction can be found in:
>Politis., Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). **A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection**. In _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)_, Tokyo, Japan.
available [here](https://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Politis_88.pdf). A more detailed report specifically focusing on the dataset collection and properties will follow.
## Aim
The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. It is especially suitable for the following application scenarios:
- monophonic and multichannel reverberant single- or multi-source speech in multi-room reverberant conditions,
- monophonic and multichannel polyphonic sound events in multi-room reverberant conditions,
- single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios,
- single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios,
- sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios.
## Specifications
The SRIRs were captured using an [Eigenmike](https://mhacoustics.com/products) spherical microphone array. A [Genelec G Three loudspeaker](https://www.genelec.com/g-three) was used to playback a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and far-field recording independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24kHz for compactness.
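As a toy illustration of the frequency-domain estimation idea (not the exact per-frame STFT-domain estimator used for the database), a one-shot least-squares deconvolution can be sketched as:

```python
import numpy as np

def estimate_ir(excitation, recording, n_fft, eps=1e-8):
    """One-shot frequency-domain least-squares IR estimate:
    H = conj(X) * Y / (|X|^2 + eps) per frequency bin. The actual
    pipeline solves this per STFT frame; this is a toy version."""
    X = np.fft.rfft(excitation, n_fft)
    Y = np.fft.rfft(recording, n_fft)
    H = np.conj(X) * Y / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n_fft)

# Sanity check: deconvolving a known excitation from its convolution
# with a known IR should recover that IR.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)       # stand-in for the MLS excitation
h_true = np.zeros(64)
h_true[3], h_true[20] = 1.0, -0.5   # a sparse toy impulse response
y = np.convolve(x, h_true)          # simulated far-field recording
h_est = estimate_ir(x, y, n_fft=len(y))
```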
The currently published SRIR set was recorded at nine different indoor locations inside the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ with the room. Possible azimuths span the whole range of $\phi\in[-180,180)$, while the elevations span approximately a range between $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows:
1. Large open space in underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise.
2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms.
3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise.
4. Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise.
5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise.
6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise.
7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise.
8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise.
9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise.
The measurement trajectories were organized in groups, with each group specified by a circular or linear trace on the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories, two ranges were measured, a _close_ and a _far_ one, except in room TC352, where the same range was measured twice, but with different furniture configurations and open or closed doors. For linear trajectories, two ranges were also measured, _close_ and _far_, but with linear paths on either side of the array, resulting in 4 unique trajectory groups; the exception is room SA203, where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups in the same room are always parallel to each other.
Each trajectory group had multiple measurement trajectories, following the same floor path, but with the source at different heights.
The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately 1 degree as seen from the microphone. This scheme, rather than extracting SRIRs at equally spaced points along the path (e.g. every 20 cm), was found more practical for synthesis purposes, as it makes it easier to emulate moving sources at an approximately constant angular speed.
The following table summarizes the above properties for the currently available rooms:
| | Room name | Room type | Traj. type | # ranges | # trajectory groups | # heights/group | # trajectories (total) | # RIRs/DOAs |
|---|--------------------------|----------------------------|------------|-------------|-----------------------|---------------------|------------------------|-------------|
| 1 | Bomb shelter | Complex/semi-open | Circular | 2 | 2 | 9 | 18 | 6480 |
| 2 | Gym | Rectangular/large | Circular | 2 | 2 | 9 | 18 | 6480 |
| 3 | PB132 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 4 | PC226 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 5 | SA203 Lecture hall | Trapezoidal/large | Linear | 3 | 6 | 3 | 18 | 1594 |
| 6 | SC203 Classroom | Rectangular/medium | Linear | 2 | 4 | 5 | 20 | 1592 |
| 7 | SE203 Classroom | Rectangular/large | Linear | 2 | 4 | 4 | 16 | 1760 |
| 8 | TB103 Classroom | Trapezoidal/large | Linear | 2 | 4 | 3 | 12 | 1184 |
| 9 | TC352 Meeting room | Rectangular/small | Circular | 1 | 2 | 9 | 18 | 6480 |
More details on the trajectory geometries can be found in the database info file (`measinfo.mat`).
## Recording formats
The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
**For the first-order ambisonics (FOA):**
\begin{eqnarray}
H_1(\phi, \theta, f) &=& 1 \\
H_2(\phi, \theta, f) &=& \sin(\phi) * \cos(\theta) \\
H_3(\phi, \theta, f) &=& \sin(\theta) \\
H_4(\phi, \theta, f) &=& \cos(\phi) * \cos(\theta)
\end{eqnarray}
The (FOA) format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9kHz with the specific microphone array, while the actual encoded responses start to deviate gradually at higher frequencies from the ideal ones provided above. Routines that can compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found [here](https://github.com/polarch/Spherical-Array-Processing).
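The four ideal FOA channel responses can be evaluated directly; a minimal sketch (function name illustrative, angles in radians):

```python
import numpy as np

def foa_steering(azi, ele):
    """Ideal frequency-independent FOA response [H1..H4] for a DOA
    given by azimuth `azi` and elevation `ele` in radians, following
    the four formulas above."""
    return np.array([
        1.0,                        # H1: omnidirectional
        np.sin(azi) * np.cos(ele),  # H2
        np.sin(ele),                # H3
        np.cos(azi) * np.cos(ele),  # H4
    ])
```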
**For the tetrahedral microphone array (MIC):**
The four microphones have the following positions, in spherical coordinates $(\phi, \theta, r)$:
\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}
Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
\end{equation}
where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$ m is the array radius, $c = 343$ m/s is the speed of sound, $\gamma_m$ is the angle between the microphone position and the DOA (with $\cos(\gamma_m)$ its cosine), $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative with respect to the argument of the spherical Hankel function of the second kind. The expansion is limited to 30 terms, which gives negligible modeling error up to 20 kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
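For illustration, the expansion can be evaluated numerically, for instance with SciPy's spherical Bessel routines (a sketch, not the referenced toolbox; the function name here is a placeholder):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def rigid_sphere_response(f, gamma, R=0.042, c=343.0, n_max=30):
    """Response of a microphone on a rigid spherical baffle for a plane
    wave arriving at angle `gamma` (radians) from the microphone position."""
    x = 2.0 * np.pi * f * R / c  # omega * R / c
    H = 0.0 + 0.0j
    for n in range(n_max + 1):
        # h'_n^(2)(x) = j'_n(x) - i * y'_n(x)
        hn2_prime = (spherical_jn(n, x, derivative=True)
                     - 1j * spherical_yn(n, x, derivative=True))
        H += (1j ** (n - 1)) / hn2_prime * (2 * n + 1) * eval_legendre(n, np.cos(gamma))
    return H / x**2

# At low frequencies the baffled microphone is nearly omnidirectional,
# so the magnitude response approaches unity:
H = rigid_sphere_response(100.0, 0.0)
```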
## Reference directions-of-arrival
For each extracted RIR across a measurement trajectory there is a direction-of-arrival (DOA) associated with it, which can be used as the reference direction for a sound source spatialized using this RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs, by windowing the direct-sound part and applying a broadband version of the MUSIC localization algorithm on the windowed multichannel signal.
The DOAs are provided as Cartesian components [x, y, z] of unit length vectors.
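For example, a DOA vector can be converted to and from azimuth/elevation angles along these lines (assuming the common convention of x pointing front, y left, and z up; verify against the database info files):

```python
import numpy as np

def angles_to_doa(azi, ele):
    """(azimuth, elevation) in radians -> unit vector [x, y, z]."""
    return np.array([np.cos(azi) * np.cos(ele),
                     np.sin(azi) * np.cos(ele),
                     np.sin(ele)])

def doa_to_angles(xyz):
    """Unit vector [x, y, z] -> (azimuth, elevation) in radians."""
    x, y, z = xyz
    return np.arctan2(y, x), np.arcsin(z)
```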
## Scene generator
A set of routines is shared, here termed scene generator, that can spatialize a bank of sound samples using the SRIRs and noise recordings of this library, to emulate scenes for the two target formats. The code is the same as the one used to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset, and has been ported to Python from the original version written in Matlab.
The generator can be found [**here**](https://github.com/danielkrause/DCASE2022-data-generator), along with more details on its use.
The generator at the moment is set to work with the [NIGENS](https://zenodo.org/record/2535878) sound event sample database, and the [FSD50K](https://zenodo.org/record/4060432) sound event database, but additional sample banks can be added with small modifications.
The dataset together with the generator has been used by the authors in the following public challenges:
- [DCASE 2019 Challenge Task 3](https://dcase.community/challenge2019/task-sound-event-localization-and-detection), to generate the **TAU Spatial Sound Events 2019** dataset ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088))
- [DCASE 2020 Challenge Task 3](https://dcase.community/challenge2020/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2020**](https://doi.org/10.5281/zenodo.4064792) dataset
- [DCASE2021 Challenge Task 3](https://dcase.community/challenge2021/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset
- [DCASE2022 Challenge Task 3](https://dcase.community/challenge2022/task-sound-event-localization-and-detection), to generate additional [SELD synthetic mixtures for training the task baseline](https://doi.org/10.5281/zenodo.6406873)
> **NOTE**: The current version of the generator is work-in-progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact [daniel.krause@tuni.fi](mailto:daniel.krause@tuni.fi), or [archontis.politis@tuni.fi](mailto:archontis.politis@tuni.fi).
## Dataset structure
The dataset contains a folder of the SRIRs (`TAU-SRIR_DB`), with all the SRIRs per room in a single _mat_ file, e.g. `rirs_09_tb103.mat`. The specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains an `rirs` array of 4x3 structures, each with the fields `mic` and `foa`. Selecting e.g. the 2nd trajectory and 3rd height with `rirs(2,3)` returns `mic` and `foa` fields with an array of size `[7200x4x114]` on each. The array contains the SRIRs for the specific format, and it is arranged as `[samples x channels x DOAs]`, meaning that 300 ms long (7200 samples @ 24 kHz) 4-channel RIRs are extracted at 114 positions along that specific trajectory.
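In Python, such a file could be read with `scipy.io.loadmat`; the sketch below uses a dummy zero array of the stated shape in place of real data, just to show the `[samples x channels x DOAs]` indexing:

```python
import numpy as np

fs = 24000
# Stand-in for e.g. rirs(2,3).mic from rirs_09_tb103.mat:
# 300 ms (7200 samples @ 24 kHz), 4 channels, 114 trajectory positions.
rirs_mic = np.zeros((7200, 4, 114))

# The 4-channel SRIR at the 10th position along the trajectory:
rir = rirs_mic[:, :, 9]  # shape (7200, 4)

# Spatializing a mono sample means convolving it with each RIR channel:
sample = np.random.randn(fs)  # 1 s stand-in source signal
spatialized = np.stack(
    [np.convolve(sample, rir[:, ch]) for ch in range(4)], axis=1)
```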
The file `rirdata.mat` contains some general information such as sample rate, format specifications, and most importantly the DOAs of every extracted SRIR. Those can be found in the `rirdata.room` field, which is an array of 9 structures itself, one per room. Checking for example `rirdata.room(8)` returns the name of the specific room (_tb103_), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. `rirdata.room(8).rirs(2,3).doa_xyz` which returns an array of size `[114x3]`. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room `TB103`.
The file `measinfo.mat` contains measurement and recording information for each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, start and end positions for the linear trajectories or distances from the center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure level of the measurement noise signal. Coordinates are defined with respect to the origin being at the base of the microphone. Based on the information included in `measinfo.mat`, one can plot a 3D arrangement of the trajectories around the microphone; keep in mind, though, that these would be the ideal circular or linear intended trajectories, while the actual DOAs obtained from acoustic analysis deviate somewhat around those ideal paths.
Finally, the dataset contains a folder of spatial ambient noise recordings (`TAU-SNoise_DB`), with one subfolder per room holding two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 to 30 minutes. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to augment the ambience beyond the original recording time. Such a use case is demonstrated in the scene generator examples.
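A minimal sketch of such SNR-based mixing (an assumption of how one might do it, not the generator's actual code):

```python
import numpy as np

def mix_at_snr(signal, noise_segment, snr_db):
    """Scale an ambience segment so that the mix has the requested
    signal-to-noise ratio (in dB) relative to `signal`, then add it.
    Both inputs are arrays shaped [samples x channels]."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise_segment ** 2)
    gain = np.sqrt(p_sig / (p_noise * 10.0 ** (snr_db / 10.0)))
    return signal + gain * noise_segment
```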
## Download
The files `TAU-SRIR_DB.z01`, ..., `TAU-SRIR_DB.zip` contain the SRIRs and measurement info files.
The files `TAU-SNoise_DB.z01`, ..., `TAU-SNoise_DB.zip` contain the ambient noise recordings.
Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in Linux or OSX terminal:
Combine the split archive to a single archive:
```
zip -s 0 split.zip --out single.zip
```
Extract the single archive using unzip:
```
unzip single.zip
```
# License
The database is published under a custom **open non-commercial with attribution** license. It can be found in the `LICENSE.txt` file that accompanies the data.
|
BritishLibraryLabs/web_archive_classification | 2023-05-04T12:59:29.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
... | BritishLibraryLabs | The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. | TODO | null | 2 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: UK Selective Web Archive Classification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
tags:
- lam
---
# Dataset Card for UK Selective Web Archive Classification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archive's web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in .uk, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of UKWA's larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and, for each site, extracting a set of keywords that summarise the site via the full-text index. For more information: http://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Public Domain Mark 1.0.
### Citation Information
[Needs More Information] |
myvision/gender-classification | 2022-04-26T17:46:55.000Z | [
"region:us"
] | myvision | null | null | null | 0 | 3 | Entry not found |
aps/bioasq_task_b | 2022-04-27T19:27:39.000Z | [
"region:us"
] | aps | The data are intended to be used as training and development data for BioASQ
10, which will take place during 2022. There is one file containing the data:
- training10b.json
The file contains the data of the first nine editions of the challenge: 4234
questions [1] with their relevant documents, snippets, concepts and RDF
triples, exact and ideal answers.
Differences with BioASQ-training9b.json
- 492 new questions added from BioASQ9
- The question with id 56c1f01eef6e394741000046 had identical body with
602498cb1cb411341a00009e. All relevant elements from both questions
are available in the merged question with id 602498cb1cb411341a00009e.
- The question with id 5c7039207c78d69471000065 had identical body with
601c317a1cb411341a000014. All relevant elements from both questions
are available in the merged question with id 601c317a1cb411341a000014.
- The question with id 5e4b540b6d0a27794100001c had identical body with
602828b11cb411341a0000fc. All relevant elements from both questions
are available in the merged question with id 602828b11cb411341a0000fc.
- The question with id 5fdb42fba43ad31278000027 had identical body with
5d35eb01b3a638076300000f. All relevant elements from both questions
are available in the merged question with id 5d35eb01b3a638076300000f.
- The question with id 601d76311cb411341a000045 had identical body with
6060732b94d57fd87900003d. All relevant elements from both questions
are available in the merged question with id 6060732b94d57fd87900003d.
[1] 4234 questions : 1252 factoid, 1148 yesno, 1018 summary, 816 list | @article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
} | null | 0 | 3 | Entry not found |
janck/bigscience-lama | 2022-10-21T08:16:23.000Z | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:fact-checking-retrieval",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"probing",
"re... | janck | null | null | null | 0 | 3 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
trex:
- 1M<n<10M
task_categories:
- text-retrieval
- text-classification
task_ids:
- fact-checking-retrieval
- text-scoring
paperswithcode_id: lama
pretty_name: 'LAMA: LAnguage Model Analysis - BigScience version'
tags:
- probing
---
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
https://github.com/facebookresearch/LAMA
- **Repository:**
https://github.com/facebookresearch/LAMA
- **Paper:**
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
### Dataset Summary
This dataset provides the data for LAMA. This dataset only contains TRex
(subset of wikidata triples).
The dataset includes some cleanup, and addition of a masked sentence
and associated answers for the [MASK] token. The accuracy in
predicting the [MASK] token shows how well the language model knows
facts and common sense information. The [MASK] tokens are only for the
"object" slots.
This version also contains questions instead of templates, which can also be used to probe non-masking models.
See the paper for more details. For more information, also see:
https://github.com/facebookresearch/LAMA
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
```
{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}
```
The `trex` config contains 34039 instances.
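For illustration, an instance like the one above can be turned into a masked prompt and a question by filling the template slots (a sketch, not part of the official LAMA code):

```python
record = {
    'sub_label': 'Pianos Become the Teeth',
    'obj_label': 'Baltimore',
    'template': '[X] was founded in [Y] .',
    'question': 'Where was [X] founded?',
}

# Masked cloze prompt for masked language models:
masked = record['template'].replace('[X]', record['sub_label']).replace('[Y]', '[MASK]')
# -> 'Pianos Become the Teeth was founded in [MASK] .'

# Question form for non-masking (e.g. autoregressive) models:
question = record['question'].replace('[X]', record['sub_label'])
# -> 'Where was Pianos Become the Teeth founded?'
```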
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets, with cleanups for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but likely includes names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data is from human annotators, there are likely to be biases.
[More Information Needed]
### Other Known Limitations
The original documentation for the datafields are limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
|
NLPC-UOM/Writing-style-classification | 2022-10-25T10:12:46.000Z | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:si",
"license:mit",
"region:us"
] | NLPC-UOM | null | null | null | 0 | 3 | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- si
license:
- mit
multilinguality:
- monolingual
pretty_name: sinhala-writing-style-classification
size_categories: []
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
This file contains news texts (sentences) belonging to different writing styles. The original dataset created by {*Upeksha, D., Wijayarathna, C., Siriwardena, M.,
Lasandun, L., Wimalasuriya, C., de Silva, N., and Dias, G. (2015). Implementing a corpus for Sinhala language. 01*} is processed and cleaned.
If you use this dataset, please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} and the above mentioned paper. |
strombergnlp/shaj | 2022-06-14T14:03:37.000Z | [
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"arxiv:2107.13592",
"doi:10.57967/hf/0514",
"region:us"
] | strombergnlp | This is an abusive/offensive language detection dataset for Albanian. The data is formatted
following the OffensEval convention, with three tasks:
* Subtask A: Offensive (OFF) or not (NOT)
* Subtask B: Untargeted (UNT) or targeted insult (TIN)
* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
* The subtask A field should always be filled.
* The subtask B field should only be filled if there's "offensive" (OFF) in A.
* The subtask C field should only be filled if there's "targeted" (TIN) in B.
The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon"
See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details. | @article{nurce2021detecting,
title={Detecting Abusive Albanian},
author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
journal={arXiv preprint arXiv:2107.13592},
year={2021}
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- sq
- sq-AL
licenses:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text_classification
task_ids:
- hate-speech-detection
- text-classification-other-hate-speech-detection
paperswithcode_id: shaj
pretty_name: SHAJ
extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
---
# Dataset Card for "shaj"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1](https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1)
- **Paper:** [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:** 1.85 MiB
### Dataset Summary
This is an abusive/offensive language detection dataset for Albanian. The data is formatted
following the OffensEval convention, with three tasks:
* Subtask A: Offensive (OFF) or not (NOT)
* Subtask B: Untargeted (UNT) or targeted insult (TIN)
* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
Notes on the above:
* The subtask A field should always be filled.
* The subtask B field should only be filled if there's "offensive" (OFF) in A.
* The subtask C field should only be filled if there's "targeted" (TIN) in B.
The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon"
See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.
### Supported Tasks and Leaderboards
* Task A leaderboard at [paperswithcode.com/sota/hate-speech-detection-on-shaj](https://paperswithcode.com/sota/hate-speech-detection-on-shaj)
### Languages
Albanian (`bcp47:sq-AL`)
## Dataset Structure
### Data Instances
#### shaj
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:** 1.85 MiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'PLACEHOLDER TEXT',
'subtask_a': 1,
'subtask_b': 0,
'subtask_c': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string`.
- `subtask_a`: whether or not the instance is offensive; `0: OFF, 1: NOT`
- `subtask_b`: whether an offensive instance is a targeted insult; `0: TIN, 1: UNT, 2: not applicable`
- `subtask_c`: what a targeted insult is aimed at; `0: IND, 1: GRP, 2: OTH, 3: not applicable`
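As a small hypothetical helper mirroring the encodings above:

```python
# Hypothetical decoding maps, mirroring the field descriptions above.
SUBTASK_A = {0: "OFF", 1: "NOT"}
SUBTASK_B = {0: "TIN", 1: "UNT", 2: "not applicable"}
SUBTASK_C = {0: "IND", 1: "GRP", 2: "OTH", 3: "not applicable"}

def decode(example):
    """Map one shaj instance's integer labels to their string codes."""
    return (SUBTASK_A[example["subtask_a"]],
            SUBTASK_B[example["subtask_b"]],
            SUBTASK_C[example["subtask_c"]])
```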
### Data Splits
| name |train|
|---------|----:|
|shaj|11874 sentences|
## Dataset Creation
### Curation Rationale
Collecting data for enabling offensive speech detection in Albanian
### Source Data
#### Initial Data Collection and Normalization
The text is scraped from comments on popular Albanian YouTube and Instagram accounts.
An extended discussion is given in the paper in section 3.2.
#### Who are the source language producers?
People who comment on a selection of high-activity Albanian instagram and youtube profiles.
### Annotations
#### Annotation process
The annotation scheme was taken from OffensEval 2019 and applied by two native speaker authors of the paper as well as their friends and family.
#### Who are the annotators?
Albanian native speakers, male and female, aged 20-60.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@article{nurce2021detecting,
title={Detecting Abusive Albanian},
author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
journal={arXiv preprint arXiv:2107.13592},
year={2021}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
strombergnlp/dkstance | 2022-10-25T21:45:42.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | strombergnlp | This dataset presents a series of stories on Reddit and the conversation around
them, annotated for stance. Stories are also annotated for veracity.
For more details see https://aclanthology.org/W19-6122/ | @inproceedings{lillie-etal-2019-joint,
title = "Joint Rumour Stance and Veracity Prediction",
author = "Lillie, Anders Edelbo and
Middelboe, Emil Refsgaard and
Derczynski, Leon",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6122",
pages = "208--221",
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: dast
pretty_name: DAST
extra_gated_prompt: 'Warning: the data in this repository contains harmful content
(misinformative claims).'
tags:
- stance-detection
---
# Dataset Card for "dkstance / DAST"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/jointrumourstanceandveracity/](https://stromberg.ai/publication/jointrumourstanceandveracity/)
- **Repository:** [https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137](https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137)
- **Paper:** [https://aclanthology.org/W19-6122/](https://aclanthology.org/W19-6122/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.
The dataset is applicable for supervised stance classification and rumour veracity prediction.
### Supported Tasks and Leaderboards
* Stance prediction
### Languages
## Dataset Structure
### Data Instances
#### DAST / dkstance
- **Size of downloaded dataset files:** 4.72 MiB
- **Size of the generated dataset:** 3.69 MiB
- **Total amount of disk used:** 8.41 MiB
An example of 'train' looks as follows.
```
{
'id': '1',
'native_id': 'ebwjq5z',
'text': 'Med de udfordringer som daginstitutionerne har med normeringer, og økonomi i det hele taget, synes jeg det er en vanvittig beslutning at prioritere skattebetalt vegansk kost i daginstitutionerne. Brug dog pengene på noget mere personale, og lad folk selv betale for deres individuelle kostønsker.',
'parent_id': 'a6o3us',
'parent_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'parent_stance': 0,
'source_id': 'a6o3us',
'source_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'source_stance': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `native_id`: a `string` feature representing the native ID of the entry.
- `text`: a `string` of the comment text in which stance is annotated.
- `parent_id`: the `native_id` of this comment's parent.
- `parent_text`: a `string` of the parent comment's text.
- `parent_stance`: the label of the stance in the comment towards its parent comment.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
- `source_id`: the `native_id` of this comment's source / post.
- `source_text`: a `string` of the source / post text.
- `source_stance`: the label of the stance in the comment towards the original source post.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
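As an illustration (this mapping is not shipped with the dataset; it simply restates the tagset above), the stance fields of the example instance can be decoded to their SDQC names like so:

```python
# SDQC label indices shared by `parent_stance` and `source_stance`.
SDQC = {0: "Supporting", 1: "Denying", 2: "Querying", 3: "Commenting"}

# Stance fields of the example instance shown above.
example = {"parent_stance": 0, "source_stance": 0}

# Decode both stance fields to their string names.
decoded = {k: SDQC[v] for k, v in example.items()}
print(decoded)  # {'parent_stance': 'Supporting', 'source_stance': 'Supporting'}
```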
### Data Splits
| name |size|
|---------|----:|
|train|3122|
|validation|1066|
|test|1060|
These splits were specified after the original research was reported. They add an extra level of rigour, in that no source post's comment tree is spread over more than one partition.
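This partitioning constraint can be checked mechanically: a `source_id` must never appear in more than one split. A minimal, illustrative check (toy rows; field names follow the schema above):

```python
# Toy split contents; in practice these would be the loaded train/dev/test rows.
train = [{"source_id": "a6o3us"}, {"source_id": "b1xyz"}]
validation = [{"source_id": "c2abc"}]
test = [{"source_id": "d3def"}]

def source_ids(rows):
    # Collect the set of source-post IDs occurring in one split.
    return {r["source_id"] for r in rows}

# Comment trees stay whole iff the source-ID sets are pairwise disjoint.
splits = [source_ids(s) for s in (train, validation, test)]
ok = (splits[0].isdisjoint(splits[1])
      and splits[0].isdisjoint(splits[2])
      and splits[1].isdisjoint(splits[2]))
print(ok)  # True
```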
## Dataset Creation
### Curation Rationale
Comments around rumourous claims, collected to enable rumour and stance analysis in Danish.
### Source Data
#### Initial Data Collection and Normalization
The data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.
#### Who are the source language producers?
Danish-speaking Reddit users.
### Annotations
#### Annotation process
There was a multi-user annotation process, mediated through a purpose-built interface for annotating stance in Reddit threads.
#### Who are the annotators?
* Age: 20-30.
* Gender: male.
* Race/ethnicity: white northern European.
* Native language: Danish.
* Socioeconomic status: higher education student.
### Personal and Sensitive Information
The data was public at the time of collection. User names are not preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The source of the text has a strong demographic bias, being mostly young white men who are vocal about their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
An NLP data statement is included in the paper describing the work, [https://aclanthology.org/W19-6122.pdf](https://aclanthology.org/W19-6122.pdf)
### Citation Information
```
@inproceedings{lillie-etal-2019-joint,
title = "Joint Rumour Stance and Veracity Prediction",
author = "Lillie, Anders Edelbo and
Middelboe, Emil Refsgaard and
Derczynski, Leon",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6122",
pages = "208--221",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
strombergnlp/polstance | 2022-10-25T21:42:18.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | strombergnlp | Political stance in Danish. Examples represent statements by
politicians and are annotated for, against, or neutral to a given topic/article. | @inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-analysis
paperswithcode_id: polstance
pretty_name: Political Stance for Danish
tags:
- stance-detection
---
# Dataset Card for "polstance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/politicalstanceindanish/](https://stromberg.ai/publication/politicalstanceindanish/)
- **Repository:** [https://github.com/StrombergNLP/Political-Stance-in-Danish/](https://github.com/StrombergNLP/Political-Stance-in-Danish/)
- **Paper:** [https://aclanthology.org/W19-6121/](https://aclanthology.org/W19-6121/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 548 KB
- **Size of the generated dataset:** 222 KB
- **Total amount of disk used:** 770 KB
### Dataset Summary
Political stance in Danish. Examples represent statements by
politicians and are annotated for, against, or neutral to a given topic/article.
### Supported Tasks and Leaderboards
*
### Languages
Danish, bcp47: `da-DK`
## Dataset Structure
### Data Instances
#### polstance
An example of 'train' looks as follows.
```
{
'id': '0',
'topic': 'integration',
'quote': 'Der kunne jeg godt tænke mig, at der stod mere eksplicit, at de (landene, red.) skal bekæmpe menneskesmuglere og tage imod deres egne borgere',
'label': 2,
'quoteID': '516',
'party': 'Det Konservative Folkeparti',
'politician': 'Naser Khader',
}
```
### Data Fields
- `id`: a `string` feature.
- `topic`: a `string` expressing a topic.
- `quote`: a `string` to be classified for its stance to the topic.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "against",
1: "neutral",
2: "for",
```
- `quoteID`: a `string` of the internal quote ID.
- `party`: a `string` describing the party affiliation of the quote utterer at the time of utterance.
- `politician`: a `string` naming the politician who uttered the quote.
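When fine-tuning a classifier on this tagset, the two directions of the mapping are conventionally kept side by side; a sketch (the dataset itself only ships the integer labels):

```python
# Index-to-name mapping as given in the tagset above, plus its inverse.
id2label = {0: "against", 1: "neutral", 2: "for"}
label2id = {v: k for k, v in id2label.items()}

# The example instance above carries label 2, i.e. a quote "for" its topic.
print(id2label[2])  # for
```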
### Data Splits
| name |train|
|---------|----:|
|polstance|900 sentences|
## Dataset Creation
### Curation Rationale
Collection of quotes from politicians to allow detecting how political quotes orient to issues.
### Source Data
#### Initial Data Collection and Normalization
The data is taken from proceedings of the Danish parliament, the Folketing - [ft.dk](https://ft.dk).
#### Who are the source language producers?
Danish politicians.
### Annotations
#### Annotation process
Annotators labelled quotes as being against, neutral, or for a specified topic.
#### Who are the annotators?
Danish native speakers, 20s, male, studying Software Design.
### Personal and Sensitive Information
The data was public at the time of collection and will remain open public record by law in Denmark.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
strombergnlp/zulu_stance | 2022-10-25T21:46:14.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zu",
"license:cc-by-4.0",
"st... | strombergnlp | This is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.
Misinformation has become a major concern in recent years given its
spread across our information sources. In the past years, many NLP tasks have
been introduced in this area, with some systems reaching good results on
English language datasets. Existing AI based approaches for fighting
misinformation in literature suggest automatic stance detection as an integral
first step to success. Our paper aims at utilizing this progress made for
English to transfer that knowledge into other languages, which is a
non-trivial task due to the domain gap between English and the target
languages. We propose a black-box non-intrusive method that utilizes techniques
from Domain Adaptation to reduce the domain gap, without requiring any human
expertise in the target language, by leveraging low-quality data in both a
supervised and unsupervised manner. This allows us to rapidly achieve similar
results for stance detection for the Zulu language, the target language in
this work, as are found for English. We also provide a stance detection dataset
in the Zulu language. | @inproceedings{dlamini_zulu_stance,
title={Bridging the Domain Gap for Stance Detection for the Zulu language},
author={Dlamini, Gcinizwe and Bekkouch, Imad Eddine Ibrahim and Khan, Adil and Derczynski, Leon},
booktitle={Proceedings of IEEE IntelliSys},
year={2022}
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zu
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- sentiment-classification
paperswithcode_id: zulu-stance
pretty_name: ZUstance
tags:
- stance-detection
---
# Dataset Card for "zulu-stance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://arxiv.org/abs/2205.03153](https://arxiv.org/abs/2205.03153)
- **Repository:**
- **Paper:** [https://arxiv.org/pdf/2205.03153](https://arxiv.org/pdf/2205.03153)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30 KiB
### Dataset Summary
This is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.
Our paper aims at utilizing this progress made for English to transfer that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve similar results for stance detection for the Zulu language, the target language in this work, as are found for English. A natively-translated dataset is used for evaluation of domain transfer.
### Supported Tasks and Leaderboards
*
### Languages
Zulu (`bcp47:zu`)
## Dataset Structure
### Data Instances
#### zulu_stance
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30 KiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'ubukhulu be-islam buba sobala lapho i-smartphone ifaka i-ramayana njengo-ramadan. #semst',
'target': 'Atheism',
'stance': 1}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` expressing a stance.
- `target`: a `string` of the target/topic annotated here.
- `stance`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "FAVOR",
1: "AGAINST",
2: "NONE",
```
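Decoding the example instance above, `'stance': 1` therefore means the tweet is AGAINST its target. A one-line illustration (the mapping simply restates the tagset; it is not part of the dataset loader):

```python
# Tagset indices as documented above.
STANCE = {0: "FAVOR", 1: "AGAINST", 2: "NONE"}

# Target and stance of the example instance.
example = {"target": "Atheism", "stance": 1}
print(f"{example['target']}: {STANCE[example['stance']]}")  # Atheism: AGAINST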
### Data Splits
| name |train|
|---------|----:|
|zulu_stance|1343 sentences|
## Dataset Creation
### Curation Rationale
To enable stance detection in Zulu and also to measure domain transfer in translation
### Source Data
#### Initial Data Collection and Normalization
The original data is taken from [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/),
and then translated manually to Zulu.
#### Who are the source language producers?
English-speaking Twitter users.
### Annotations
#### Annotation process
See [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/); the annotations are taken from there.
#### Who are the annotators?
See [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/); the annotations are taken from there.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
While the data is in Zulu, the source text is not from or about Zulu-speakers, and so still expresses the social biases and topics found in English-speaking Twitter users. Further, some of the topics are USA-specific. The sentiments and ideas in this dataset do not represent Zulu speakers.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{dlamini_zulu_stance,
title={Bridging the Domain Gap for Stance Detection for the Zulu language},
author={Dlamini, Gcinizwe and Bekkouch, Imad Eddine Ibrahim and Khan, Adil and Derczynski, Leon},
booktitle={Proceedings of IEEE IntelliSys},
year={2022}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
strombergnlp/rustance | 2022-10-25T21:46:32.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ru",
"license:cc-by-4.0",
"stance... | strombergnlp | This is a stance prediction dataset in Russian. The dataset contains comments on news articles,
and rows are a comment, the title of the news article it responds to, and the stance of the comment
towards the article. | @inproceedings{lozhnikov2018stance,
title={Stance prediction for Russian: data and analysis},
author={Lozhnikov, Nikita and Derczynski, Leon and Mazzara, Manuel},
booktitle={International Conference in Software Engineering for Defence Applications},
pages={176--186},
year={2018},
organization={Springer}
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ru
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- sentiment-classification
paperswithcode_id: rustance
pretty_name: RuStance
tags:
- stance-detection
---
# Dataset Card for "rustance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://figshare.com/articles/dataset/dataset_csv/7151906](https://figshare.com/articles/dataset/dataset_csv/7151906)
- **Repository:** [https://github.com/StrombergNLP/rustance](https://github.com/StrombergNLP/rustance)
- **Paper:** [https://link.springer.com/chapter/10.1007/978-3-030-14687-0_16](https://link.springer.com/chapter/10.1007/978-3-030-14687-0_16), [https://arxiv.org/abs/1809.01574](https://arxiv.org/abs/1809.01574)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 349.79 KiB
- **Size of the generated dataset:** 366.11 KiB
- **Total amount of disk used:** 715.90 KiB
### Dataset Summary
This is a stance prediction dataset in Russian. The dataset contains comments on news articles,
and rows are a comment, the title of the news article it responds to, and the stance of the comment
towards the article.
Stance detection is a critical component of rumour and fake news identification. It involves the extraction of the stance a particular author takes related to a given claim, both expressed in text. This paper investigates stance classification for Russian. It introduces a new dataset, RuStance, of Russian tweets and news comments from multiple sources, covering multiple stories, as well as text classification approaches to stance detection as benchmarks over this data in this language. As well as presenting this openly-available dataset, the first of its kind for Russian, the paper presents a baseline for stance prediction in the language.
### Supported Tasks and Leaderboards
* Stance Detection: [Stance Detection on RuStance](https://paperswithcode.com/sota/stance-detection-on-rustance)
### Languages
Russian, as spoken on the Meduza website (i.e. from multiple countries) (`bcp47:ru`)
## Dataset Structure
### Data Instances
#### rustance
- **Size of downloaded dataset files:** 349.79 KiB
- **Size of the generated dataset:** 366.11 KiB
- **Total amount of disk used:** 715.90 KiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'Волки, волки!!',
'title': 'Минобороны обвинило «гражданского сотрудника» в публикации скриншота из игры вместо фото террористов. И показало новое «неоспоримое подтверждение»',
'stance': 3
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` expressing a stance.
- `title`: a `string` of the target/topic annotated here.
- `stance`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "support",
1: "deny",
2: "query",
3: "comment",
```
### Data Splits
| name |train|
|---------|----:|
|rustance|958 sentences|
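Since only a single `train` split is distributed, users typically carve out their own held-out set. Below is a hedged sketch using only the standard library, stratifying by the four stance classes so that the class balance is preserved in both partitions (the rows and class ratios here are toy illustrations, not the real data):

```python
import random
from collections import defaultdict

random.seed(0)

# Toy rows; real rows carry `text`, `title`, and `stance` as documented above.
rows = [{"id": str(i), "stance": i % 4} for i in range(100)]

# Group rows by stance class.
by_class = defaultdict(list)
for row in rows:
    by_class[row["stance"]].append(row)

# 80/20 split within each class keeps the class ratios identical across splits.
train, test = [], []
for cls_rows in by_class.values():
    random.shuffle(cls_rows)
    cut = int(0.8 * len(cls_rows))
    train.extend(cls_rows[:cut])
    test.extend(cls_rows[cut:])

print(len(train), len(test))  # 80 20
```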
## Dataset Creation
### Curation Rationale
Toy data for training and especially evaluating stance prediction in Russian
### Source Data
#### Initial Data Collection and Normalization
The data is comments scraped from a Russian news site not situated in Russia, [Meduza](https://meduza.io/), in 2018.
#### Who are the source language producers?
Russian speakers, including members of the Russian diaspora, especially in Latvia.
### Annotations
#### Annotation process
Annotators labelled comments for supporting, denying, querying or just commenting on a news article.
#### Who are the annotators?
Russian native speakers, IT education, male, 20s.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of misinformative content being in this data. The data has NOT been vetted for any content.
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lozhnikov2018stance,
  title={Stance prediction for Russian: data and analysis},
author={Lozhnikov, Nikita and Derczynski, Leon and Mazzara, Manuel},
booktitle={International Conference in Software Engineering for Defence Applications},
pages={176--186},
year={2018},
organization={Springer}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
RuiqianLi/Li_singlish | 2022-05-23T05:34:24.000Z | [
"license:apache-2.0",
"region:us"
] | RuiqianLi | This is a public domain speech dataset consisting of 3579 short audio clips of singlish | @misc{RuiqianLi,
author = {Ruiqian LI},
title = {The Singlish Speech Dataset},
year = 2022
} | null | 0 | 3 | ---
license: apache-2.0
---
Training dataset:
```
Dataset({
    features: ['id', 'audio', 'file', 'text'],
    num_rows: 2700
})
```
An example of 'train' looks as follows.
```
{'id': '0',
 'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/73016598ed29609d09a2c3c087d4e70e73dc549331efa2117aa6ec012d1ace35/singlish/train/0.wav',
           'array': array([-9.1552734e-05,  2.7465820e-04,  8.2397461e-04, ...,
                           -1.3732910e-03, -3.9672852e-04, -7.6293945e-04], dtype=float32),
           'sampling_rate': 16000},
 'text': 'a group of boys then challenged him to climb over the railing and stand on the parapet below',
 'file': '/root/.cache/huggingface/datasets/downloads/extracted/73016598ed29609d09a2c3c087d4e70e73dc549331efa2117aa6ec012d1ace35/singlish/train/0.wav'}
```
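Since each `audio` entry carries both the raw waveform array and its sampling rate, clip duration can be computed directly. A minimal sketch, with a synthetic array standing in for a real waveform:

```python
# Stand-in for one `audio` entry: 16 000 samples at 16 kHz = 1 second of audio.
audio = {"array": [0.0] * 16000, "sampling_rate": 16000}

# Duration in seconds = number of samples / samples per second.
duration_s = len(audio["array"]) / audio["sampling_rate"]
print(duration_s)  # 1.0
```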
<class 'datasets.arrow_dataset.Dataset'> |
strombergnlp/rumoureval_2019 | 2022-10-25T21:43:58.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"stance-detection",
"arxiv:1809.06683",
"region:us"
] | strombergnlp |
Stance prediction task in English. The goal is to predict whether a given reply to a claim either supports, denies, questions, or simply comments on the claim. Ran as a SemEval task in 2019. | @inproceedings{gorrell-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours",
author = "Gorrell, Genevieve and
Kochkina, Elena and
Liakata, Maria and
Aker, Ahmet and
Zubiaga, Arkaitz and
Bontcheva, Kalina and
Derczynski, Leon",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S19-2147",
doi = "10.18653/v1/S19-2147",
pages = "845--854",
} | null | 2 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: RumourEval 2019
tags:
- stance-detection
---
# Dataset Card for "rumoureval_2019"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://competitions.codalab.org/competitions/19938](https://competitions.codalab.org/competitions/19938)
- **Repository:** [https://figshare.com/articles/dataset/RumourEval_2019_data/8845580](https://figshare.com/articles/dataset/RumourEval_2019_data/8845580)
- **Paper:** [https://aclanthology.org/S19-2147/](https://aclanthology.org/S19-2147/), [https://arxiv.org/abs/1809.06683](https://arxiv.org/abs/1809.06683)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
Stance prediction task in English. The goal is to predict whether a given reply to a claim either supports, denies, questions, or simply comments on the claim. Ran as a SemEval task in 2019.
### Supported Tasks and Leaderboards
* SemEval 2019 task 7
### Languages
English of various origins, bcp47: `en`
## Dataset Structure
### Data Instances
#### rumoureval_2019
An example of 'train' looks as follows.
```
{
'id': '0',
'source_text': 'Appalled by the attack on Charlie Hebdo in Paris, 10 - probably journalists - now confirmed dead. An attack on free speech everywhere.',
'reply_text': '@m33ryg @tnewtondunn @mehdirhasan Of course it is free speech, that\'s the definition of "free speech" to openly make comments or draw a pic!',
'label': 3
}
```
### Data Fields
- `id`: a `string` feature.
- `source_text`: a `string` expressing a claim/topic.
- `reply_text`: a `string` to be classified for its stance to the source.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "support",
1: "deny",
2: "query",
3: "comment"
```
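System predictions for this task are integer class indices, while scoring typically happens over the string names; a decode step like the following is common (illustrative only, not the official scorer):

```python
# Class names indexed by the integer labels documented above.
LABELS = ["support", "deny", "query", "comment"]

# e.g. model output over three replies.
predictions = [3, 0, 2]
decoded = [LABELS[p] for p in predictions]
print(decoded)  # ['comment', 'support', 'query']
```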
### Data Splits
| name |instances|
|---------|----:|
|train|7 005|
|dev|2 425|
|test|2 945|
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Twitter users
### Annotations
#### Annotation process
Detailed in [Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads](https://journals.plos.org/plosone/article/authors?id=10.1371/journal.pone.0150989)
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{gorrell-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours",
author = "Gorrell, Genevieve and
Kochkina, Elena and
Liakata, Maria and
Aker, Ahmet and
Zubiaga, Arkaitz and
Bontcheva, Kalina and
Derczynski, Leon",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S19-2147",
doi = "10.18653/v1/S19-2147",
pages = "845--854",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
morteza/cogtext | 2023-06-09T08:52:00.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:semantic-similarity-classification",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc... | morteza | CogText dataset contains a collection of PubMed abstracts, along with their GPT-3 embeddings and topic embeddings. | @misc{cogtext2022,
author = {Morteza Ansarinia and
Paul Schrater and
Pedro Cardoso-Leite},
title = {Linking Theories and Methods in Cognitive Sciences via Joint Embedding of the Scientific Literature: The Example of Cognitive Control},
year = {2022},
url = {https://arxiv.org/abs/2203.11016}
} | null | 0 | 3 | ---
pretty_name: CogText PubMed Abstracts
license:
- cc-by-4.0
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- topic-classification
- semantic-similarity-classification
size_categories:
- 100K<n<1M
paperswithcode_id: linking-theories-and-methods-in-cognitive
inference: false
model-index:
- name: cogtext-pubmed
results: []
source_datasets:
- original
language_creators:
- found
- expert-generated
configs:
- config_name: pubmed
- config_name: pubmed20pct
- config_name: lexicon
- config_name: pubmed_gp3ada
tags:
- Cognitive Control
- PubMed
---
# Dataset Card for CogText PubMed Abstracts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The **CogText** dataset contains a collection of PubMed abstracts, along with their GPT-3 embeddings and topic embeddings. See [CogText on GitHub](https://github.com/morteza/cogtext) for details and code.
- **Homepage:** https://github.com/morteza/cogtext
- **Repository:** https://github.com/morteza/cogtext
- **Point of Contact:** [Morteza Ansarinia](mailto:ansarinia@me.com)
- **Paper:** https://arxiv.org/abs/2203.11016
### Dataset Summary
The dataset consists of 385,705 unique scientific articles retrieved from PubMed in December 2021. Each item includes the title, abstract, some metadata, and embeddings generated by both GPT-3 and Top2Vec. The articles were selected for their relevance to cognitive control constructs or related tasks.
### Supported Tasks and Leaderboards
Topic Modeling, Text Embedding
### Languages
English
## Dataset Structure
### Data Instances
522,972 scientific articles, of which 385,705 are unique.
### Data Fields
The CSV files contain the following fields:
| Field | Description |
| ----- | ----------- |
| `index` | (int) Index of the article in the current dataset |
| `pmid` | (int) PubMed ID |
| `doi` | (str) Digital Object Identifier |
| `year` | (int) Year of publication (yyyy format)|
| `journal_title` | (str) Title of the journal |
| `journal_iso_abbreviation` | (str) ISO abbreviation of the journal |
| `title` | (str) Title of the article |
| `abstract` | (str) Abstract of the article |
| `category` | (enum) Category of the article, either "CognitiveTask" or "CognitiveConstruct" |
| `label` | (enum) Label of the article, which refers to the class labels in the `ontologies/efo.owl` ontology |
| `original_index` | (int) Index of the article in the full dataset (see `pubmed/abstracts.csv.gz`) |
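To make the field types above concrete, a single row can be pictured as a plain record like the one below. All values are hypothetical placeholders, invented only to illustrate the schema; they are not taken from the dataset.

```python
# Hypothetical CogText record illustrating the documented CSV fields.
# Every value below is a placeholder, not actual dataset content.
record = {
    "index": 0,
    "pmid": 12345678,
    "doi": "10.1000/example.doi",
    "year": 2021,
    "journal_title": "Journal of Cognitive Examples",
    "journal_iso_abbreviation": "J Cogn Ex",
    "title": "An example abstract about cognitive control",
    "abstract": "We study task switching in a hypothetical sample...",
    "category": "CognitiveTask",   # enum: "CognitiveTask" or "CognitiveConstruct"
    "label": "ExampleLabel",       # a class label from ontologies/efo.owl
    "original_index": 0,
}

# The `category` field admits exactly two values.
assert record["category"] in {"CognitiveTask", "CognitiveConstruct"}
```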
### Data Splits
| Dataset | Description |
| ------- | ----------- |
| `pubmed/abstracts.csv.gz` | Full dataset |
| `pubmed/abstracts20pct.csv.gz` | 20% of the dataset (stratified random sample by `label`) |
| `gpt3/abstracts_gp3ada.nc` | GPT-3 embeddings of the entire dataset in XArray/CDF4 format, indexed by `pmid` |
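The 20% subset is described as a stratified random sample by `label`. Under that description, the sampling step can be sketched with the standard library as follows; the toy rows and the helper function are illustrative only, not the pipeline actually used to build `abstracts20pct.csv.gz`.

```python
import random
from collections import defaultdict

def stratified_sample(rows, key, fraction, seed=42):
    """Sample `fraction` of rows within each stratum defined by `key`."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for row in rows:
        strata[row[key]].append(row)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, k))
    return sample

# Toy rows standing in for pubmed/abstracts.csv.gz records.
rows = [{"pmid": i, "label": "A" if i % 2 else "B"} for i in range(100)]
subset = stratified_sample(rows, key="label", fraction=0.2)
# 20% of each of the two 50-row strata -> 10 + 10 = 20 sampled rows.
```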
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Acknowledgments
This research was supported by the Luxembourg National Research Fund (ATTRACT/2016/ID/11242114/DIGILEARN and INTER Mobility/2017-2/ID/11765868/ULALA).
### Citation Information
To cite the paper, use the following entry:
```
@misc{cogtext2022,
author = {Morteza Ansarinia and
Paul Schrater and
Pedro Cardoso-Leite},
title = {Linking Theories and Methods in Cognitive Sciences via Joint Embedding of the Scientific Literature: The Example of Cognitive Control},
year = {2022},
url = {https://arxiv.org/abs/2203.11016}
}
``` |
Iyanuoluwa/YOSM | 2023-01-10T06:28:01.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:yo",
"license:unknown",
"movie reviews",
"nollywood",
"arxi... | Iyanuoluwa | YOSM: A NEW YORUBA SENTIMENT CORPUS FOR MOVIE REVIEWS
- Yoruba | @inproceedings{
shode2022yosm,
title={{YOSM}: A {NEW} {YORUBA} {SENTIMENT} {CORPUS} {FOR} {MOVIE} {REVIEWS}},
author={Iyanuoluwa Shode and David Ifeoluwa Adelani and Anna Feldman},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=rRzx5qzVIb9}
} | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- yo
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- movie reviews
- nollywood
task_categories:
- text-classification
task_ids:
- sentiment-analysis
---
# Dataset Card for YOSM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [Iyanuoluwa/YOSM](https://github.com/IyanuSh/YOSM)
- **Paper:** [A new Yorùbá Sentiment Corpus for Nigerian/Nollywood Movie Reviews](https://arxiv.org/pdf/2204.09711.pdf)
- **Point of Contact:** [Iyanuoluwa Shode](mailto:shodei1@montclair.edu)
### Dataset Summary
YOSM is the first Yorùbá sentiment corpus for Nollywood movie reviews. The reviews were collected from movie review websites: IMDB, Rotten Tomatoes, LetterboxD, Cinemapointer, and Nollyrated.
### Languages
Yorùbá (ISO 639-1: yo) - the third most spoken indigenous African language with over 50 million speakers.
## Dataset Structure
### Data Instances
An instance consists of a movie review and the corresponding class label.
### Data Fields
- `yo_review`: A movie review in Yorùbá
- `sentiment`: The label describing the sentiment of the movie review.
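A single instance can therefore be pictured as a two-field record. The review text and label value below are placeholders chosen for illustration, not actual entries from the corpus.

```python
# Hypothetical YOSM instance: a Yorùbá movie review plus its sentiment label.
# Both values are invented placeholders, not real dataset content.
instance = {
    "yo_review": "Fíìmù yìí dára púpọ̀!",  # placeholder Yorùbá review text
    "sentiment": "positive",               # label value shown for illustration
}

# Each instance carries exactly these two fields.
assert set(instance) == {"yo_review", "sentiment"}
```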
### Data Splits
The YOSM dataset has 3 splits: _train_, _dev_, and _test_. Below are the statistics for Version 3.0.0 of the dataset.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 800 |
| Development | 200 |
| Test | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
|
bigscience-data/roots_ar_arabench | 2022-12-12T10:59:54.000Z | [
"language:ar",
"license:apache-2.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: ar
license: apache-2.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_arabench
# arabench
- Dataset uid: `arabench`
### Description
AraBench is an evaluation suite for dialectal Arabic to English machine translation. It offers 4 coarse-grained, 15 fine-grained, and 25 city-level dialect categories, covering diverse genres such as media, chat, religion, and travel, with varying levels of dialectness.
### Homepage
https://alt.qcri.org/resources1/mt/arabench/
### Licensing
- open license
- cc-by-4.0: Creative Commons Attribution 4.0 International
### Speaker Locations
- Northern Africa
- Western Asia
- Algeria
- Egypt
- Morocco
- Jordan
- Sudan
- Tunisia
- Lebanon
- Libya
- Iraq
- Qatar
- Yemen
- Oman
- Saudi Arabia
- Syria
- Palestine
### Sizes
- 0.0018 % of total
- 0.0165 % of ar
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_ar_labr | 2022-12-12T10:59:59.000Z | [
"language:ar",
"license:gpl-2.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: ar
license: gpl-2.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_labr
# labr
- Dataset uid: `labr`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0076 % of total
- 0.0701 % of ar
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_es_ted_talks_iwslt | 2022-12-12T11:03:33.000Z | [
"language:es",
"license:cc-by-nc-nd-4.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: es
license: cc-by-nc-nd-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_es_ted_talks_iwslt
# WIT Ted Talks
- Dataset uid: `ted_talks_iwslt`
### Description
The Web Inventory of Transcribed and Translated Talks (WIT3) is a collection of original TED talks and their translated versions. Translations are available in more than 109 languages, though the distribution across languages is not uniform.
### Homepage
https://github.com/huggingface/datasets/blob/master/datasets/ted_talks_iwslt/README.md
### Licensing
- open license
- cc-by-nc-nd-4.0: Creative Commons Attribution Non Commercial No Derivatives 4.0 International
TED makes its collection of video recordings and transcripts of talks available under the Creative Commons BY-NC-ND license. WIT3 acknowledges the authorship of TED talks (BY condition) and does not redistribute transcripts for commercial purposes (NC). As regards the integrity of the work (ND), WIT3 only changes the format of the container while preserving the original contents. WIT3 aims to support research on human language processing as well as the diffusion of TED Talks!
### Speaker Locations
- Southern Europe
- Italy
### Sizes
- 0.0305 % of total
- 0.0736 % of ar
- 0.2002 % of pt
- 0.0128 % of zh
- 0.2236 % of vi
- 0.0330 % of fr
- 0.0545 % of es
- 0.0122 % of en
- 0.3704 % of id
- 0.0373 % of indic-hi
- 0.0330 % of indic-ta
- 0.1393 % of indic-mr
- 0.0305 % of ca
- 0.1179 % of indic-ur
- 0.0147 % of indic-bn
- 0.0240 % of indic-ml
- 0.0244 % of indic-te
- 0.0503 % of indic-gu
- 0.0211 % of indic-kn
- 0.0274 % of eu
- 0.0023 % of indic-as
- 0.0001 % of indic-pa
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ca
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ur
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-as
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-pa
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_es_wikibooks | 2022-12-12T11:03:39.000Z | [
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: es
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_es_wikibooks
# wikibooks_filtered
- Dataset uid: `wikibooks_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0897 % of total
- 0.2591 % of en
- 0.0965 % of fr
- 0.1691 % of es
- 0.2834 % of indic-hi
- 0.2172 % of pt
- 0.0149 % of zh
- 0.0279 % of ar
- 0.1374 % of vi
- 0.5025 % of id
- 0.3694 % of indic-ur
- 0.5744 % of eu
- 0.0769 % of ca
- 0.0519 % of indic-ta
- 0.1470 % of indic-mr
- 0.0751 % of indic-te
- 0.0156 % of indic-bn
- 0.0476 % of indic-ml
- 0.0087 % of indic-pa
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_vi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_eu
- dedup_template_soft
- replace_newline_with_space
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-mr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-te
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-bn
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ml
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-pa
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
|
bigscience-data/roots_es_the_pile_europarl | 2022-12-12T11:03:44.000Z | [
"language:es",
"license:mit",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: es
license: mit
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_es_the_pile_europarl
# the_pile_europarl
- Dataset uid: `the_pile_europarl`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.1278 % of total
- 0.4112 % of fr
- 1.5555 % of pt
- 0.7511 % of es
- 0.1503 % of en
### BigScience processing steps
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
|
bigscience-data/roots_es_wikinews | 2022-12-12T11:03:49.000Z | [
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: es
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_es_wikinews
# wikinews_filtered
- Dataset uid: `wikinews_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0307 % of total
- 0.0701 % of ar
- 0.3036 % of pt
- 0.0271 % of en
- 0.0405 % of fr
- 0.2119 % of indic-ta
- 0.0081 % of zh
- 0.0510 % of es
- 0.0725 % of ca
### BigScience processing steps
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
|
bigscience-data/roots_es_wikisource | 2022-12-12T11:04:20.000Z | [
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: es
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_es_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
|
bigscience-data/roots_id_indonesian_news_corpus | 2022-12-12T11:05:28.000Z | [
"language:id",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | null | 1 | 3 | ---
language: id
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indonesian_news_corpus
# Indonesian News Corpus
- Dataset uid: `indonesian_news_corpus`
### Description
News articles crawled in 2015 from:
- kompas.com
- tempo.co
- merdeka.com
- republika.co.id
- viva.co.id
- tribunnews.com
### Homepage
https://data.mendeley.com/datasets/2zpbjs22k3/1
### Licensing
- open license
- cc-by-4.0: Creative Commons Attribution 4.0 International
### Speaker Locations
- South-eastern Asia
- Indonesia
### Sizes
- 0.0172 % of total
- 6.5603 % of id
### BigScience processing steps
#### Filters applied to: id
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_id_indosum | 2022-12-12T11:05:45.000Z | [
"language:id",
"license:apache-2.0",
"region:us"
] | bigscience-data | null | null | null | 1 | 3 | ---
language: id
license: apache-2.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indosum
# Indosum
- Dataset uid: `indosum`
### Description
IndoSum: A New Benchmark Dataset for Indonesian Text Summarization
### Homepage
https://github.com/kata-ai/indosum
### Licensing
- apache-2.0: Apache License 2.0
Apache License, Version 2.0 Apache License Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and
configuration files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object
code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form,
made available under the License, as indicated by a copyright notice that is
included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or Legal
Entity authorized to submit on behalf of the copyright owner. For the purposes
of this definition, "submitted" means any form of electronic, verbal, or
written communication sent to the Licensor or its representatives, including
but not limited to communication on electronic mailing lists, source code
control systems, and issue tracking systems that are managed by, or on behalf
of, the Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise designated in
writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and
such Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and You must cause any modified files to carry prominent notices
stating that You changed the files; and You must retain, in the Source form of
any Derivative Works that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work, excluding those notices
that do not pertain to any part of the Derivative Works; and If the Work
includes a "NOTICE" text file as part of its distribution, then any Derivative
Works that You distribute must include a readable copy of the attribution
notices contained within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one of the following
places: within a NOTICE text file distributed as part of the Derivative Works;
within the Source form or documentation, if provided along with the Derivative
Works; or, within a display generated by the Derivative Works, if and wherever
such third-party notices normally appear. The contents of the NOTICE file are
for informational purposes only and do not modify the License. You may add Your
own attribution notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided that such
additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a
whole, provided Your use, reproduction, and distribution of the Work otherwise
complies with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms
of any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides
the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License
or out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction,
or any and all other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License.
However, in accepting such obligations, You may act only on Your own behalf and
on Your sole responsibility, not on behalf of any other Contributor, and only
if You agree to indemnify, defend, and hold each Contributor harmless for any
liability incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification
within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
### Speaker Locations
- South-eastern Asia
- Indonesia
### Sizes
- 0.0035 % of total
- 1.3157 % of id
### BigScience processing steps
#### Filters applied to: id
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_vi_vietnamese_poetry | 2022-12-12T11:16:41.000Z | [
"language:vi",
"license:mit",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: vi
license: mit
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_vi_vietnamese_poetry
# Vietnamese poetry from fsoft AI lab
- Dataset uid: `vietnamese_poetry`
### Description
171,188 poems in different genres: luc-bat, 5-chu, 7-chu, 8-chu, and 4-chu
### Homepage
https://github.com/fsoft-ailab/Poem-Generator#dataset
### Licensing
- open license
- mit: MIT License
### Speaker Locations
- South-eastern Asia
- Vietnam
### Sizes
- 0.0127 % of total
- 0.9285 % of vi
### BigScience processing steps
#### Filters applied to: vi
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_vi_wikipedia | 2022-12-12T11:16:52.000Z | [
"language:vi",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 3 | ---
language: vi
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_vi_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
|
bigscience-data/roots_zh_ted_talks_iwslt | 2022-12-12T11:17:13.000Z | [
"language:zh",
"license:cc-by-nc-nd-4.0",
"region:us"
] | bigscience-data | null | null | null | 1 | 3 | ---
language: zh
license: cc-by-nc-nd-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh_ted_talks_iwslt
# WIT Ted Talks
- Dataset uid: `ted_talks_iwslt`
### Description
The Web Inventory Talk is a collection of the original TED talks and their translated versions. The translations are available in more than 109 languages, though the distribution is not uniform.
### Homepage
https://github.com/huggingface/datasets/blob/master/datasets/ted_talks_iwslt/README.md
### Licensing
- open license
- cc-by-nc-4.0: Creative Commons Attribution Non Commercial 4.0 International
TED makes its collection of video recordings and transcripts of talks available under the Creative Commons BY-NC-ND license (look here). WIT3 acknowledges the authorship of TED talks (BY condition) and does not redistribute transcripts for commercial purposes (NC). As regards the integrity of the work (ND), WIT3 only changes the format of the container, while preserving the original contents. WIT3 aims to support research on human language processing as well as the diffusion of TED Talks!
### Speaker Locations
- Southern Europe
- Italy
### Sizes
- 0.0305 % of total
- 0.0736 % of ar
- 0.2002 % of pt
- 0.0128 % of zh
- 0.2236 % of vi
- 0.0330 % of fr
- 0.0545 % of es
- 0.0122 % of en
- 0.3704 % of id
- 0.0373 % of indic-hi
- 0.0330 % of indic-ta
- 0.1393 % of indic-mr
- 0.0305 % of ca
- 0.1179 % of indic-ur
- 0.0147 % of indic-bn
- 0.0240 % of indic-ml
- 0.0244 % of indic-te
- 0.0503 % of indic-gu
- 0.0211 % of indic-kn
- 0.0274 % of eu
- 0.0023 % of indic-as
- 0.0001 % of indic-pa
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ca
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ur
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-as
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-pa
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
bigscience-data/roots_zh_wikiversity | 2022-12-12T11:17:25.000Z | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 2 | 3 | ---
language: zh
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh_wikiversity
# wikiversity_filtered
- Dataset uid: `wikiversity_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0367 % of total
- 0.1050 % of en
- 0.1178 % of fr
- 0.1231 % of pt
- 0.0072 % of zh
- 0.0393 % of es
- 0.0076 % of ar
- 0.0069 % of indic-hi
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
|
bigscience-data/roots_zh_wikinews | 2022-12-12T11:17:30.000Z | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 2 | 3 | ---
language: zh
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh_wikinews
# wikinews_filtered
- Dataset uid: `wikinews_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0307 % of total
- 0.0701 % of ar
- 0.3036 % of pt
- 0.0271 % of en
- 0.0405 % of fr
- 0.2119 % of indic-ta
- 0.0081 % of zh
- 0.0510 % of es
- 0.0725 % of ca
### BigScience processing steps
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
|
bigscience-data/roots_zh_wikiquote | 2022-12-12T11:17:35.000Z | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 1 | 3 | ---
language: zh
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh_wikiquote
# wikiquote_filtered
- Dataset uid: `wikiquote_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0462 % of total
- 0.1697 % of en
- 0.0326 % of fr
- 0.0216 % of ar
- 0.0066 % of zh
- 0.0833 % of pt
- 0.0357 % of es
- 0.0783 % of indic-ta
- 0.0361 % of indic-hi
- 0.0518 % of ca
- 0.0405 % of vi
- 0.0834 % of indic-ml
- 0.0542 % of indic-te
- 0.1172 % of indic-gu
- 0.0634 % of indic-kn
- 0.0539 % of id
- 0.0454 % of indic-ur
- 0.0337 % of indic-mr
- 0.0347 % of eu
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- filter_small_docs_bytes_300
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_vi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ml
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-te
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-gu
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-kn
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-mr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_eu
- dedup_template_soft
- replace_newline_with_space
|
strombergnlp/nlpcc-stance | 2022-10-25T21:47:26.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | strombergnlp | This is a stance prediction dataset in Chinese.
The data is that from a shared task, stance detection in Chinese microblogs, in NLPCC-ICCPOL 2016. It covers Task A, a mandatory supervised task which detects stance towards five targets of interest with given labeled data. | @incollection{xu2016overview,
title={Overview of nlpcc shared task 4: Stance detection in chinese microblogs},
author={Xu, Ruifeng and Zhou, Yu and Wu, Dongyin and Gui, Lin and Du, Jiachen and Xue, Yun},
booktitle={Natural language understanding and intelligent applications},
pages={907--916},
year={2016},
publisher={Springer}
} | null | 4 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-analysis
pretty_name: NLPCC Stance
tags:
- stance-detection
---
# Dataset Card for "NLPCC 2016: Stance Detection in Chinese Microblogs"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html](http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html)
- **Repository:**
- **Paper:** [https://link.springer.com/chapter/10.1007/978-3-319-50496-4_85](https://link.springer.com/chapter/10.1007/978-3-319-50496-4_85)
- **Point of Contact:** [Mads Kongsback](https://github.com/mkonxd)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This is a stance prediction dataset in Chinese.
The data is that from a shared task, stance detection in Chinese microblogs, in NLPCC-ICCPOL 2016. It covers Task A, a mandatory supervised task which detects stance towards five targets of interest with given labeled data.
Some instances of the dataset have been removed because they lacked labels.
### Supported Tasks and Leaderboards
* Stance Detection in Chinese Microblogs
### Languages
Chinese, as spoken on the Weibo website (`bcp47:zh`)
## Dataset Structure
### Data Instances
Example instance:
```
{
'id': '0',
'target': 'IphoneSE',
'text': '3月31日,苹果iPhone SE正式开卖,然而这款小屏新机并未出现人们预想的疯抢局面。根据市场分析机构Localytics周一公布的数据,iPhone SE正式上市的这个周末,销量成绩并不算太好。',
'stance': 2
}
```
### Data Fields
* id: a `string` field with a unique id for the instance
* target: a `string` representing the target of the stance
* text: a `string` of the stance-bearing text
* stance: an `int` representing class label -- `0`: AGAINST; `1`: FAVOR; `2`: NONE.
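The integer `stance` labels can be decoded with a small mapping. A minimal sketch (the mapping follows the field description above; the record is the one shown under Data Instances, with the text shortened):

```python
# Map integer stance labels to their string names, per the field description above.
STANCE_LABELS = {0: "AGAINST", 1: "FAVOR", 2: "NONE"}

example = {
    "id": "0",
    "target": "IphoneSE",
    "text": "3月31日,苹果iPhone SE正式开卖……",
    "stance": 2,
}
print(STANCE_LABELS[example["stance"]])  # NONE
```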
### Data Splits
The training split has 2,986 instances.
## Dataset Creation
### Curation Rationale
The goal was to create a dataset of microblog text annotated for stance. Six stance targets were selected and data was collected from Sina Weibo for annotation.
### Source Data
#### Initial Data Collection and Normalization
Not specified
#### Who are the source language producers?
Sina Weibo users
### Annotations
#### Annotation process
The stance of each target-microblog pair was annotated independently by two students. If the two annotations agreed, that label was assigned to the pair. If they disagreed, a third student was assigned to annotate the pair, and the final label was decided by majority vote.
#### Who are the annotators?
Students in China
### Personal and Sensitive Information
Not discussed by the dataset authors.
## Considerations for Using the Data
### Social Impact of Dataset
The data preserves social media utterances verbatim, so users' right to be forgotten cannot be honored, though usernames and post IDs are not explicitly included in the data.
### Discussion of Biases
There will be at least a temporal and regional bias to this data, and it only represents expressions of stance on six topics.
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under the Creative Commons Attribution license, CC BY 4.0.
### Citation Information
```
@incollection{xu2016overview,
title={Overview of nlpcc shared task 4: Stance detection in chinese microblogs},
author={Xu, Ruifeng and Zhou, Yu and Wu, Dongyin and Gui, Lin and Du, Jiachen and Xue, Yun},
booktitle={Natural language understanding and intelligent applications},
pages={907--916},
year={2016},
publisher={Springer}
}
```
### Contributions
Added by [@mkonxd](https://github.com/mkonxd), [@leondz](https://github.com/leondz)
|
NLPC-UOM/Sinhala-English-Code-Mixed-Code-Switched-Dataset | 2022-09-22T14:15:53.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:hate-speech-detection",
"task_ids:language-identification",
"multilinguality:multilingual",
"language:si",
"language:en",
"license:mit",
"region:us"
] | NLPC-UOM | null | null | null | 0 | 3 | ---
annotations_creators: []
language_creators: []
language:
- si
- en
license:
- mit
multilinguality:
- multilingual
size_categories: []
source_datasets: []
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- hate-speech-detection
- humor-detection
- language-identification
- aspect-identification
---
# Sinhala-English-Code-Mixed-Code-Switched-Dataset
This dataset contains 10,000 comments that have been annotated at the sentence level for sentiment analysis, humor detection, hate speech detection, aspect identification, and language identification.
The following is the tag scheme.
* Sentiment - Positive, Negative, Neutral, Conflict
* Humor - Humorous, Non humorous
* Hate Speech - Hate-Inducing, Abusive, Not offensive
* Aspect - Network, Billing or Price, Package, Customer Service, Data, Service or product, None
* Language ID - Sinhala, English, Sin-Eng, Eng-Sin, Mixed, Named-Entity, Symbol
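As a sketch only (the card does not specify the file format or field names), one sentence-level record under the tag scheme above might be represented as follows; the field names and the example comment are illustrative, not the dataset's actual schema:

```python
# Hypothetical sentence-level record combining the five annotation layers
# listed above. All field names and values are illustrative.
record = {
    "text": "network eka hari slow",  # a made-up code-mixed comment
    "sentiment": "Negative",
    "humor": "Non humorous",
    "hate_speech": "Not offensive",
    "aspect": "Network",
    "language_id": "Sin-Eng",
}

# Each layer takes one value from its tag set, e.g. for sentiment:
assert record["sentiment"] in {"Positive", "Negative", "Neutral", "Conflict"}
```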
|
tomekkorbak/pile-chunk-toxicity-scored-3 | 2022-05-20T18:40:31.000Z | [
"region:us"
] | tomekkorbak | null | null | null | 0 | 3 | Chunk 3 of the Pile (2.2M documents), scored using the Perspective API on May 18-20, 2022 |
Aniemore/REPV-S | 2022-10-25T10:28:15.000Z | [
"task_categories:audio-classification",
"task_ids:audio-emotion-recognition",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ru",
"license:... | Aniemore | null | null | null | 2 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
- crowdsourced
language:
- ru
license:
- mit
multilinguality:
- monolingual
pretty_name: Russian Emotional Phonetic Voices Small
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- audio-emotion-recognition
---
# Citations
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
``` |
silver/lccc | 2022-11-06T04:51:16.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:zh",
"license:mit",
"dialogue-response-retrieval",
"arxiv:2008.03946",
"... | silver | LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
This pipeline involves a set of rules and several classifier-based filters.
Noises such as offensive or sensitive words, special symbols, emojis,
grammatically incorrect sentences, and incoherent conversations are filtered. | @inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
} | null | 10 | 3 | ---
annotations_creators:
- other
language_creators:
- other
language:
- zh
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
pretty_name: lccc
tags:
- dialogue-response-retrieval
---
# Dataset Card for lccc_large
## Table of Contents
- [Dataset Card for lccc_large](#dataset-card-for-lccc_large)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/thu-coai/CDial-GPT
- **Repository:** https://github.com/thu-coai/CDial-GPT
- **Paper:** https://arxiv.org/abs/2008.03946
### Dataset Summary
lccc: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large Chinese dialogue corpus originating from Chinese social media. A rigorous data cleaning pipeline is designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noises such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations are filtered.
lccc是一套来自于中文社交媒体的对话数据,我们设计了一套严格的数据过滤流程来确保该数据集中对话数据的质量。 这一数据过滤流程中包括一系列手工规则以及若干基于机器学习算法所构建的分类器。 我们所过滤掉的噪声包括:脏字脏词、特殊字符、颜表情、语法不通的语句、上下文不相关的对话等。
### Supported Tasks and Leaderboards
- dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
- response-retrieval: The dataset can be used to train a reranker model that can be used to implement a retrieval-based dialogue model.
### Languages
LCCC is in Chinese
LCCC中的对话是中文的
## Dataset Structure
### Data Instances
```
["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
```
### Data Fields
Each line is a list of utterances that constitute a dialogue.
Note that the LCCC dataset provided on our original GitHub page is in json format;
however, we provide LCCC in jsonl format here.
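A minimal sketch of parsing the jsonl format described above, assuming each line is a JSON-encoded list of utterances as in the instance shown earlier:

```python
import json

# One line of the jsonl file: a JSON-encoded list of utterances
# forming a single dialogue (taken from the example instance above).
line = '["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !"]'

dialogue = json.loads(line)
for turn, utterance in enumerate(dialogue):
    print(turn, utterance)
```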
### Data Splits
We do not provide an official split for LCCC-large,
but we do provide a split for LCCC-base:
|train|valid|test|
|:---:|:---:|:---:|
|6,820,506 | 20,000 | 10,000|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Please cite the following paper if you find this dataset useful:
```bibtex
@inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
}
```
|
GEM/squality | 2022-10-25T12:58:23.000Z | [
"task_categories:summarization",
"annotations_creators:crowd-sourced",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2205.11465",
"arxiv:2112.07637",
"arxiv:2104.05938",
"region:us"
] | GEM | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @article{wang2022squality,
title={{SQ}u{ALITY}: Building a Long-Document Summarization Dataset the Hard Way},
author={Wang, Alex and Pang, Richard Yuanzhe and Chen, Angelica and Phang, Jason and Bowman, Samuel R.},
journal={arXiv preprint 2205.11465},
year={2022}
} | null | 1 | 3 | ---
annotations_creators:
- crowd-sourced
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: squality
---
# Dataset Card for GEM/squality
## Dataset Description
- **Homepage:** https://github.com/nyu-mll/SQuALITY
- **Repository:** https://github.com/nyu-mll/SQuALITY/data
- **Paper:** https://arxiv.org/abs/2205.11465
- **Leaderboard:** N/A
- **Point of Contact:** Alex Wang
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/squality).
### Dataset Summary
SQuALITY (Summarization-format QUestion Answering with Long Input Texts, Yes!) is a summarization dataset that is:
* Abstractive
* Long-input: The input documents are short stories between 3,000 and 6,000 words.
* Question-focused: Each story is associated with multiple question-summary pairs.
* Multi-reference: Each question is paired with 4 summaries.
* High-quality: The summaries are crowdsourced from skilled and trained writers.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/squality')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/squality).
#### website
[Github](https://github.com/nyu-mll/SQuALITY)
#### paper
[ArXiv](https://arxiv.org/abs/2205.11465)
#### authors
Alex Wang (NYU); Angelica Chen (NYU); Richard Yuanzhe Pang (NYU); Nitish Joshi (NYU); Samuel R. Bowman (NYU)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/nyu-mll/SQuALITY)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/nyu-mll/SQuALITY/data)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2205.11465)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@article{wang2022squality,
title={S{Q}u{ALITY}: Building a Long-Document Summarization Dataset the Hard Way},
author={Wang, Alex and Pang, Richard Yuanzhe and Chen, Angelica and Phang, Jason and Bowman, Samuel R.},
journal={arXiv preprint 2205.11465},
year={2022}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Alex Wang
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
wangalexc@gmail.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
stories: 1930--1970 American English
summaries: modern American English
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
stories: 1930--1970 American science fiction writers (predominantly American men)
summaries: Upwork writers (college-educated, native-English) and NYU undergraduates (English-fluent college students)
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-4.0: Creative Commons Attribution 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
summarization research
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Given a question about a particular high-level aspect of a short story, provide a summary about that aspect in the story (e.g., plot, character relationships, setting, theme, etc.).
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
New York University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Alex Wang (NYU); Angelica Chen (NYU); Richard Yuanzhe Pang (NYU); Nitish Joshi (NYU); Samuel R. Bowman (NYU)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Eric and Wendy Schmidt; Apple; NSF
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Alex Wang (NYU)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
* metadata: Project Gutenberg ID, internal UID, Project Gutenberg license
* document: the story
* questions: a list where each element contains
* question text: the question
* question number: the order in which workers answered the question
* responses: a list where each element contains
* worker ID: anonymous
* internal UID
* response text: the response
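The nesting described above can be sketched with a toy instance; all values below are placeholders for illustration, not real data:

```python
# Toy instance mirroring the nested layout listed above.
instance = {
    "metadata": {"passage_id": "0", "uid": "abc", "license": ""},
    "document": "story text ...",
    "questions": [
        {
            "question_text": "What is the plot of the story?",
            "question_number": 1,
            "responses": [
                {"worker_id": "6", "uid": "def", "response_text": "summary ..."},
            ],
        }
    ],
}

# Collect all reference summaries for the first question
# (multi-reference evaluation groups responses by question).
references = [r["response_text"] for r in instance["questions"][0]["responses"]]
print(len(references))  # 1
```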
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The dataset is arranged with responses grouped by question (for ease of multi-reference training and evaluation) and questions grouped by story (to avoid duplicating the story in the dataset)
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{"metadata": {"passage_id": "63833", "uid": "ea0017c487a245668698cf527019b2b6", "license": ""}, "document": "Story omitted for readability", "questions": [{"question_text": "What is the plot of the story?", "question_number": 1, "responses": [{"worker_id": "6", "uid": "0c27bef1b7b644ffba735fdb005f9529", "response_text": "Brevet Lieutenant Commander David Farragut Stryakalski III, AKA Strike, is charged with commanding a run-down and faulty vessel, the Aphrodite. Aphrodite was the brain-child of Harlan Hendricks, an engineer who ushered in new technology ten years back. All three of his creations failed spectacularly, resulting in death and a failed career. The Aphrodite was the only ship to survive, and she is now used for hauling mail back and forth between Venus and Mars.\nStrike and Cob, the Aphrodite\u2019s only executive to last more than six months, recount Strike\u2019s great failures and how he ended up here. He used to fly the Ganymede, but was removed after he left his position to rescue colonists who didn\u2019t need rescuing. Strike was no longer trustworthy in Admiral Gorman\u2019s eyes, so he banished him to the Aphrodite. \nThe circuit that caused the initial demise of Aphrodite was sealed off. After meeting some members of his crew, Strike orders a conference for all personnel and calls in an Engineering Officer, one I.V. Hendricks. \nAfter Lieutenant Ivy Hendricks arrives--not I.V.--Strike immediately insults her by degrading the ship\u2019s designer, Harlan Hendricks. As it turns out, Hendricks is his daughter, and she vows to prove him wrong and all those who doubted her father. \nDespite their initial conflict, Strike and Hendricks\u2019 relationship soon evolves from resentment to respect. During this time, Strike\u2019s confidence in the Aphrodite plummets as she suffers from mechanical issues. \nThe Aphrodite starts to heat up as they get closer to the sun. The refrigeration units could not handle the heat, causing discomfort among the crew. 
As they get closer, a radar contact reveals that two dreadnaughts, the Lachesis and the Atropos, are doing routine patrolling. Nothing to worry about, except the Atropos had Admiral Gorman on board, hated by Strike and Hendricks.\nStrike and Hendricks make a joke about Gorman falling into the sun. As the temperature steadily climbs, the crew members overheat and begin fighting, resulting in a black eye. A distress signal came through from the Lachesis: the Atropos, with Gorman on board, was tumbling into the sun. The Lachesis was attempting to rescue them with an unbreakable cord, but they too were being pulled in. \nHendricks had fixed the surge-circuit rheostat, the one her father designed, and claimed it could help them rescue the ships. After some tension, Strike agrees and they race down to the sun to pick up the drifting dreadnaughts. \nStrike puts Hendricks in charge, but soon the heat overtakes her, and she is unable to continue. Strike takes over, attaches the Aphrodite to the Lachesis with a cord, and turns on the surge-circuit. They blast themselves out of there, rescuing the two ships and Admiral Gorman at the same time. \nCob and Strike are awarded Spatial Cross awards, while Hendricks is promoted to an engineering position at the Bureau of Ships. The story ends with Cob and Strike flipping through the pages of an address book until they land on Canalopolis, Mars. \n"}, {"worker_id": "1", "uid": "04e79312dede4a0da5993101e55a796a", "response_text": "Strike joins the crew of the Aphrodite after he has made several poor decisions while he was the captain of another spaceship. He is essentially being punished by his boss, Gorman, and put somewhere where he can do little harm. His job is to deliver the mail from Venus to Mars, so it\u2019s pretty straightforward. \n\nWhen he meets the Officer of the Deck, Celia Graham, he immediately becomes uncomfortable. He does not like to work with women in space, although it\u2019s a pretty common occurrence. 
He holds a captain\u2019s meeting the first day on the job, and he waits to meet his Engineering Officer, I.V. Hendricks. He makes a rude comment about how the man is late for his first meeting, but actually, the female Ivy has already shown up. \n\nAfter meeting Ivy formally, he makes a comment about how the ship Aphrodite was built by an imbecile. Ivy immediately tells him that he\u2019s wrong, and she knows this because the designer of the ship was none other than her own father. \n\nHis first week as captain on the new ship goes very poorly. Several repairs need to be done to Aphrodite, they run behind schedule, and the new crew members have a tough time getting a handle on Aphrodite\u2019s intricacies. \n\nThe heat index in the ship begins to rise, and the crew members can no longer wear their uniforms without fainting. Suddenly a distress call comes in, and it\u2019s coming from the Atropos, a ship Captained by Gorman, and the Lachesis. The crew members hesitate to take the oldest and most outdated machinery on a rescue trip. Strike has been in trouble for refusing to follow commands before, and he knows it\u2019s a risky move. However, Ivy insists that she knows how to pilot the Aphrodite, and she can save the crew members on the Atropos and the Lachesis from death. They are quickly tumbling towards the sun, and they will perish if someone doesn\u2019t do something quickly. \n\nIvy takes control of the ship, and the heat on the Aphrodite continues to rise steadily. Eventually, she faints from pure heat exhaustion, and she tells Strike that he must take over. He does, and he manages to essentially lasso the other two ships, and with just the right amount of power, he pulls them back into orbit. \n\nAt a bar, after the whole ordeal, Cob pokes fun at Strike for staying on the Aphrodite. He then admits that he actually respects Strike\u2019s loyalty to the ship that saved his reputation. 
Cob asks about Strike\u2019s relationship with Ivy, but Strike tells him that she has taken her dad\u2019s former job, so she no longer works with him. Strike takes the moment to look up her info, presumably to restart the relationship. \n"}, {"worker_id": "5", "uid": "71efb8636b504f42a6989bb90e360186", "response_text": "The narrative follows commander Strike as he begins his command of the spaceship Aphrodite. Strike comes from a long line of military greats but himself is prone to poor professional decision making.\n\nAs he takes command, the mission is a simple mail run. However, in the course of their journey, they receive word of two ships in dire need of rescue. Strike and his engineering officer, Ivy Hendricks, decide to use the ships extremely risky surge-circuit to aid the ships.\n\nThe rescue is a success and the crew is hailed for its bravery in saving the doomed vessels. "}, {"worker_id": "3", "uid": "8aa46ba8bd2945c98babd7dd2d9ecc38", "response_text": "The story starts in a muddy swamp on Venus, where Strike, a Brevet Lieutenant Commander, is encountering his new ship, the Aphrodite, for the first time. Here on Venusport Base, he is introduced to the executive officer of the ship, a man who goes by Cob. Strike comes from a line of servicemen who were all well respected, but he himself has more of a reputation for causing trouble by saying the wrong things or deviating from mission plans. His reputation preceded him, as Cob had specific questions about some of these events. The Aphrodite was incredibly impressive when it was designed, but did not live up to its expectations. It had been refitted, and the new mission that Strike was to lead was a mail run between Venus and Mars. As he entered the ship, Strike began to meet his new crew, including Celia Graham, his Radar Officer. Strike is not used to women being on ships and is decidedly uncomfortable with the idea. 
As he is briefing the officers who were already present, Strike is surprised when he meets his new engineering officer, Ivy Hendricks. Ivy is the daughter of the man who designed the ship, and she is cold to Strike at first, as he is to her. However, her expertise in engineering generally, the ship specifically, and other skills as well as piloting, meant that Strike warmed up to her as their mission went on. As the ship was flying towards Mars on their route, the crew picked up a distress signal from the Lachesis, which was trying to pull the Atropos away from the gravitational pull of the sun after it was damaged in an equipment malfunction. The Admiral who had put Strike in charge of the Aphrodite was on the Atropos, and Ivy dislikes him even more than Strike does, but they know they have to try to save the crews. Strike is hesitant, but Ivy has a plan and insists that they try. She has spent all of her free time tinkering with the circuits, and takes charge. She turned the Aphrodite towards the ships in danger, and sends out a cable to connect the Aphrodite to those ships. After they are all connected, the ships continue to spin towards the sun, which causes Ivy to pass out, leaving Strike in charge. He manages to pull the ships into line and send the Aphrodite in the right direction before passing out himself. The Aphrodite has the power to pull everyone away from the Sun\u2019s gravity, but the acceleration knocks everyone out on all three ships. In the end, it was a successful rescue mission of multiple crews. 
Strike and Cob find themselves in an officer\u2019s club at the end of the story, discussing Ivy\u2019s new job, and Strike acknowledges that Cob is right about the Aphrodite having grown on him, and plans to stay its captain."}]}, {"question_text": "Who is Ivy Hendricks and what happens to her throughout the story?", "question_number": 2, "responses": [{"worker_id": "6", "uid": "0c27bef1b7b644ffba735fdb005f9529", "response_text": "Lieutenant Ivy Hendricks is the daughter of Harlan Hendricks, a formerly respected engineer. He created the surge-circuit, an innovation in interstellar astrogation, and he was awarded a Legion of Merit. He designed three famous ships: the Artemis, the Andromeda, and the Aphrodite, the prototype. Despite being hailed as the latest and greatest in technology, all three ships either exploded or failed. \nAccording to Lieutenant Ivy Hendricks, their failures were due to the lack of education on board. She claimed that her father asked for the crew members to be trained in surge-circuit technology, so they could use it properly and correctly. That wish was not granted and after all three ships failed, his reputation and career were doomed. Admiral Gorman pulled the plug on his career and therefore became the target of all Lieutenant Hendricks\u2019 hate. \nWith a bone to pick, Lieutenant Hendricks, a knowledgeable engineer herself, comes aboard the Aphrodite to serve as her engineer and occasional pilot. She wants to prove to the world that her father\u2019s creation was genius and deserving of praise. \nAlthough they started off on the wrong foot, Lieutenant Hendricks and Strike, her commander, develop a friendship and appreciation for each other. They bond over their deep hatred of Admiral Gorman and the joy of piloting a ship. She soon proves herself to Strike, and he begins to trust her. Their relationship walks the fine line between friendship and romance. 
\nAs the Aphrodite is attempting to rescue the fallen dreadnaughts, Lieutenant Hendricks comes up with the solution. Due to her constant tinkering on the ship, she had fixed the surge-circuit rheostat and made it ready to use. Initially, no one trusts her, seeing as the last time it was used people died. But Strike\u2019s trust in her is strong and true, so he approves the use of the surge-circuit. Hendricks pilots the ship, but soon becomes too overheated and comes close to fainting. Strike takes over piloting and eventually activates the surge-circuit. It works and they are able to rescue the two ships, one of which had Admiral Gorman, her sworn enemy, onboard. \nLieutenant Hendricks receives a major promotion; she is now an engineer at the Bureau of Ships. She proved them wrong, and restored her father\u2019s legacy and good name. The story ends with their romance left in the air, but Hendricks has much to be proud of. \n"}, {"worker_id": "1", "uid": "04e79312dede4a0da5993101e55a796a", "response_text": "\nLieutenant Ivy Hendricks is the new Engineering Officer on Aphrodite. Strike and Cob assume that Ivy is a man before she arrives because they are sexist and because her name is listed as I.V. in the orders. Ivy is actually the daughter of the man who designed the award-winning craft.\n\nShe is cold and unfriendly towards Strike after she meets him, and that\u2019s probably because he makes a rude comment about the ship which her father created. After a couple weeks of working together, the two begin to get along very well. Strike admires Ivy\u2019s piloting skills and her depth of knowledge about the Aphrodite. \n\nThe two also bond over their shared hatred of Strike\u2019s former boss, Gorman. Strike feels as though he has ruined his career, and Ivy thinks that Gorman torpedoed her father\u2019s career. Ivy wants nothing more than to prove that Gorman is an idiot. 
\n\nHowever, when Gorman\u2019s ship is hurtling towards the sun and he and his crew members are about to die, Ivy sees that it\u2019s the perfect opportunity to show Gorman just how wrong he was about the ship her father designed. It\u2019s a very dangerous mission, but Ivy is steadfast in her decision and she\u2019s deeply courageous. She pilots the ship for most of the rescue mission, but eventually faints from the extreme heat. She tells Strike that he needs to take over, and he does a great job. \n\nIvy is then promoted, and she moves to Canalopolis, Mars. She now outranks her former Captain, Strike. \n"}, {"worker_id": "5", "uid": "71efb8636b504f42a6989bb90e360186", "response_text": "Ivy Hendricks is the engineering officer assigned to the Aphrodite. She is the daughter of Harlan Hendricks, the ship's original designer. She is fiercely protective of her father's legacy and resents Admiral Gorman for the way he treated him.\n\nHendricks and Strike, form an alliance of sorts after his initial surprise of seeing a woman assigned to this officer's role. When news arrives that two ships are in danger of falling into the sun, Ivy lobbies to use her father's technology to save the ship. Strike agrees to her plan although the risks are high. The Aphrodite eventually saves the ships although Ivy faints in the process from the heat and command has to be taken over by Strike.\n\nThe successful mission results in a promotion for Ivy as she works as a designer in the Bureau of Ships like her father."}, {"worker_id": "3", "uid": "8aa46ba8bd2945c98babd7dd2d9ecc38", "response_text": "Ivy Hendricks is the new engineering officer on the Aphrodite, having been transferred from the Antigone. She is a tall woman with dark hair and contrasting pale blue eyes, who has a very wide range of experience in ship operations and engineering. Her father, Harlan Hendricks, was the man who designed the Aphrodite, so she knows the ship needs a lot of specific training. 
At first, the captain did not expect her to be a woman, and managed to imply that many people found her father incompetent. Although she seemed cold at first, as she reacted to the situation, she and the captain eventually got along fairly well, as he learned to appreciate her wide skill set that ranged from engineering to piloting. Ivy and Strike also had a common enemy in the higher ranks: Space Admiral Gorman. Once Spike trusted her he appreciated that Ivy spent a lot of spare time working on the old circuits, so she knew the ship like the back of her hand. When the Aphrodite found the Lachesis and the Atropos when following up on a distress signal, Ivy new the ship well enough to be able to formulate a plan to save everyone. She piloted the Aphrodite carefully, using cables shot with a rocket to connect the three ships together, but the spinning of the ships in the heat inside meant that she passed out and had to leave Strike to take over for her. Her plan was successful; she was promoted, and instead of returning to the Aphrodite she started a design job with the Bureau of Ships."}]}, {"question_text": "What is the relationship between Strike and Aphrodite?", "question_number": 3, "responses": [{"worker_id": "6", "uid": "0c27bef1b7b644ffba735fdb005f9529", "response_text": "Strike is a member of a famous, well-behaved, and well-trained service family. His father and grandfather served in World War II and the Atomic War, respectively. Both earned medals for their heroic service. Strike, however, did not follow in his family\u2019s footsteps. \n\tWith a tendency to say the wrong thing at the wrong time, Strike often offended those around him and garnered a negative reputation. After being put in charge of the Ganymede, he soon lost his position after abandoning his station to rescue colonists who were not in danger. As well, he accused a Martian Ambassador of being a spy at a respectable ball. 
Admiral Gorman soon demoted him, and he became the commander of the Aphrodite. \n\tAt first, Strike was not a fan. He sees her as ugly, fat, and cantankerous. He misses the Ganymede, a shiny and new rocketship, and views the Aphrodite as less-than. \n\tWithin the first week of flying her, the Aphrodite had a burned steering tube, which made it necessary to go into free-fall as the damage control party made repairs. Strike\u2019s faith in Lover-Girl continued to plummet. \n\tHowever, after Lieutenant Hendricks, the resident engineer, got her hands on the Aphrodite, Strike\u2019s opinion started to change. Her knowledge of the ship, engineering, and piloting helped him gain confidence in both her abilities and those of Aphrodite.\nNear the end of the story, the Aphrodite is tasked with rescuing two ships that are falling into the sun. Previously Lieutenant Hendricks had fixed up the surge-circuit rheostat, and so she offered it up as the only solution. Strike agrees to try it, which shows his faith and trust in the Aphrodite. Luckily, all things go to plan, and the Aphrodite, with Strike piloting, is able to save the two ships and Admiral Gorman. \nAfter Strike won a medal himself, finally following in the family footsteps, he is offered his old position back on the Ganymede. He refuses, and instead returns to old Lover-Girl. He has grown fond of her over the course of their adventure, and they develop a partnership. "}, {"worker_id": "1", "uid": "04e79312dede4a0da5993101e55a796a", "response_text": "Strike is completely unimpressed by the rocket ship Aphrodite. He comments that she looks like a pregnant carp, and he knows that he\u2019s been assigned captain of the ship because he messed up terribly on his other missions. \n\nAphrodite was built 10 years ago, and now she is completely outdated and a laughing stock compared to the other spaceships in the fleet. She was designed by Harlan Hendricks, and the engineer received a Legion of Merit award for her design. 
\n\nStrike\u2019s mission is to fly Aphrodite to take the mail from Venusport to Canalopolis, Mars. It\u2019s boring and straightforward.\n\nWhen a disaster occurs and two other ships, the Atropos and the Lachesis, are in serious danger of getting too close to the sun, Strike agrees to take the old girl on a rescue mission. He is convinced by Ivy, since she knows the ship better than anyone else and she believes in her. \n\nAlthough Ivy takes Aphrodite most of the way there, its Strike who finishes the mission and saves his former boss, Gorman, and many other people from certain death. Aphrodite is the entire reason that Strike is able to mend his terrible reputation and he wins back respect from Gorman. Although they got off to a rocky start, Strike finds it impossible to leave his best girl, even when he is offered a job on another ship. He is loyal to the ship that made him a hero. \n"}, {"worker_id": "5", "uid": "71efb8636b504f42a6989bb90e360186", "response_text": "Strike is assigned to be commander of the spaceship Aphrodite. The ship is assigned as a mail carrier for the inner part of the solar system. The Aphrodite is a dilapidated design with an awful reputation. Strike ended up with the Aphrodite as a result of a series of poor professional decisions that resulted in him getting command of the more prestigious ship Ganymede taken away from him.\n\nHis initial impression of the Aphrodite softens to a grudging respect after the successful mission to save the Atropos and Lachesis. Although he presumably is in line to command the Ganymede again, another faux pas resulting in Strike continuing to command the Aphrodite. "}, {"worker_id": "3", "uid": "8aa46ba8bd2945c98babd7dd2d9ecc38", "response_text": "At the beginning of the story, Strike is very reluctant to accept Aphrodite, because being in charge of the ship means a demotion for him. 
His perception of the ship at the beginning of the story is colored by this history, and his first impression of the ship is not a positive one, even from the outside. Besides the actual construction of the ship, the technology that ran it was not something he showed much faith in. The first week that he was in charge after leaving Venus, it seemed things were going drastically wrong. When one important piece of equipment burnt out, the ship went into freefall, requiring a lot of repair work from the engineers, and anyone in charge of navigation was handed more work because of this as well. The ship was really put to the test when the Aphrodite responded to the distress call from the Lachesis, whose crew was trying to keep the Atropos from falling into the sun. Because Ivy knew the Aphrodite so well, and had been working on the circuits, it turned out the Aphrodite was the perfect ship to save the day. She could not see the rescue all the way through to the end, because she passed out early, but Strike was conscious a little bit longer and took over until he also passed out. After this unexpected rescue mission, Cob, the Executive Officer, noted that Strike has a newfound appreciation for the ship, and has no intention of leaving. Strike is dedicated to his new mission, even though at the beginning of the story he wanted nothing more than to pilot something the same rank as his old ship."}]}, {"question_text": "Describe the setting of the story.", "question_number": 4, "responses": [{"worker_id": "6", "uid": "0c27bef1b7b644ffba735fdb005f9529", "response_text": "Jinx Ship to the Rescue by Alfred Coppel, Jr. takes place in space, but more specifically in the Aphrodite. \n\tIt starts in the muddy Venusport Base on Venus. Venusport is famous for its warm, slimy, and green rain that falls for 480 hours of every day. A fog rolls in and degrades visibility. 
\n\tDespite starting on Venusport Base, the characters actually spend most of their time onboard the Aphrodite, a Tellurian Rocket Ship. The Aphrodite had a surge-circuit monitor of twenty guns built into her frame. She was bulky, fat, and ugly, and occasionally had some technical and mechanical struggles as well. \n\tAlthough her frame may not be appealing, she soon becomes victorious as she gains the trust of Strike and other members of his crew and saves two fallen dreadnaughts. With her surge-circuit rheostat rebuilt, the Aphrodite is finally able to accomplish what she was always meant to. "}, {"worker_id": "1", "uid": "04e79312dede4a0da5993101e55a796a", "response_text": "The story starts on the planet of Venus. Venus has days that are 720 hours long, and rain is common. The rain is hot, slimy, and green, and it makes the already wet swamplands even more mushy. Fog is common on Venus.\n\nThe middle of the story takes place on the old and outdated ship, Aphrodite. She gives the crew members a lot of trouble on their first mission. She is in dire need of repairs, she\u2019s slow, and it\u2019s impossible to control her temperature. The crew members are unable to wear their uniforms because the temperature is over 100 degrees. \n\nAphrodite\u2019s mission is simple. She needs to take the mail from Venus to Mars, and it\u2019s the only thing she can be trusted to do successfully. So it\u2019s very impressive when she ends up being the hero of the day and manages to rescue two other ships that are headed towards the sun. \n"}, {"worker_id": "5", "uid": "71efb8636b504f42a6989bb90e360186", "response_text": "The narrative is set in the early 21st century primarily aboard the spaceship Aphrodite. The ship's mission is to deliver mail in the inner part of the solar system.\n\nThe ships route takes them around the sun and as a result the ambient temperature inside the ship begins to rise to intolerable levels due to proximity to the sun. 
Because of the heat, the coed crew is allowed to operate with very little clothing. Aphrodite is a ship of an outdated design that gives it a lack of comfort and subjects it to numerous small problems that make its operation frustrating."}, {"worker_id": "3", "uid": "8aa46ba8bd2945c98babd7dd2d9ecc38", "response_text": "The story starts at a spaceport on Venus, where it has been raining for hundreds of hours straight. The rain has stopped by the time the story starts, but it is left a lot of mud in the swampy marshes. It was nearing the end of the day, and the fog was enveloping the surroundings as it grew darker outside. It was hot and sticky at Venusport Base, but after Strike left the service on his mission in the Aphrodite, it would only grow hotter on board. The ship itself, where most of the story takes place, is an older, refitted, bulky type of ship. There were only two others like it, and their designer had been awarded a Legion of Merit for the three. However, this is the only one still in use, as the others were destroyed in a much earlier mission. Strike\u2019s disappointment in the ship seems to mirror the sentiment. Inside the ship, there are many systems of pipes connected the control panels, and the captain had to navigate carefully so that he didn\u2019t hit his head on the bulkhead. While in space, as the ship flew closer and closer to the sun, the interior of the ship grew hotter and hotter. The crew opted to wear as little clothing as possible in an attempt to handle the heat. When the Aphrodite received the distress call from the Lachesis, the ships were close enough to the sun to be affected by its gravitational pull. After the close call near the sun, once everyone regained consciousness, the story ends at an officer\u2019s club on Mars. 
It was a formal environment, and the Aphrodite\u2019s captain and executive officer planned the rest of their route from there."}]}, {"question_text": "Who is Strike and what happens to him throughout the story?", "question_number": 5, "responses": [{"worker_id": "6", "uid": "0c27bef1b7b644ffba735fdb005f9529", "response_text": "Strike is a member of an esteemed service family on Venus; seven generations of well-behaved and well-trained operators. Unfortunately, Strike struggles to carry on the family tradition, and is known for misspeaking and offending those around him. By trusting his gut, he wound up failing his higher-ups and crew several times. All this culminated in an eventual mistrust of Strike, which led to him being charged with the Aphrodite. \n\tHis deep hatred of Space Admiral Gordon is passionate, but not without reason. Gordon is the one who demoted him to the Aphrodite. At the start, Strike is checking out his new vessel and notes how ugly the ship is. After examining the ship and it\u2019s crew, it is revealed that Strike is uncomfortable around women and believes they don\u2019t belong on a spaceship. \n\tIn order to start flying, he calls in an expert engineer to come aboard and travel with them. Thinking I.V. Hendricks is a man, he is excited to have them onboard. But when Ivy Hendricks shows up, a female engineer and the daughter of the Aphrodite\u2019s creator, his world is soon turned upside down. \n\tHis initial negative reaction to her is soon displaced by begrudging appreciation and eventually trust and friendship. Hendricks proves his previous theories about women wrong, and Strike is forced to accept that perhaps women do belong on a spaceship. She especially impresses him with her total knowledge of spaceship engineering and the Aphrodite in general. And it helped that she hated Admiral Gorman just as much as Strike, if not more. 
\n\tWhile flying by the sun to deliver mail, the Aphrodite receives a distress call from two ships: the Lachesis and the Atropos, the latter of which carried Admiral Gorman onboard. After the Aphrodite reached orbit, the Lachesis reached out and reported the Atropos was falling into the sun, due to a burst chamber. They couldn\u2019t move those onboard over thanks to all the radiation, so the Lachesis was attempting to pull the Atropos back using an unbreakable cord. But it wasn\u2019t enough. \n\tSince Ivy Hendricks had fixed the surge-circuit rheostat--the feature that crashed the original Aphrodite--, they were able to save the Lachesis and the Atropos and regain some of their dignity and former glory. \n\tStrike is awarded the Spatial Cross, as well as Cob, his friend and longtime executive of the Aphrodite. Strike was asked to return to the Ganymede, a beautiful sleek ship, but allegedly said the wrong thing to Gorman, and was instead sent back to the Aphrodite. Cob believes he did it on purpose, as Strike had grown quite fond of Lover-Girl. \n\tIvy has gone to the Bureau of Ships to engineer vessels, a great upgrade from her previous job. Cob pressures Strike to reach out to her, but he refuses. However, it ends on a hopeful note, with the potential for romance between Strike and Hendricks, and even more adventures on the clunky Aphrodite. "}, {"worker_id": "1", "uid": "04e79312dede4a0da5993101e55a796a", "response_text": "Strike\u2019s real name is Brevet Lieutenant Commander David Farragut Strykalski III. After serving on the Ganymede, he is put in charge of the Aphrodite. He comes from many generations of officers. However, he doesn\u2019t feel like he fits the mold of his grandfather and great-grandfather and so on. 
His boss, Gorman, disagreed with several decisions he made in the past and sent him to work on the Aphrodite, the unimpressive spaceship.\n\nStrike does not like working with women in space, so he is disappointed when two of his crew members are powerful and successful females. He learns his lesson after working with Ivy Hendricks for a few weeks. She impresses him with her piloting skills and her knowledge of the ship that her father designed. \n\nStrike is skeptical at first when Ivy wants to take Aphrodite to rescue two ships whose crew members are in grave danger. He knows that the mistakes he made before got him on the Aphrodite, and there\u2019s a big chance that he\u2019ll be fired for trying to save the day, or worse, the mission could end in death for him and all of his crew members. He has feelings for Ivy, and her intense passion convinces him that she\u2019s right, Aphrodite can handle the mission and they can save those peoples\u2019 lives.\n\nIvy pilots the ship almost the entire route, but she is unable to finish the job when she passes out from the intense heat. Captain Strike takes over and saves the crews on the Atropos and the Lachesis. He is hailed as a hero, and he repairs his terrible reputation with the selfless act. He decides not to leave the Aphrodite. He wants to be loyal to the ship that worked so hard for him. He does decide to give Ivy a call. Even though she outranks him, he has to admit that he has a crush on her. "}, {"worker_id": "5", "uid": "71efb8636b504f42a6989bb90e360186", "response_text": "Strike is the commander of the Aphrodite. He was originally the commander of the prestigious Ganymede. However a number of decisions made out of bravado as well as some unprofessional comments lost him that command.\n\nNow in command of a dilapidated ship, Strike comes to terms with his job. He commands a crew including a large number of women which makes him somewhat uncomfortable. 
His engineering officer Ivy Hendricks in particular seems to be of romantic interest to Strike.\n\nStrike ends up teaming with Ivy to save two ships from falling into the sun earning him a small promotion but an ill-advised comment prevents him from leaving the Aphrodite, perhaps to the satisfaction of Strike himself."}, {"worker_id": "3", "uid": "8aa46ba8bd2945c98babd7dd2d9ecc38", "response_text": "Strike is a highly decorated lieutenant commander in the Navy, who comes from a long line of ship operators. Although he has run many successful missions, he has a reputation of causing trouble\u2014his new Executive Officer, Cob, has heard a number of stories that he asks Strike for details about. Strike has lost command of the ship that he had been captaining, and is sent by Admiral Gorman to captain a mail route on the Aphrodite. He is extremely hesitant to have any positive feelings about the experience, from the ship itself, to the inclusion of women on its crew. Not only is this not the type of ship he is used to, he is never served with women on board. He has to navigate adapting to the new situation while adapting to the new job. Through the first week of his assignment, the ship and its crew grow on him. He comes to trust Ivy Hendricks, the Engineering Officer, and he lets her take charge to try to save the other ships when they respond to a distress call. Eventually, she passes out, and has to leave Strike in charge of getting the ships to safety. Eventually, Strike passes out just like everyone else, from the ship\u2019s acceleration to break the sun\u2019s gravity. At the end of the story, it is clear that his increased appreciation for the ship means he plans on staying, to the delight of his Executive Officer. Cob alludes to Strike having feelings for Ivy, but he says that although she is nice, he has no interest in being with a woman with a higher ranked title than he has. "}]}]}
```
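Assuming instances follow the structure visible in the example above — a `questions` list whose entries carry `question_text`, `question_number`, and a `responses` list with `worker_id`, `uid`, and `response_text` — the multiple crowdworker summaries can be collected per question for multi-reference evaluation. A minimal sketch (field names are taken from the example; the sample data is illustrative):

```python
# Flatten one SQuALITY-style instance into question -> [reference summaries].
# Field names follow the example instance above; the sample record is made up.

def references_per_question(instance):
    """Map each question to the list of crowdworker response texts."""
    return {
        q["question_text"]: [r["response_text"] for r in q["responses"]]
        for q in instance["questions"]
    }

sample = {
    "questions": [
        {
            "question_text": "Describe the setting of the story.",
            "question_number": 4,
            "responses": [
                {"worker_id": "6", "uid": "a1", "response_text": "It takes place in space."},
                {"worker_id": "3", "uid": "b2", "response_text": "Mostly aboard a spaceship."},
            ],
        }
    ]
}

refs = references_per_question(sample)
print(len(refs["Describe the setting of the story."]))  # 2
```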
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
train, dev, test
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
Stories that appear in both SQuALITY and [QuALITY](https://github.com/nyu-mll/quality) are assigned to the same split in both datasets.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The summaries in the dataset were crowdsourced, allowing us to use input documents that are easily understood by crowdworkers (as opposed to technical domains, such as scientific papers). Additionally, there is no lede bias in the stories, as there typically is in the news articles used in benchmark summarization datasets like CNN/DM and XSum.
Furthermore, the dataset is multi-reference, and the references for each task are highly diverse. Having a diverse set of references better represents the set of acceptable summaries for an input, and opens the door for creative evaluation methodologies using these multiple references.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
The inputs (story-question pairs) are multi-reference. The questions are high-level and are written to draw from multiple parts of the story, instead of a single section.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
* [original paper](https://arxiv.org/abs/2205.11465)
* [modeling question-focused summarization](https://arxiv.org/abs/2112.07637)
* [similar task format but different domain](https://arxiv.org/abs/2104.05938)
## Previous Results
### Previous Results
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`, `BERT-Score`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Following norms in summarization, we have evaluated with automatic evaluation metrics like ROUGE and BERTScore, but these metrics do not correlate with human judgments of summary quality when comparing model summaries (see paper for details).
We highly recommend that users of the benchmark use human evaluation as the primary method for evaluating systems. We present one such protocol in the paper, in which we ask Upwork workers to read the short story and then rate sets of three responses to each question. While this is close to the gold standard for how we would want to evaluate systems on this task, we recognize that finding workers who will read the whole story (~30 minutes) is difficult and expensive, and efficient human evaluation for long-document tasks remains an open problem.
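For orientation, the multi-reference use of a metric like ROUGE can be sketched with a toy from-scratch ROUGE-1 F1 using whitespace tokenization and max-over-references aggregation. This is one common convention, not necessarily the exact configuration used in the paper:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a prediction and a single reference."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def multi_ref_rouge1(prediction: str, references: list[str]) -> float:
    """Score against every reference and keep the best match."""
    return max(rouge1_f1(prediction, r) for r in references)

score = multi_ref_rouge1(
    "the ship falls toward the sun",
    ["the ship is falling into the sun", "a rescue mission near the sun"],
)
print(round(score, 3))  # 0.615
```

In practice one would use an established implementation (stemming, ROUGE-2/L, bootstrap confidence intervals); the point here is only how multiple references enter the score.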
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
Human evaluation
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
See paper (https://arxiv.org/abs/2205.11465)
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Other crowdworker platform`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
Upwork: US-born, native English speakers with backgrounds in the humanities and copywriting
NYU undergraduates: English-fluent undergraduates from a diverse set of nationalities and majors
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The short stories are primarily science fiction and from the 1930s -- 1970s.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
crowd-sourced
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
11<n<50
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
English-fluent, with experience reading and writing about literature
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
4
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
4
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Each response was reviewed by three reviewers, who ranked the response (against two other responses), highlighted errors in the response, and provided feedback to the original response writer.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
Writers were informed that their writing and reviewing would be used in the development of AI.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
unlikely
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The stories in the dataset are from the 1930s--1970s and may contain harmful stances on topics like race and gender. Models trained on the stories may reproduce these stances in their outputs.
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
The proposed automatic metrics for this dataset (ROUGE, BERTScore) are not sensitive to factual errors in summaries, and have been shown to not correlate well with human judgments of summary quality along a number of axes.
|
Lehrig/Monkey-Species-Collection | 2022-05-30T12:33:12.000Z | [
"region:us"
] | Lehrig | This dataset is intended as a test case for fine-grained classification tasks (10 different kinds of monkey species). The dataset consists of almost 1400 JPEG images grouped into two splits: training and validation. Each split contains 10 categories labeled as n0~n9, each corresponding to a species from [Wikipedia's monkey cladogram](https://en.wikipedia.org/wiki/Monkey). Images were downloaded with the help of the [googliser](https://github.com/teracow/googliser) open source code.
| Label | Latin Name | Common Name | Train Images | Validation Images |
| ----- | --------------------- | ------------------------- | ------------ | ----------------- |
| n0 | alouatta_palliata | mantled_howler | 131 | 26 |
| n1 | erythrocebus_patas | patas_monkey | 139 | 28 |
| n2 | cacajao_calvus | bald_uakari | 137 | 27 |
| n3 | macaca_fuscata | japanese_macaque | 152 | 30 |
| n4 | cebuella_pygmea | pygmy_marmoset | 131 | 26 |
| n5 | cebus_capucinus | white_headed_capuchin | 141 | 28 |
| n6 | mico_argentatus | silvery_marmoset | 132 | 26 |
| n7 | saimiri_sciureus | common_squirrel_monkey | 142 | 28 |
| n8 | aotus_nigriceps | black_headed_night_monkey | 133 | 27 |
| n9 | trachypithecus_johnii | nilgiri_langur | 132 | 26 |
This collection includes the following variants:
* original (images are 400x300 px or larger; ~550 MB)
* downsized (images are downsized to 224x224 px; ~40 MB) | @misc{kaggle-10-monkey-species,
title={Kaggle: 10 Monkey Species},
howpublished={\\url{https://www.kaggle.com/datasets/slothkong/10-monkey-species}},
note = {Accessed: 2022-05-30},
} | null | 1 | 3 | annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages: []
licenses:
- cc0-1.0
multilinguality: []
pretty_name: Monkey-Species-Collection
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
# Dataset Card for Monkey-Species-Collection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/slothkong/10-monkey-species
- **Repository:** https://github.com/slothkong/CNN_classification_10_monkey_species
- **Paper:** @misc{kaggle-10-monkey-species,
title={Kaggle: 10 Monkey Species},
howpublished={\\url{https://www.kaggle.com/datasets/slothkong/10-monkey-species}},
note = {Accessed: 2022-05-30},
}
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is intended as a test case for fine-grained classification tasks (10 different kinds of monkey species). The dataset consists of almost 1400 JPEG images grouped into two splits: training and validation. Each split contains 10 categories labeled as n0~n9, each corresponding to a species from [Wikipedia's monkey cladogram](https://en.wikipedia.org/wiki/Monkey). Images were downloaded with the help of the [googliser](https://github.com/teracow/googliser) open source code.
| Label | Latin Name | Common Name | Train Images | Validation Images |
| ----- | --------------------- | ------------------------- | ------------ | ----------------- |
| n0 | alouatta_palliata | mantled_howler | 131 | 26 |
| n1 | erythrocebus_patas | patas_monkey | 139 | 28 |
| n2 | cacajao_calvus | bald_uakari | 137 | 27 |
| n3 | macaca_fuscata | japanese_macaque | 152 | 30 |
| n4 | cebuella_pygmea | pygmy_marmoset | 131 | 26 |
| n5 | cebus_capucinus | white_headed_capuchin | 141 | 28 |
| n6 | mico_argentatus | silvery_marmoset | 132 | 26 |
| n7 | saimiri_sciureus | common_squirrel_monkey | 142 | 28 |
| n8 | aotus_nigriceps | black_headed_night_monkey | 133 | 27 |
| n9 | trachypithecus_johnii | nilgiri_langur | 132 | 26 |
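As a quick sanity check, the per-class counts in the table above can be summed to get the overall split sizes (values transcribed directly from the table):

```python
# Per-class image counts transcribed from the table above (n0..n9).
train_counts = [131, 139, 137, 152, 131, 141, 132, 142, 133, 132]
valid_counts = [26, 28, 27, 30, 26, 28, 26, 28, 27, 26]

print(sum(train_counts))  # 1370 training images
print(sum(valid_counts))  # 272 validation images
```

(The "almost 1400" figure in the summary matches the training split.)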
This collection includes the following variants:
* original (images are 400x300 px or larger; ~550 MB)
* downsized (images are downsized to 224x224 px; ~40 MB)
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
lmqg/qg_squadshifts | 2022-12-02T18:56:15.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:subjqa",
"language:en",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | [SQuAD Shifts](https://modestyachts.github.io/squadshifts-website/index.html) dataset for question generation (QG) task. | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | 1 | 3 | ---
license: cc-by-4.0
pretty_name: SQuADShifts for question generation
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: subjqa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_squadshifts"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
Modified version of [SQuADShifts](https://modestyachts.github.io/squadshifts-website/index.html) for question generation (QG) task.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "has there ever been a legal challange?",
"paragraph": "The status of the Armenian Apostolic Church within the Republic of Armenia is defined in the country's constitution. Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church".",
"answer": "Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church".",
"sentence": "Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church".",
"paragraph_sentence": "The status of the Armenian Apostolic Church within the Republic of Armenia is defined in the country's constitution. <hl> Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church". <hl>",
"paragraph_answer": "The status of the Armenian Apostolic Church within the Republic of Armenia is defined in the country's constitution. Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." <hl> Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church". <hl>",
"sentence_answer": "Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." <hl> Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church". <hl>"
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is intended for training a question generation model, but with different information: the `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, while the `paragraph_sentence` feature is for sentence-aware question generation.
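In principle, the highlighted fields can be reconstructed from the raw `paragraph`/`answer` pair. A minimal sketch, assuming the answer occurs verbatim and only once in the paragraph (`highlight` is a hypothetical helper, not part of the dataset tooling; the example text is shortened):

```python
HL = "<hl>"

def highlight(text: str, span: str) -> str:
    """Wrap the first verbatim occurrence of `span` in `text` with <hl> tokens."""
    i = text.index(span)  # raises ValueError if the span is absent
    return f"{text[:i]}{HL} {span} {HL}{text[i + len(span):]}"

paragraph = "The status of the church is defined in the constitution. Critics questioned the phrase."
answer = "Critics questioned the phrase."
paragraph_answer = highlight(paragraph, answer)
```

`paragraph_sentence` would be built the same way, highlighting the answer-bearing sentence instead of the answer span.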
### Data Splits
| name | train | valid | test |
|--------------:|------:|------:|-------:|
| default (all) | 9,209 | 6,283 | 18,844 |
| amazon | 3,295 | 1,648 | 4,942 |
| new_wiki | 2,646 | 1,323 | 3,969 |
| nyt | 3,355 | 1,678 | 5,032 |
| reddit | 3,268 | 1,634 | 4,901 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
yanekyuk/wikikey-fr | 2022-09-17T02:21:13.000Z | [
"language:fr",
"region:us"
] | yanekyuk | null | null | null | 0 | 3 | ---
language: fr
--- |
Adapting/empathetic_dialogues_v2 | 2022-06-21T17:56:26.000Z | [
"license:afl-3.0",
"region:us"
] | Adapting | null | null | null | 5 | 3 | ---
license: afl-3.0
---
Empathetic dialogue data adapted for fine-tuning from https://huggingface.co/datasets/empathetic_dialogues,
with labels for chat history, system response, whether the turn is a question, and behavior.
|
AleDella/tone | 2022-08-10T09:08:36.000Z | [
"license:wtfpl",
"region:us"
] | AleDella | null | null | null | 0 | 3 | ---
license: wtfpl
---
This dataset is just a mini-dataset for a dead language bruh |
wise-east/spolin | 2022-10-25T10:29:16.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:text-scoring",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_c... | wise-east | null | null | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
- other
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: spolin
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- text-generation
task_ids:
- text-scoring
- dialogue-modeling
---
# SPOLIN
[![CC BY-NC 4.0][cc-by-nc-shield]][cc-by-nc]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Available SPOLIN Versions](#available-spolin-versions)
- [Relevant Links](#relevant-links)
- [Dataset Structure](#dataset-structure)
- [Dataset Statistics](#dataset-statistics)
- [Other Information](#other-information)
- [ACL Presentation](#acl-presentation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
This is the repo for the paper ["Grounding Conversations with Improvised Dialogues"](https://aclanthology.org/2020.acl-main.218/) (ACL2020).
The _Selected Pairs of Learnable ImprovisatioN_ (SPOLIN) corpus is a collection of more than 68,000 "Yes, and" type dialogue pairs extracted from the Spontaneanation podcast by Paul F. Tompkins, the Cornell Movie-Dialogs Corpus, and the SubTle corpus. For more information, refer to our [paper](https://arxiv.org/abs/2004.09544) or our [project page](https://justin-cho.com/spolin).
### Available SPOLIN Versions:
The core dataset used for the experiments in the paper only includes _yes-ands_ and non-_yes-ands_ from Spontaneanation, plus most of those extracted from the Cornell Movie-Dialogs Corpus. After submitting the paper, we continued our iterative data augmentation process, repeating another iteration with the Cornell Movie-Dialogs Corpus and extracting from the SubTle corpus. This expanded version is also included in this repository [here](/data). This latest version of SPOLIN was used to train the model used in our [demo](https://spolin.isi.edu).
In the `data` folder, we provide two versions of the SPOLIN training set:
1. Version used for experiments in the ACL paper: `data/spolin-train-acl.csv`
2. Expanded version: `data/spolin-train.csv`
### Relevant Links:
* Project page: https://justin-cho.com/spolin
* Github repo: https://github.com/wise-east/spolin
* SpolinBot Demo: https://spolin.isi.edu
* ACL2020 Paper: https://aclanthology.org/2020.acl-main.218/
## Dataset Structure
**Fields**
* `id`: unique identifier
* `prompt`: first utterance in utterance pair
* `response`: second utterance in utterance pair
* `label`: yesand = 1, non-yesand = 0
* `source`: the source for the sample
* `split`: whether the sample belongs to the training set or the validation set
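With those fields, selecting the positive training pairs is a one-liner. A small sketch over toy rows (the rows themselves are illustrative, not real corpus entries):

```python
# Illustrative rows following the field layout described above.
rows = [
    {"prompt": "We found the map!", "response": "Yes, and it leads straight to the old mill.",
     "label": 1, "source": "spont", "split": "train"},
    {"prompt": "Nice weather today.", "response": "I have to go.",
     "label": 0, "source": "cornell", "split": "train"},
    {"prompt": "The ship is sinking!", "response": "Yes, and the lifeboats are made of cake.",
     "label": 1, "source": "subtle", "split": "valid"},
]

# Keep yes-and pairs (label == 1) from the training split.
train_yesands = [r for r in rows if r["label"] == 1 and r["split"] == "train"]
```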
## Dataset Statistics
##### `spolin-train.csv`:
|| yesands| non-yesands|
|--|---:|---:|
|Spontaneanation|10,459|5,587*|
|Cornell|16,426|18,310|
|SubTle|40,303|19,512|
|Total|67,188|43,409|
##### `spolin-train-acl.csv`:
|| yesands| non-yesands|
|--|---:|---:|
|Spontaneanation|10,459|5,587*|
|Cornell|14,976|17,851|
|Total|25,435|23,438|
##### `spolin-valid.csv`:
|| yesands| non-yesands|
|--|---:|---:|
|Spontaneanation|500|500*|
|Cornell|500|500|
|Total|1,000|1,000|
\*Artificially collected by mix & matching positive Spontaneanation samples to balance dataset for training classifier
## Other Information
### ACL Presentation
[Video recording](https://slideslive.com/38928948/grounding-conversations-with-improvised-dialogues)
### Citation Information
If you use our data for your work, please cite our ACL2020 paper:
```
@inproceedings{cho2020spolin,
title={Grounding Conversations with Improvised Dialogues},
author={Cho, Hyundong and May, Jonathan},
booktitle ={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
publisher = {Association for Computational Linguistics},
location = {Seattle, Washington, USA},
year={2020}
}
```
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial 4.0 International License][cc-by-nc].
[![CC BY-NC 4.0][cc-by-nc-image]][cc-by-nc]
[cc-by-nc]: http://creativecommons.org/licenses/by-nc/4.0/
[cc-by-nc-image]: https://licensebuttons.net/l/by-nc/4.0/88x31.png
[cc-by-nc-shield]: https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg
|
gsarti/magpie | 2022-10-27T08:37:46.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"lice... | gsarti | The MAGPIE corpus is a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work at the end of the day.' for the idiom 'at the end of the day'. This version of the dataset reflects the filtered subset used by Dankers et al. (2022) in their investigation on how PIEs are represented by NMT models. Authors use 37k samples annotated as fully figurative or literal, for 1482 idioms that contain nouns, numerals or adjectives that are colours (which they refer to as keywords). Because idioms show syntactic and morphological variability, the focus is mostly put on nouns. PIEs and their context are separated using the original corpus’s word-level annotations. | @inproceedings{haagsma-etal-2020-magpie,
title = "{MAGPIE}: A Large Corpus of Potentially Idiomatic Expressions",
author = "Haagsma, Hessel and
Bos, Johan and
Nissim, Malvina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.35",
pages = "279--287",
language = "English",
ISBN = "979-10-95546-34-4",
}
@inproceedings{dankers-etal-2022-transformer,
title = "Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation",
author = "Dankers, Verna and
Lucas, Christopher and
Titov, Ivan",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.252",
doi = "10.18653/v1/2022.acl-long.252",
pages = "3608--3626",
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- text2text-generation
- translation
task_ids: []
pretty_name: magpie
tags:
- idiomaticity-classification
---
# Dataset Card for MAGPIE
## Table of Contents
- [Dataset Card for MAGPIE](#dataset-card-for-magpie)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Original Repository:** [hslh/magpie-corpus](https://github.com/hslh/magpie-corpus)
- **Other Repository:** [vernadankers/mt_idioms](https://github.com/vernadankers/mt_idioms)
- **Original Paper:** [ACL Anthology](https://aclanthology.org/2020.lrec-1.35/)
- **Other Paper:** [ACL Anthology](https://aclanthology.org/2022.acl-long.252/)
- **Point of Contact:** [Hessel Haagsma, Verna Dankers](vernadankers@gmail.com)
### Dataset Summary
The MAGPIE corpus ([Haagsma et al. 2020](https://aclanthology.org/2020.lrec-1.35/)) is a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work at the end of the day.' for the idiom 'at the end of the day'. This version of the dataset reflects the filtered subset used by [Dankers et al. (2022)](https://aclanthology.org/2022.acl-long.252/) in their investigation on how PIEs are represented by NMT models. Authors use 37k samples annotated as fully figurative or literal, for 1482 idioms that contain nouns, numerals or adjectives that are colors (which they refer to as keywords). Because idioms show syntactic and morphological variability, the focus is mostly put on nouns. PIEs and their context are separated using the original corpus’s word-level annotations.
### Languages
The language data in MAGPIE is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
The `magpie` configuration contains sentences with annotations for the presence, usage, and type of potentially idiomatic expressions. An example from the `train` split of the `magpie` config (default) is provided below.
```json
{
  "sentence": "There seems to be a dearth of good small tools across the board.",
  "annotation": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
  "idiom": "across the board",
  "usage": "figurative",
  "variant": "identical",
  "pos_tags": ["ADV", "VERB", "PART", "VERB", "DET", "NOUN", "ADP", "ADJ", "ADJ", "NOUN", "ADP", "DET", "NOUN"]
}
```
The text is provided as-is, without further preprocessing or tokenization.
The fields are the following:
- `sentence`: The sentence containing a PIE.
- `annotation`: List of 0s and 1s of the same length of the whitespace-tokenized sentence, with 1s corresponding to the position of the idiomatic expression.
- `idiom`: The idiom contained in the sentence in its base form.
- `usage`: Either `figurative` or `literal`, depending on the usage of the PIE.
- `variant`: `identical` if the PIE matches the base form of the idiom, otherwise specifies the variation.
- `pos_tags`: List of POS tags associated with words in the sentence.
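Since `annotation` is aligned with the whitespace-tokenized sentence (and may leave function words such as "the" unmarked), the surface form of the PIE can be recovered by slicing from the first to the last flagged token. A sketch using the example instance above:

```python
def pie_span(sentence: str, annotation: list) -> str:
    """Recover the PIE surface form from the word-level 0/1 mask."""
    tokens = sentence.split()  # the mask is aligned with whitespace tokens
    flagged = [i for i, a in enumerate(annotation) if a == 1]
    # Slice from first to last flagged token so unmarked function
    # words inside the expression are kept.
    return " ".join(tokens[flagged[0]:flagged[-1] + 1])

sentence = "There seems to be a dearth of good small tools across the board."
mask = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1]
print(pie_span(sentence, mask))  # across the board.
```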
### Data Splits
| config | train |
|---------:|------:|
| `magpie` | 44,451 |
### Dataset Creation
Please refer to the original article [MAGPIE: A Large Corpus of Potentially Idiomatic Expressions](https://aclanthology.org/2020.lrec-1.35) for additional information on dataset creation, and to the article [Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation](https://aclanthology.org/2022.acl-long.252) for further information on the filtering of selected idioms.
## Additional Information
### Dataset Curators
The original authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The dataset is licensed under [Creative Commons 4.0 license (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
Please cite the authors if you use this corpus in your work:
```bibtex
@inproceedings{haagsma-etal-2020-magpie,
title = "{MAGPIE}: A Large Corpus of Potentially Idiomatic Expressions",
author = "Haagsma, Hessel and
Bos, Johan and
Nissim, Malvina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.35",
pages = "279--287",
language = "English",
ISBN = "979-10-95546-34-4",
}
@inproceedings{dankers-etal-2022-transformer,
title = "Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation",
author = "Dankers, Verna and
Lucas, Christopher and
Titov, Ivan",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.252",
doi = "10.18653/v1/2022.acl-long.252",
pages = "3608--3626",
}
```
|
rajistics/auditor_review | 2022-07-19T21:48:59.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-b... | rajistics | null | null | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
paperswithcode_id: null
pretty_name: Auditor_Review
---
# Dataset Card for Auditor_Review
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Auditor review data collected by News Department
- **Point of Contact:**
Talked to COE for Auditing
### Dataset Summary
Auditor sentiment dataset of sentences from financial news. The dataset consists of *** sentences from English-language financial news, categorized by sentiment. The dataset is divided by the agreement rate of 5-8 annotators.
### Supported Tasks and Leaderboards
Sentiment Classification
### Languages
English
## Dataset Structure
### Data Instances
```
{ "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
}
```
### Data Fields
- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral'
### Data Splits
A train/test split was created randomly with a 75/25 ratio.
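The card describes a random 75/25 split; a minimal sketch of how such a split could be reproduced (the seed value is an assumption for illustration, not documented by the dataset):

```python
import random

def split_75_25(rows, seed=42):
    """Random 75/25 train/test split (seed chosen here for reproducibility)."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * 0.75)
    return shuffled[:cut], shuffled[cut:]

train, test = split_75_25(list(range(100)))
```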
## Dataset Creation
### Curation Rationale
The key arguments for the low utilization of statistical techniques in
financial sentiment analysis have been the difficulty of implementation for
practical applications and the lack of high quality training data for building
such models. ***
### Source Data
#### Initial Data Collection and Normalization
The corpus used in this paper is made out of English news on all listed
companies in ****
#### Who are the source language producers?
The source data was written by various auditors
### Annotations
#### Annotation process
This release of the financial phrase bank covers a collection of 4840
sentences. The selected collection of phrases was annotated by 16 people with
adequate background knowledge on financial markets.
Given the large number of overlapping annotations (5 to 8 annotations per
sentence), there are several ways to define a majority vote based gold
standard. To provide an objective comparison, we have formed 4 alternative
reference datasets based on the strength of majority agreement:
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
All annotators were from the same institution and so interannotator agreement
should be understood with this taken into account.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: Creative Commons Attribution 4.0 International License (CC-BY)
### Contributions
|
vadis/sv-ident | 2022-11-07T20:51:06.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"languag... | vadis | The SV-Ident corpus (version 0.3) is a collection of 4,248 expert-annotated English
and German sentences from social science publications, supporting the task of
multi-label text classification. | @misc{sv-ident,
author={vadis-project},
title={SV-Ident},
year={2022},
url={https://github.com/vadis-project/sv-ident},
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- de
license:
- mit
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
- semantic-similarity-classification
pretty_name: SV-Ident
paperswithcode_id: sv-ident
---
# Dataset Card for SV-Ident
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://vadis-project.github.io/sv-ident-sdp2022/
- **Repository:** https://github.com/vadis-project/sv-ident
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** svident2022@googlegroups.com
### Dataset Summary
SV-Ident comprises 4,248 sentences from social science publications in English and German. The data is the official data for the Shared Task: “Survey Variable Identification in Social Science Publications” (SV-Ident) 2022. Visit the homepage to find out more details about the shared task.
### Supported Tasks and Leaderboards
The dataset supports:
- **Variable Detection**: identifying whether a sentence contains a variable mention or not.
- **Variable Disambiguation**: identifying which variable from a given vocabulary is mentioned in a sentence. **NOTE**: for this task, you will need to also download the variable metadata from [here](https://bit.ly/3Nuvqdu).
### Languages
The text in the dataset is in English and German, as written by researchers. The domain of the texts is scientific publications in the social sciences.
## Dataset Structure
### Data Instances
```
{
"sentence": "Our point, however, is that so long as downward (favorable comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.",
"is_variable": 1,
"variable": ["exploredata-ZA5400_VarV66", "exploredata-ZA5400_VarV53"],
"research_data": ["ZA5400"],
"doc_id": "73106",
"uuid": "b9fbb80f-3492-4b42-b9d5-0254cc33ac10",
"lang": "en",
}
```
### Data Fields
The following data fields are provided for documents:
```
`sentence`: Textual instance, which may contain a variable mention.<br />
`is_variable`: Label, whether the textual instance contains a variable mention (1) or not (0). This column can be used for Task 1 (Variable Detection).<br />
`variable`: Variables (separated by a semicolon ";") that are mentioned in the textual instance. This column can be used for Task 2 (Variable Disambiguation). Variables with the "unk" tag could not be mapped to a unique variable.<br />
`research_data`: Research data IDs (separated by a ";") that are relevant for each instance (and in general for each "doc_id").<br />
`doc_id`: ID of the source document. Each document is written in one language (either English or German).<br />
`uuid`: Unique ID of the instance in uuid4 format.<br />
`lang`: Language of the sentence.
```
The language for each document can be found in the document-language mapping file [here](https://github.com/vadis-project/sv-ident/blob/main/data/train/document_languages.json), which maps `doc_id` to a language code (`en`, `de`). The variables metadata (i.e., the vocabulary) can be downloaded from this [link](https://bit.ly/3Nuvqdu). Note, that each `research_data` contains hundreds of variables (these can be understood as the corpus of documents to choose the most relevant from). If the variable has an "unk" tag, it means that the sentence contains a variable that has not been disambiguated. Such sentences could be used for Task 1 and filtered out for Task 2. The metadata file has the following format:
```
{
"research_data_id_1": {
"variable_id_1": VARIABLE_METADATA,
...
"variable_id_n": VARIABLE_METADATA,
},
...
"research_data_id_n": {...},
}
```
Each variable may contain all (or some) of the following values:
```
study_title: The title of the research data study.
variable_label: The label of the variable.
variable_name: The name of the variable.
question_text: The question of the variable in the original language.
question_text_en: The question of the variable in English.
sub_question: The sub-question of the variable.
item_categories: The item categories of the variable.
answer_categories: The answers of the variable.
topic: The topics of the variable in the original language.
topic_en: The topics of the variable in English.
```
### Data Splits
| Split | Number of sentences |
| ------------------- | ------------------------------------ |
| Train | 3,823 |
| Validation | 425 |
## Dataset Creation
### Curation Rationale
The dataset was curated by the VADIS project (https://vadis-project.github.io/).
The documents were annotated by two expert annotators.
### Source Data
#### Initial Data Collection and Normalization
The original data are available at GESIS (https://www.gesis.org/home) in an unprocessed format.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
The documents were annotated by two expert annotators.
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
VADIS project (https://vadis-project.github.io/)
### Licensing Information
All documents originate from the Social Science Open Access Repository (SSOAR) and are licensed accordingly. The original document URLs are provided in [document_urls.json](https://github.com/vadis-project/sv-ident/blob/main/data/train/document_urls.json). For more information on licensing, please refer to the terms and conditions on the [SSOAR Grant of Licences page](https://www.gesis.org/en/ssoar/home/information/grant-of-licences).
### Citation Information
```
@inproceedings{tsereteli-etal-2022-overview,
title = "Overview of the {SV}-Ident 2022 Shared Task on Survey Variable Identification in Social Science Publications",
author = "Tsereteli, Tornike and
Kartal, Yavuz Selim and
Ponzetto, Simone Paolo and
Zielinski, Andrea and
Eckert, Kai and
Mayr, Philipp",
booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.sdp-1.29",
pages = "229--246",
abstract = "In this paper, we provide an overview of the SV-Ident shared task as part of the 3rd Workshop on Scholarly Document Processing (SDP) at COLING 2022. In the shared task, participants were provided with a sentence and a vocabulary of variables, and asked to identify which variables, if any, are mentioned in individual sentences from scholarly documents in full text. Two teams made a total of 9 submissions to the shared task leaderboard. While none of the teams improve on the baseline systems, we still draw insights from their submissions. Furthermore, we provide a detailed evaluation. Data and baselines for our shared task are freely available at \url{https://github.com/vadis-project/sv-ident}.",
}
```
### Contributions
[Needs More Information] |
bengaliAI/CommonVoiceBangla | 2022-07-01T00:46:28.000Z | [
"license:cc0-1.0",
"region:us"
] | bengaliAI | null | null | null | 4 | 3 | ---
license: cc0-1.0
---
How to load the Common Voice Bangla dataset directly with the `datasets` library:
1. `from datasets import load_dataset`
2. `dataset = load_dataset("bengaliAI/CommonVoiceBangla", "bn", delimiter='\t')`
|
BeIR/nfcorpus-generated-queries | 2022-10-23T06:12:19.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 3 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models on the nine task types listed above, using rank-based metrics such as nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) with three columns in this order: `query-id`, `corpus-id` and `score`. The first row is the header. For example: `q1 doc1 1`
### Data Instances
A high-level example from any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
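To make the file layout described under Dataset Structure concrete, here is a minimal, self-contained sketch (the directory is a temporary stand-in, not a real BEIR download) that writes the example above as `corpus.jsonl`, `queries.jsonl`, and a tab-separated qrels file, then reads the qrels back:

```python
import csv
import json
import tempfile
from pathlib import Path

data_dir = Path(tempfile.mkdtemp())

corpus = {"doc1": {"title": "Albert Einstein",
                   "text": "Albert Einstein was a German-born theoretical physicist..."}}
queries = {"q1": "Who developed the mass-energy equivalence formula?"}
qrels = {"q1": {"doc1": 1}}

# corpus.jsonl / queries.jsonl: one JSON object per line.
with open(data_dir / "corpus.jsonl", "w", encoding="utf-8") as f:
    for _id, doc in corpus.items():
        f.write(json.dumps({"_id": _id, **doc}) + "\n")
with open(data_dir / "queries.jsonl", "w", encoding="utf-8") as f:
    for _id, text in queries.items():
        f.write(json.dumps({"_id": _id, "text": text}) + "\n")

# qrels: tab-separated, header row, then query-id / corpus-id / score.
with open(data_dir / "test.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["query-id", "corpus-id", "score"])
    for qid, docs in qrels.items():
        for did, score in docs.items():
            writer.writerow([qid, did, score])

with open(data_dir / "test.tsv", encoding="utf-8", newline="") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))
print(rows[0])  # {'query-id': 'q1', 'corpus-id': 'doc1', 'score': '1'}
```

Note that `csv.DictReader` returns the `score` column as a string, so it needs an `int(...)` cast before being used as a relevance judgement.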
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
	- `query-id`: a `string` feature representing the query id
	- `corpus-id`: a `string` feature, denoting the document id.
	- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
BeIR/trec-covid-generated-queries | 2022-10-23T06:13:36.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 3 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models on the nine task types listed above, using rank-based metrics such as nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) with three columns in this order: `query-id`, `corpus-id` and `score`. The first row is the header. For example: `q1 doc1 1`
### Data Instances
A high-level example from any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
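The nested `corpus` dict above corresponds line-for-line to the `corpus.jsonl` file format described earlier; the following self-contained sketch of that round trip uses a temporary stand-in path and the card's own example records:

```python
import json
import tempfile
from pathlib import Path

records = [
    {"_id": "doc1", "title": "Albert Einstein",
     "text": "Albert Einstein was a German-born theoretical physicist..."},
    {"_id": "doc2", "title": "",
     "text": "Wheat beer is a top-fermented beer..."},
]

# Write one JSON object per line, as required for corpus.jsonl.
path = Path(tempfile.mkdtemp()) / "corpus.jsonl"
with open(path, "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Parse back into the {_id: {"title": ..., "text": ...}} shape shown above.
corpus = {}
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        corpus[record.pop("_id")] = record

print(sorted(corpus))  # ['doc1', 'doc2']
```

Queries follow the same pattern with `_id` and `text` fields only, so the same loop works for `queries.jsonl`.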
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
	- `query-id`: a `string` feature representing the query id
	- `corpus-id`: a `string` feature, denoting the document id.
	- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
BeIR/webis-touche2020-generated-queries | 2022-10-23T06:14:11.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 3 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models in a zero-shot setting; the official evaluation reports nDCG@10.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` (JSON Lines) file containing one dictionary per document, with three fields: `_id` (unique document identifier), `title` (document title, optional) and `text` (document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (JSON Lines) file containing one dictionary per query, with two fields: `_id` (unique query identifier) and `text` (query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file with three columns in this order: `query-id`, `corpus-id` and `score`. The first row is a header. For example: `q1 doc1 1`
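As a sketch, all three files can be parsed with nothing but the Python standard library (file paths are placeholders; field names follow the conventions above):

```python
import csv
import json

def load_corpus(path):
    """Read a corpus .jsonl file into {doc_id: {"title": ..., "text": ...}}."""
    corpus = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    return corpus

def load_queries(path):
    """Read a queries .jsonl file into {query_id: query_text}."""
    queries = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]
    return queries

def load_qrels(path):
    """Read a qrels .tsv file (header: query-id, corpus-id, score) into nested dicts."""
    qrels = {}
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # the first row is a header
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels
```

The nested-dictionary shapes produced here match the `corpus`/`queries`/`qrels` example shown under Data Instances.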
### Data Instances
A high-level example of a BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
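Given `qrels` and ranked results in this nested-dictionary shape, a retrieval metric takes only a few lines. BEIR's official evaluation reports nDCG@10 and related metrics, so treat this recall@k function as an illustrative stand-in only:

```python
def recall_at_k(qrels, results, k=10):
    """Average fraction of relevant documents retrieved in the top-k.

    qrels:   {query_id: {doc_id: relevance_score}}
    results: {query_id: [doc_id, ...]}  ranked best-first
    """
    scores = []
    for query_id, relevant in qrels.items():
        if not relevant:
            continue
        top_k = set(results.get(query_id, [])[:k])
        hits = sum(1 for doc_id in relevant if doc_id in top_k)
        scores.append(hits / len(relevant))
    return sum(scores) / len(scores) if scores else 0.0
```

For the toy `qrels` above, a system that ranks `doc1` first for `q1` and `doc2` first for `q2` achieves perfect recall.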
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
valurank/Adult-content-dataset | 2023-01-19T02:40:10.000Z | [
"task_categories:text-classification",
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | null | 2 | 3 | ---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for Adult_Content_Detection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
850 article descriptions classified into two categories: Adult and Non_Adult.
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of two columns: Description and Category.
The Description column contains the overview of the article, and the Category column contains the class each article belongs to.
## Source Data
The dataset is scraped from different platforms.
|
mounikaiiith/Telugu_Sentiment | 2022-07-04T15:05:31.000Z | [
"license:cc-by-4.0",
"region:us"
] | mounikaiiith | null | null | null | 1 | 3 | ---
license: cc-by-4.0
---
Please cite the following reference when using this dataset:
@article{marreddy2022resource,
title={Am I a Resource-Poor Language? Data Sets, Embeddings, Models and Analysis for four different NLP tasks in Telugu Language},
author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
journal={Transactions on Asian and Low-Resource Language Information Processing},
publisher={ACM New York, NY}
}
If you use the two classes (positive and negative) from the dataset, please also cite the following reference:
@article{marreddy2022multi,
title={Multi-Task Text Classification using Graph Convolutional Networks for Large-Scale Low Resource Language},
author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
journal={arXiv preprint arXiv:2205.01204},
year={2022}
}
|
scikit-learn/student-alcohol-consumption | 2022-06-20T14:53:46.000Z | [
"license:cc0-1.0",
"region:us"
] | scikit-learn | null | null | null | 1 | 3 | ---
license: cc0-1.0
---
## Student Alcohol Consumption Dataset
A dataset on social, gender and study data from secondary school students.
The following description was retrieved from the [UCI Machine Learning Repository copy hosted on Kaggle](https://www.kaggle.com/datasets/uciml/student-alcohol-consumption).
**Context:**
The data were obtained in a survey of students in math and Portuguese language courses at secondary school. It contains rich social, gender, and study information about the students, and can be used for exploratory data analysis or to predict students' final grades.
**Content:**
Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets:
- school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira)
- sex - student's sex (binary: 'F' - female or 'M' - male)
- age - student's age (numeric: from 15 to 22)
- address - student's home address type (binary: 'U' - urban or 'R' - rural)
- famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3)
- Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart)
- Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)
- Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)
- Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
- Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
- reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other')
- guardian - student's guardian (nominal: 'mother', 'father' or 'other')
- traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
- studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
- failures - number of past class failures (numeric: n if 1<=n<3, else 4)
- schoolsup - extra educational support (binary: yes or no)
- famsup - family educational support (binary: yes or no)
- paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
- activities - extra-curricular activities (binary: yes or no)
- nursery - attended nursery school (binary: yes or no)
- higher - wants to take higher education (binary: yes or no)
- internet - Internet access at home (binary: yes or no)
- romantic - with a romantic relationship (binary: yes or no)
- famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
- freetime - free time after school (numeric: from 1 - very low to 5 - very high)
- goout - going out with friends (numeric: from 1 - very low to 5 - very high)
- Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
- Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
- health - current health status (numeric: from 1 - very bad to 5 - very good)
- absences - number of school absences (numeric: from 0 to 93)
These grades are related with the course subject, Math or Portuguese:
- G1 - first period grade (numeric: from 0 to 20)
- G2 - second period grade (numeric: from 0 to 20)
- G3 - final grade (numeric: from 0 to 20, output target)
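Because the same attribute columns identify a student in both course files, the overlap between them can be found with plain Python. The merge keys below are an assumption modeled on the commonly distributed `student-merge.R` script, and the semicolon delimiter matches the UCI distribution; verify both against your copy:

```python
import csv

# Assumed identifying attributes (check the annexed R file for the exact list).
MERGE_KEYS = ["school", "sex", "age", "address", "famsize", "Pstatus",
              "Medu", "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet"]

def student_keys(path, delimiter=";"):
    """Return the set of identifying-attribute tuples for one course file."""
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f, delimiter=delimiter)
        return {tuple(row[key] for key in MERGE_KEYS) for row in reader}

def shared_students(math_path, por_path, delimiter=";"):
    """Students appearing in both student-mat.csv and student-por.csv."""
    return student_keys(math_path, delimiter) & student_keys(por_path, delimiter)
```

Under these assumptions, `len(shared_students("student-mat.csv", "student-por.csv"))` should roughly reproduce the overlap count noted below.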
**Additional note:** 382 students belong to both datasets.
These students can be identified by searching for identical attributes that characterize each student, as shown in the annexed R file. |
autoevaluate/mnist-sample | 2022-06-21T13:49:41.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 3 | Entry not found |
COLD-team/COLD | 2022-06-21T16:38:44.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | COLD-team | null | null | null | 0 | 3 | ---
license: cc-by-sa-4.0
---
## COLD: Complex Offensive Language Dataset
If you use this dataset, please cite the following paper (BibTex below):
Alexis Palmer, Christine Carr, Melissa Robinson, and Jordan Sanders. 2020 (to appear). COLD: Annotation scheme and evaluation data set for complex offensive language in English. *Journal of Linguistics and Computational Linguistics*.
## Overview of data
The COLD data set is intended for researchers to diagnose and assess their automatic hate speech detection systems. The corpus highlights four types of complex offensive language (slurs, reclaimed slurs, adjectival nominalization, and linguistic distancing) as well as non-offensive texts. The corpus contains a set of tweets collected from three different data sets: Davidson et al. (2017), Waseem and Hovy (2016), and Robinson (2017). The data are annotated by six annotators, with each instance being annotated by at least three different annotators.
**COLD-2016** is the data set used for the analyses and experimental results described in the JLCL paper. This version of the data set contains 2016 instances, selected using filters aiming to capture the complex offensive language types listed above.
## Format and annotations
The data are made available here as .tsv files. The format consists of informational columns, majority-vote columns, individual annotator columns, and a derived category column, described below.
### Informational columns:
1. **ID** - information about the original data set and the textual instance's ID from the data set it was extracted from. The ID includes a letter indicating which data set it originates from, followed by a hyphen and the corresponding ID of the instance in the original data set. For example: D-63 means that the instance is from the Davidson et al. (2017) data set, originally with the ID number 63.
2. **Dataset** - a letter indicating from which dataset this instance originates.
3. **Text** - the text of the instance.
### Majority Vote Columns:
For each instance, annotators were asked to answer Yes or No to each of four questions. These columns give the majority vote from three annotators (see the paper for much more detailed discussion, as well as distributions, etc.).
1. **Off** Is this text offensive?
2. **Slur** Is there a slur in the text?
3. **Nom** Is there an adjectival nominalization in the text?
4. **Dist** Is there (linguistic) distancing in the text?
### Individual Annotator Columns:
For each instance, annotators were asked to answer Yes or No to each of four questions. These columns give the individual response from each annotator (see the paper for much more detailed discussion, as well as distributions, etc.).
1. **Off1/2/3** Is this text offensive?
2. **Slur1/2/3** Is there a slur in the text?
3. **Nom1/2/3** Is there an adjectival nominalization in the text?
4. **Dist1/2/3** Is there (linguistic) distancing in the text?
### Category
1. **Cat** This column is deduced from the majority votes for OFF/SLUR/NOM/DIST. (See the paper for a detailed explanation of the categories, as well as distributions, etc.)
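As a sketch, the majority-vote columns can be re-derived from the individual annotator columns (column names follow the scheme above, e.g. `Off1`..`Off3`; the exact Yes/No value spelling is an assumption, so adjust it to your copy of the .tsv):

```python
def majority_vote(votes):
    """Return the majority label from an odd number of Yes/No votes."""
    yes = sum(1 for vote in votes if vote.strip().upper() in {"Y", "YES"})
    return "Y" if yes > len(votes) / 2 else "N"

def add_majority_columns(row, questions=("Off", "Slur", "Nom", "Dist")):
    """Derive one majority column per question from row['<Q>1'..'<Q>3']."""
    return {q: majority_vote([row[f"{q}{i}"] for i in (1, 2, 3)])
            for q in questions}
```

Running `add_majority_columns` over every row reproduces the four majority-vote columns, which is a quick consistency check on a downloaded copy of the data.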
## Contact
If you have any questions please contact carrc9953@gmail.com, alexis.palmer@unt.edu, or melissa.robinson@my.unt.edu.
## BibTex
```
@article{cold:2020,
title = {COLD: Annotation scheme and evaluation data set for complex offensive language in English},
author = {Palmer, Alexis and Carr, Christine and Robinson, Melissa and Sanders, Jordan},
journal = {Journal of Linguistics and Computational Linguistics, Special Issue},
year = {2020},
volume={to appear},
number={to appear},
pages = {tbd}
}
```
## References
Davidson, T., Wamsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and
the problem of offensive language. In Eleventh international conference on web and
social media. <a href="https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15665">[the paper]</a>, <a href="https://github.com/t-davidson/hate-speech-and-offensive-language">[the repository]</a>
Robinson, M. (2018). A man needs a female like a fish needs a lobotomy: The role of adjectival
nominalization in pejorative meaning. Master's thesis, Department of Linguistics, University of North Texas.
<a href="https://digital.library.unt.edu/ark:/67531/metadc1157617/m2/1/high_res_d/ROBINSON-THESIS-2018.pdf">[the thesis]</a>
Waseem, Z., & Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for
Hate Speech Detection on Twitter. In Proceedings of the NAACL Student Research Workshop. San Diego, California.
<a href="https://www.aclweb.org/anthology/N16-2013/">[the paper]</a> |
fusing/dog_captions | 2022-06-22T14:28:13.000Z | [
"region:us"
] | fusing | null | null | null | 0 | 3 | Entry not found |
nateraw/lung-cancer | 2022-10-25T10:32:46.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | nateraw | null | null | null | 1 | 3 | ---
kaggle_id: nancyalaswad90/lung-cancer
license:
- cc-by-nc-sa-4.0
---
# Dataset Card for Lung Cancer
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/nancyalaswad90/lung-cancer
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
An effective cancer prediction system helps people learn their cancer risk at low cost and supports appropriate decisions based on that risk status. The data were collected from the website of an online lung cancer prediction system.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@nancyalaswad90](https://kaggle.com/nancyalaswad90)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
Nexdata/Chinese_Mandarin_Synthesis_Corpus-Female_Emotional | 2023-08-28T08:20:00.000Z | [
"region:us"
] | Nexdata | null | null | null | 0 | 3 | ---
---
# Dataset Card for Nexdata/Chinese_Mandarin_Synthesis_Corpus-Female_Emotional
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1141?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
13.3 Hours - Chinese Mandarin Synthesis Corpus-Female, Emotional. It is recorded by a native Chinese speaker reading emotional text, with balanced coverage of syllables, phonemes, and tones. A professional phonetician participated in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1141?source=Huggingface
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
fever/feverous | 2022-10-25T05:50:36.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"knowledge-verification",
"arxiv:2106.05707",
"region:us... | fever | null | null | null | 2 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
paperswithcode_id: feverous
pretty_name: FEVEROUS
tags:
- knowledge-verification
---
# Dataset Card for FEVEROUS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://fever.ai/dataset/feverous.html
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information](https://arxiv.org/abs/2106.05707)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact
verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of
sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supports, refutes,
or does not provide enough information to reach a verdict. The dataset also contains annotation metadata such as
annotator actions (query keywords, clicks on page, time signatures), and the type of challenge each claim poses.
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in those tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems the evidence is retrieved from a large set of documents.
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 187.82 MB
- **Size of the generated dataset:** 123.25 MB
- **Total amount of disk used:** 311.07 MB
An example from the `train` split looks as follows:
```
{'id': 24435,
'label': 1,
'claim': 'Michael Folivi competed with ten teams from 2016 to 2021, appearing in 54 games and making seven goals in total.',
'evidence': [{'content': ['Michael Folivi_cell_1_2_0',
'Michael Folivi_cell_1_7_0',
'Michael Folivi_cell_1_8_0',
'Michael Folivi_cell_1_9_0',
'Michael Folivi_cell_1_12_0'],
'context': [['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0']]},
{'content': ['Michael Folivi_cell_0_13_1',
'Michael Folivi_cell_0_14_1',
'Michael Folivi_cell_0_15_1',
'Michael Folivi_cell_0_16_1',
'Michael Folivi_cell_0_18_1'],
'context': [['Michael Folivi_title',
'Michael Folivi_header_cell_0_13_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_14_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_15_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_16_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_18_0',
'Michael Folivi_header_cell_0_11_0']]}],
'annotator_operations': [{'operation': 'start',
'value': 'start',
'time': 0.0},
{'operation': 'Now on', 'value': '?search=', 'time': 0.78},
{'operation': 'search', 'value': 'Michael Folivi', 'time': 78.101},
{'operation': 'Now on', 'value': 'Michael Folivi', 'time': 78.822},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_2_0',
'time': 96.202},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_7_0',
'time': 96.9},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_8_0',
'time': 97.429},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_9_0',
'time': 97.994},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_12_0',
'time': 99.02},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_13_1',
'time': 106.108},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_14_1',
'time': 106.702},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_15_1',
'time': 107.423},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_16_1',
'time': 108.186},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_17_1',
'time': 108.788},
{'operation': 'Highlighting',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 108.8},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_18_1',
'time': 109.469},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_cell_0_17_1',
'time': 124.28},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 124.293},
{'operation': 'finish', 'value': 'finish', 'time': 141.351}],
'expected_challenge': '',
'challenge': 'Numerical Reasoning'}
```
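Since the `time` values in `annotator_operations` are cumulative seconds from the start of annotation, per-operation durations can be recovered by differencing consecutive entries. A minimal sketch using a few of the operations from the example above:

```python
# Each operation records cumulative seconds since the start of annotation,
# so the duration of an operation is the difference from the previous entry.
ops = [
    {"operation": "start", "value": "start", "time": 0.0},
    {"operation": "search", "value": "Michael Folivi", "time": 78.101},
    {"operation": "Now on", "value": "Michael Folivi", "time": 78.822},
    {"operation": "finish", "value": "finish", "time": 141.351},
]

durations = [
    (curr["operation"], round(curr["time"] - prev["time"], 3))
    for prev, curr in zip(ops, ops[1:])
]
print(durations)
# [('search', 78.101), ('Now on', 0.721), ('finish', 62.529)]
```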
### Data Fields
The data fields are the same among all splits.
- `id` (int): ID of the sample.
- `label` (ClassLabel): Annotated label for the claim. Can be one of {"SUPPORTS", "REFUTES", "NOT ENOUGH INFO"}.
- `claim` (str): Text of the claim.
- `evidence` (list of dict): Evidence sets (at maximum three). Each set consists of dictionaries with two fields:
- `content` (list of str): List of element IDs serving as the evidence for the claim. Each element ID is in the format
`"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"`, where `[EVIDENCE TYPE]` can be: `sentence`, `cell`, `header_cell`,
`table_caption`, `item`.
- `context` (list of list of str): List (for each element ID in `content`) of a list of Wikipedia elements that are
automatically associated with that element ID and serve as context. This includes an article's title, relevant
sections (the section and sub-section(s) the element is located in), and for cells the closest row and column
header (multiple row/column headers if they follow each other).
- `annotator_operations` (list of dict): List of operations an annotator used to find the evidence and reach a verdict,
given the claim. Each element in the list is a dictionary with the fields:
- `operation` (str): Operation name. Any of the following:
- `start`, `finish`: Annotation started/finished. The value is the name of the operation.
- `search`: Annotator used the Wikipedia search function. The value is the entered search term or the term selected
from the automatic suggestions. If the annotator did not select any of the suggestions but instead went into
advanced search, the term is prefixed with "contains...".
- `hyperlink`: Annotator clicked on a hyperlink in the page. The value is the anchor text of the hyperlink.
- `Now on`: The page the annotator has landed after a search or a hyperlink click. The value is the PAGE ID.
- `Page search`: Annotator search on a page. The value is the search term.
- `page-search-reset`: Annotator cleared the search box. The value is the name of the operation.
- `Highlighting`, `Highlighting deleted`: Annotator selected/unselected an element on the page. The value is
`ELEMENT ID`.
- `back-button-clicked`: Annotator pressed the back button. The value is the name of the operation.
- `value` (str): Value associated with the operation.
- `time` (float): Time in seconds from the start of the annotation.
- `expected_challenge` (str): The challenge the claim generator selected will be faced when verifying the claim, one
out of the following: `Numerical Reasoning`, `Multi-hop Reasoning`, `Entity Disambiguation`,
`Combining Tables and Text`, `Search terms not in claim`, `Other`.
- `challenge` (str): Main challenge to verify the claim, one out of the following: `Numerical Reasoning`,
`Multi-hop Reasoning`, `Entity Disambiguation`, `Combining Tables and Text`, `Search terms not in claim`, `Other`.
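Element IDs in `content` follow the `"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"` format described above, so they can be split back into their parts. A small sketch (the regex is an illustrative assumption based on the documented evidence types; note that `header_cell` must be tried before `cell`, and that number IDs such as `1_2_0` themselves contain underscores):

```python
import re

# "[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]" — the page ID may contain
# underscores, so the page part is matched lazily and the evidence type
# is anchored against the known type names ("header_cell" before "cell").
_ELEMENT_RE = re.compile(
    r"^(?P<page>.+?)_(?P<etype>header_cell|table_caption|sentence|cell|item)"
    r"_(?P<num>\d+(?:_\d+)*)$"
)

def parse_element_id(element_id: str):
    """Split a FEVEROUS evidence element ID into (page, evidence type, number id)."""
    m = _ELEMENT_RE.match(element_id)
    if m is None:
        raise ValueError(f"not a recognised element ID: {element_id!r}")
    return m.group("page"), m.group("etype"), m.group("num")

print(parse_element_id("Michael Folivi_cell_1_2_0"))
# ('Michael Folivi', 'cell', '1_2_0')
print(parse_element_id("Michael Folivi_header_cell_0_13_0"))
# ('Michael Folivi', 'header_cell', '0_13_0')
```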
### Data Splits
| | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 71291 | 7890 | 7845 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use this dataset, please cite:
```bibtex
@inproceedings{Aly21Feverous,
author = {Aly, Rami and Guo, Zhijiang and Schlichtkrull, Michael Sejr and Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Cocarascu, Oana and Mittal, Arpit},
title = {{FEVEROUS}: Fact Extraction and {VERification} Over Unstructured and Structured information},
eprint={2106.05707},
archivePrefix={arXiv},
primaryClass={cs.CL},
year = {2021}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
LHF/escorpius | 2023-01-05T10:55:48.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:es",
"license:cc-by-nc-nd-4.0",
"arxiv:2206.15147",
"region:us"
] | LHF | Spanish dataset | @misc{TODO
} | null | 12 | 3 | ---
license: cc-by-nc-nd-4.0
language:
- es
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# esCorpius: A Massive Spanish Crawling Corpus
## Introduction
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, the results in Spanish present important shortcomings, as they are either too small in comparison with other languages, or present a low quality derived from sub-optimal cleaning and deduplication. In this work, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we retain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius has been released under the CC BY-NC-ND 4.0 license.
## Statistics
| **Corpus** | OSCAR<br>22.01 | mC4 | CC-100 | ParaCrawl<br>v9 | esCorpius<br>(ours) |
|-------------------------|----------------|--------------|-----------------|-----------------|-------------------------|
| **Size (ES)** | 381.9 GB | 1,600.0 GB | 53.3 GB | 24.0 GB | 322.5 GB |
| **Docs (ES)** | 51M | 416M | - | - | 104M |
| **Words (ES)** | 42,829M | 433,000M | 9,374M | 4,374M | 50,773M |
| **Lang.<br>identifier** | fastText | CLD3 | fastText | CLD2 | CLD2 + fastText |
| **Elements** | Document | Document | Document | Sentence | Document and paragraph |
| **Parsing quality** | Medium | Low | Medium | High | High |
| **Cleaning quality** | Low | No cleaning | Low | High | High |
| **Deduplication** | No | No | No | Bicleaner | dLHF |
| **Language** | Multilingual | Multilingual | Multilingual | Multilingual | Spanish |
| **License** | CC-BY-4.0 | ODC-By-v1.0 | Common<br>Crawl | CC0 | CC-BY-NC-ND |
## Citation
Link to the paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147
Cite this work:
```
@inproceedings{gutierrezfandino22_iberspeech,
author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas},
title={{esCorpius: A Massive Spanish Crawling Corpus}},
year=2022,
booktitle={Proc. IberSPEECH 2022},
pages={126--130},
doi={10.21437/IberSPEECH.2022-26}
}
```
## Disclaimer
We did not apply any kind of filtering and/or censorship to the corpus. We expect users to do so with their own methods. We are not liable for any misuse of the corpus. |
ElKulako/stocktwits-crypto | 2022-09-01T00:46:26.000Z | [
"region:us"
] | ElKulako | null | null | null | 8 | 3 | Dataset StockTwits-crypto contains all cryptocurrency-related posts from the StockTwits website, from 1st of November 2021 to the 15th of June 2022.
The data has been cleaned and preprocessed, we removed:
- cashtags, hashtags, usernames,
- URLs, crypto wallets,
- Chinese, Korean and Japanese characters,
- (most) UTF-8 encoding issues
- removed all posts shorter than 4 words
- removed all duplicate posts
- fixed spacing and punctuation issues, converted all text to lowercase |
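A minimal sketch of this kind of cleaning pipeline (the exact patterns used by the dataset authors are not published; the regexes and thresholds below are illustrative assumptions):

```python
import re

def clean_post(text: str) -> str:
    """Illustrative cleaning, approximating the steps listed above."""
    text = re.sub(r"\$[A-Za-z]+(?:\.[A-Za-z]+)?", "", text)  # cashtags such as $BTC.X
    text = re.sub(r"[#@]\w+", "", text)                      # hashtags and usernames
    text = re.sub(r"https?://\S+", "", text)                 # URLs
    text = re.sub(r"\s+", " ", text).strip()                 # fix spacing
    return text.lower()

def keep_posts(posts):
    """Drop posts shorter than 4 words and exact duplicates after cleaning."""
    seen, out = set(), []
    for post in map(clean_post, posts):
        if len(post.split()) >= 4 and post not in seen:
            seen.add(post)
            out.append(post)
    return out

# "$BTC.X to the moon!!" cleans to only 3 words, so it is dropped.
print(keep_posts(["$BTC.X to the moon!!", "Buy the dip now friends @trader"]))
# ['buy the dip now friends']
```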
projecte-aina/ca_zh_wikipedia | 2023-01-09T07:56:07.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ca",
"language:zh",
"language:multilingual",
"license:cc-by-4.0",
"region:us"
] | projecte-aina | The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia, the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g. Machine Translation. | \ | null | 3 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- ca
- zh
- multilingual
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: CA-ZH Wikipedia Parallel Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for CA-ZH Wikipedia datasets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [cescolano3@gmail.com](cescolano3@gmail.com)
### Dataset Summary
The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia, the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can also be used to fine-tune a large-scale multilingual MT system such as M2M-100.
### Languages
The texts in the dataset are in Catalan and Chinese.
## Dataset Structure
### Data Instances
A typical data point comprises a pair of translations in Catalan and Chinese. An example from the Ca-Zh Parallel Corpus looks as follows:
```
{ "ca": "1591è Batalló Separat d'Artilleria autorpopulsada", "zh": "第1591自走砲营" }
```
### Data Fields
- "ca": Text in Catalan.
- "zh": Text in Chinese.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
The Ca-Zh Parallel Corpus was built to provide more language data for MT tasks dedicated to low-resource languages. The dataset was built by gathering texts on the same topic in Catalan and Chinese from Wikipedia.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by automatic crawling, and a quality filter was applied to improve data quality. The original Chinese data was a mix of Traditional and Simplified Chinese; it was converted to Simplified Chinese to guarantee uniformity.
#### Who are the source language producers?
All the texts in this dataset come from Wikipedia.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation systems for low-resource languages such as Catalan.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
Wikipedia provides data of a more general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Dataset Curators
Carlos Escolano, Chenuye Zhou and Zixuan Liu, Barcelona Supercomputing Center (cescolano3 at gmail dot com)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Share Alike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@mastersthesis{MasterThesisChenuyeZhou,
author = "Chenuye Zhou",
title = "Building a Catalan-Chinese parallel corpus for use in MT",
school = "Universitat Pompeu Fabra",
year = 2022,
address = "Barcelona",
url = "https://repositori.upf.edu/handle/10230/54140"
}
@mastersthesis{MasterThesisZixuanLiu,
author = "Zixuan Liu",
title = "Improving Chinese-Catalan Machine Translation with Wikipedia Parallel",
school = "Universitat Pompeu Fabra",
year = 2022,
address = "Barcelona",
url= "https://repositori.upf.edu/handle/10230/54142"
}
```
|
IDEA-CCNL/AFQMC | 2023-04-06T06:32:35.000Z | [
"license:apache-2.0",
"arxiv:2209.02970",
"region:us"
] | IDEA-CCNL | Download from https://www.cluebenchmarks.com/introduce.html | \ | null | 5 | 3 | ---
license: apache-2.0
---
# AFQMC
Download from https://www.cluebenchmarks.com/introduce.html
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
neuralchen/VGGFace2-HQ | 2022-06-28T08:59:32.000Z | [
"license:apache-2.0",
"region:us"
] | neuralchen | null | null | null | 1 | 3 | ---
license: apache-2.0
---
|
imvladikon/nemo_corpus | 2023-01-04T12:03:22.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"language:he",
"license:other",
"region:us"
] | imvladikon | \ | @article{10.1162/tacl_a_00404,
author = {Bareket, Dan and Tsarfaty, Reut},
title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {9},
pages = {909-928},
year = {2021},
month = {09},
abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token-level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00404},
url = {https://doi.org/10.1162/tacl\_a\_00404},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
} | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- he
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: bmc
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: validation
test_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# NEMO-Corpus - The Hebrew Named Entities and Morphology Corpus
**Disclaimer**: This is just a convenient Hugging Face `datasets` interface, provided for research purposes, which fetches the original data from [github](https://github.com/OnlpLab/NEMO-Corpus). I am not an author of this work.
```python
from datasets import load_dataset
# the main corpus
ds = load_dataset('imvladikon/nemo_corpus')
for sample in ds["train"]:
print(sample)
# the nested corpus
ds = load_dataset('imvladikon/nemo_corpus', "nested")
```
Class labels can be encoded and decoded with these functions:
```python
idx2label = dataset["train"].features["ner_tags"].feature.int2str
label2idx = dataset["train"].features["ner_tags"].feature.str2int
```
or just use the `raw_tags` field.
## Fields
available fields (flat):
* "id"
* "sentence"
* "tokens"
* "raw_tags"
* "ner_tags"
* "spans"
Example of the one record for `flat`:
```json
{'id': '0', 'tokens': ['"', 'תהיה', 'נקמה', 'ו', 'בגדול', '.'], 'sentence': '" תהיה נקמה ו בגדול .', 'raw_tags': ['O', 'O', 'O', 'O', 'O', 'O'], 'ner_tags': [24, 24, 24, 24, 24, 24], 'spans': {'span': [], 'start': [], 'end': [], 'entity': [], 'start_char': [], 'end_char': []}}
```
Example of the one record for `nested`:
```json
{'id': '0', 'tokens': ['"', 'תהיה', 'נקמה', 'ו', 'בגדול', '.'], 'ner_tags': [24, 24, 24, 24, 24, 24], 'ner_tags_2': [24, 24, 24, 24, 24, 24], 'ner_tags_3': [24, 24, 24, 24, 24, 24], 'ner_tags_4': [24, 24, 24, 24, 24, 24]}
```
## Dataset Description
it's README.md of the [original repository](https://github.com/OnlpLab/NEMO-Corpus)
Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme- and token-level NER labels, nested mentions, and more.
We publish the NEMO corpus in the TACL paper [*"Neural Modeling for Named Entities and Morphology (NEMO<sup>2</sup>)"*](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00404/107206/Neural-Modeling-for-Named-Entities-and-Morphology) [1], where we use it in extensive experiments and analyses, showing the importance of morphological boundaries for neural modeling of NER in morphologically rich languages. Code for these models and experiments can be found in the [NEMO code repo](https://github.com/OnlpLab/NEMO).
## Main features:
1. Morpheme, token-single and token-multi sequence labels. Morpheme labels provide exact boundaries, token-multi provide partial sub-word morphological but no exact boundaries, token-single provides only token-level information.
1. All annotations are in `BIOSE` format (`B`=Begin, `I`=Inside, `O`=Outside, `S`=Singleton, `E`=End).
1. Widely-used OntoNotes entity category set: `GPE` (geo-political entity), `PER` (person), `LOC` (location), `ORG` (organization), `FAC` (facility), `EVE` (event), `WOA` (work-of-art), `ANG` (language), `DUC` (product).
1. NEMO includes NER annotations for the two major versions of the Hebrew Treebank, UD (Universal Dependency) and SPMRL. These can be aligned to the other morphosyntactic information layers of the treebank using [bclm](https://github.com/OnlpLab/bclm)
1. We provide nested mentions. Only the first, widest, layer is used in the NEMO<sup>2</sup> paper. We invite you to take on this challenge!
1. Guidelines used for annotation are provided [here](./guidelines/).
1. Corpus was annotated by two native Hebrew speakers with academic education, and curated by the project manager. We provide the original annotations made by the annotators as well, to promote work on [learning with disagreements](https://sites.google.com/view/semeval2021-task12/home).
1. Annotation was performed using [WebAnno](https://webanno.github.io/webanno/) (version 3.4.5)
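As a small illustration of the `BIOSE` scheme above, here is a sketch that decodes a tag sequence into `(start, end, type)` spans (it assumes a well-formed sequence and is not code from this repository):

```python
def biose_to_spans(tags):
    """Decode BIOSE tags (e.g. "B-PER", "I-PER", "E-PER", "S-GPE", "O") into spans."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "O":
            continue
        prefix, etype = tag.split("-", 1)
        if prefix == "S":          # singleton mention
            spans.append((i, i, etype))
        elif prefix == "B":        # mention begins
            start = i
        elif prefix == "E" and start is not None:  # mention ends
            spans.append((start, i, etype))
            start = None
        # "I" (inside) needs no action here
    return spans

print(biose_to_spans(["O", "B-PER", "I-PER", "E-PER", "S-GPE"]))
# [(1, 3, 'PER'), (4, 4, 'GPE')]
```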
## Legend for Files and Folder Structure
1. The two main [data](./data/) folders are [ud](./data/ud/) and [spmrl](./data/spmrl/), corresponding to the relevant Hebrew Treebank corpus version.
1. Both contain a `gold` folder ([spmrl/gold](./data/spmrl/gold/), [ud/gold](./data/ud/gold/)) of gold curated annotations.
1. Each `gold` folder contains files of the three input-output variants (morph, token-multi, token-single), for each of the treebank splits (train,dev,test).
1. Each `gold` folder also contains a `nested` subfolder ([spmrl/nested](./data/spmrl/gold/nested/), [ud/nested](./data/ud/gold/nested/)), which contains all layers of nested mentions (the first layer is the layer used in the non-nested files, and in the NEMO<sup>2</sup> paper [1])
1. The `ud` folder also contains an [ab_annotators](./data/ud/ab_annotators/) folder. This folder contains the original annotations made by each annotator (named `a`, `b`), including first-layer and nested annotations.
1. *\*UPDATE 2021-09-06\** `ud` folder now contains a [pilot_annotations](./data/ud/pilot_annotations/) folder. This folder contains the original annotations made by each annotator in our two phase pilot (phase I - sentences 1-200 of dev; phase II - sentences 201-400 of dev).
## Basic Corpus Statistics
| | train | dev | test |
|------------------------------| --:| --:| --:|
| Sentences | 4,937 | 500 | 706 |
| Tokens | 93,504 | 8,531 | 12,619 |
| Morphemes | 127,031 | 11,301 | 16,828 |
| All mentions | 6,282 | 499 | 932 |
| Type: Person (PER) | 2,128 | 193 | 267 |
| Type: Organization (ORG) | 2,043 | 119 | 408 |
| Type: Geo-Political (GPE) | 1,377 | 121 | 195 |
| Type: Location (LOC) | 331 | 28 | 41 |
| Type: Facility (FAC) | 163 | 12 | 11 |
| Type: Work-of-Art (WOA) | 114 | 9 | 6 |
| Type: Event (EVE) | 57 | 12 | 0 |
| Type: Product (DUC) | 36 | 2 | 3 |
| Type: Language (ANG) | 33 | 3 | 1 |
## Aligned Treebank Versions
The NEMO corpus matches the treebank version of [bclm v.1.0.0](https://github.com/OnlpLab/bclm/releases/tag/v1.0.0-alpha).
This version is based on the [HTB UD v2.2](https://github.com/UniversalDependencies/UD_Hebrew-HTB/releases/tag/r2.2) and the [latest SPMRL HTB version](https://github.com/OnlpLab/HebrewResources/tree/102674bb030f5836e1ab827feb63954ad7a6f8fe/HebrewTreebank/hebtb).
The changes contain (but might not be limited to the following):
1. Flagged and dropped duplicate and leaking sentences (between train and test). In addition to the sentences already removed in the bclm v1.0.0 HTB version, the following duplicate sentences were dropped as well (SPMRL sentence IDs): 5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459 (in the bclm dataframes, these are marked in the `duplicate_sent_id` column).
To read the treebank (UD/SPMRL) in a way that matches the NEMO corpus, you can use the following:
```python
import bclm
dropped = [5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459]
spdf = bclm.read_dataframe('spmrl') # load SPMRL treebank dataframe
global_dropped = [spdf[spdf.sent_id==d].global_sent_id.iat[0] for d in dropped]
uddf = bclm.read_dataframe('ud') # load UD treebank dataframe
uddf = uddf[(~uddf.global_sent_id.isin(global_dropped))] # remove extra duplicates
spdf = spdf[(~spdf.sent_id.isin(dropped))] # remove extra duplicates
# The resulting dataframes contain gold morph NER labels in the `biose_layer0`, `biose_layer1`... columns.
```
2. The UD treebank contains many more duplicates. In this version, all sentences exist in both the UD and SPMRL versions, and all sentences and tokens are aligned between UD and SPMRL.
3. Fixed numbers that were originally reversed.
4. Fixed mismatches between tokens and morphemes.
5. Added the Binyan feature.
6. No individual morphemes or tokens were added or removed, only complete sentences.
## Evaluation
An evaluation script is provided in the [NEMO code repo](https://github.com/OnlpLab/NEMO#evaluation) along with evaluation instructions.
## Citations
##### [1]
If you use the NEMO corpus in your research, please cite the NEMO<sup>2</sup> paper:
```bibtex
@article{10.1162/tacl_a_00404,
author = {Bareket, Dan and Tsarfaty, Reut},
title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {9},
pages = {909-928},
year = {2021},
month = {09},
abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token- level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00404},
url = {https://doi.org/10.1162/tacl\_a\_00404},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
}
```
##### [2]
Please cite the Hebrew Treebank as well, described in the following paper:
```bibtex
@article{sima2001building,
title={Building a tree-bank of modern Hebrew text},
author={Sima’an, Khalil and Itai, Alon and Winter, Yoad and Altman, Alon and Nativ, Noa},
journal={Traitement Automatique des Langues},
volume={42},
number={2},
pages={247--380},
year={2001},
publisher={Citeseer}
}
```
##### [3]
The UD version of the Hebrew Treebank is described in:
```bibtex
@inproceedings{sade-etal-2018-hebrew,
title = "The {H}ebrew {U}niversal {D}ependency Treebank: Past Present and Future",
author = "Sade, Shoval and
Seker, Amit and
Tsarfaty, Reut",
booktitle = "Proceedings of the Second Workshop on Universal Dependencies ({UDW} 2018)",
month = nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W18-6016",
doi = "10.18653/v1/W18-6016",
pages = "133--143",
abstract = "The Hebrew treebank (HTB), consisting of 6221 morpho-syntactically annotated newspaper sentences, has been the only resource for training and validating statistical parsers and taggers for Hebrew, for almost two decades now. During these decades, the HTB has gone through a trajectory of automatic and semi-automatic conversions, until arriving at its UDv2 form. In this work we manually validate the UDv2 version of the HTB, and, according to our findings, we apply scheme changes that bring the UD HTB to the same theoretical grounds as the rest of UD. Our experimental parsing results with UDv2New confirm that improving the coherence and internal consistency of the UD HTB indeed leads to improved parsing performance. At the same time, our analysis demonstrates that there is more to be done at the point of intersection of UD with other linguistic processing layers, in particular, at the points where UD interfaces external morphological and lexical resources.",
}
``` |
launch/open_question_type | 2022-11-09T01:58:10.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | launch | Open-ended question type annotated dataset. | @inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
} | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
pretty_name: OpenQuestionType
---
# Dataset Card for OpenQuestionType
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/)
- **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology)
- **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Question types annotated on open-ended questions.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"id": "123",
"question": "A test question?",
"annotator1": ["verification", None],
"annotator2": ["concept", None],
"resolve_type": "verification"
}
```
### Data Fields
- `id`: a `string` feature.
- `question`: a `string` feature.
- `annotator1`: a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.
- `annotator2`: a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.
- `resolve_type`: a `string` feature which is the final label after resolving disagreement.
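As a minimal sketch of how these fields fit together, the snippet below reconciles the two annotators' ranked choices into a final label for a record shaped like the example above. Note that this is not the authors' code: the tie-break rule (falling back to a second-most-confident match) is an assumption for illustration only; the released `resolve_type` field already stores the actual resolved label.

```python
# Illustrative sketch (not the authors' resolution procedure): reconcile the
# two annotator fields described above. The fallback rule is an assumption.
def resolve_label(annotator1, annotator2):
    """Return the agreed label, or None if the annotators disagree outright."""
    top1, top2 = annotator1[0], annotator2[0]
    if top1 == top2:
        return top1
    # Fall back to a second-choice match if either annotator listed one.
    if annotator1[1] == top2:
        return top2
    if annotator2[1] == top1:
        return top1
    return None  # would need manual resolution, as stored in `resolve_type`

record = {
    "id": "123",
    "question": "A test question?",
    "annotator1": ["verification", None],
    "annotator2": ["concept", "verification"],
}
print(resolve_label(record["annotator1"], record["annotator2"]))  # verification
```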
### Data Splits
- train: 3716
- valid: 580
- test: 660
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Yahoo Answers and Reddit users.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
}
```
|
ctu-aic/enfever_nli | 2022-06-29T13:05:10.000Z | [
"region:us"
] | ctu-aic | EnfeverNLI is an NLI version of the FEVER dataset | todo | null | 1 | 3 | Entry not found |
launch/reddit_qg | 2022-11-09T01:58:05.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | launch | Reddit question generation dataset. | @inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
pretty_name: RedditQG
---
# Dataset Card for RedditQG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/)
- **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology)
- **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset contains answer-question pairs from QA communities of Reddit.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"id": "askscience/123",
"qid": "2323",
"answer": "A test answer.",
"question": "A test question?",
"score": 20
}
```
### Data Fields
- `id`: a `string` feature.
- `qid`: a `string` feature. There could be multiple answers to the same question.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `score`: an `int` feature which is the value of `upvotes - downvotes`.
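Since several records can share a `qid` (multiple answers to the same question), a common preprocessing step is to group answers by question and, for instance, keep only the highest-scoring one. The sketch below shows one way to do this; the records are made up and the selection rule is an assumption, not part of the dataset's official pipeline.

```python
from collections import defaultdict

# Illustrative sketch: group answers by `qid` and keep the highest-scoring
# answer per question. The records below are invented for demonstration.
records = [
    {"id": "askscience/123", "qid": "2323", "answer": "Answer A.", "question": "A test question?", "score": 20},
    {"id": "askscience/456", "qid": "2323", "answer": "Answer B.", "question": "A test question?", "score": 5},
    {"id": "askhistorians/9", "qid": "7", "answer": "Answer C.", "question": "Another question?", "score": 11},
]

by_qid = defaultdict(list)
for r in records:
    by_qid[r["qid"]].append(r)

best = {qid: max(rs, key=lambda r: r["score"]) for qid, rs in by_qid.items()}
print(best["2323"]["answer"])  # Answer A.
```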
### Data Splits
- train: 647763
- valid: 36023
- test: 36202
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Reddit users.
### Personal and Sensitive Information
Samples with abusive words are discarded, but there could be samples containing personal information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
}
```
|
joelniklaus/german_argument_mining | 2022-09-22T13:44:35.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0... | joelniklaus | null | null | null | 3 | 3 | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
language:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Annotated German Legal Decision Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Annotated German Legal Decision Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://zenodo.org/record/3936490#.X1ed7ovgomK
- **Paper:** Urchs., S., Mitrović., J., & Granitzer., M. (2021). Design and Implementation of German Legal Decision
Corpora. Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,
515–521. https://doi.org/10.5220/0010187305150521
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset consists of 200 randomly chosen judgments. In these judgments a legal expert annotated the components
conclusion, definition and subsumption of the German legal writing style Urteilsstil.
*"Overall 25,075 sentences are annotated. 5% (1,202) of these sentences are marked as conclusion, 21% (5,328) as
definition, 53% (13,322) are marked as subsumption and the remaining 21% (6,481) as other. The length of judgments in
sentences ranges from 38 to 862 sentences. The median of judgments have 97 sentences, the length of most judgments is on
the shorter side."* (Urchs. et al., 2021)
*"Judgments from 22 of the 131 courts are selected for the corpus. Most judgments originate from the VG Augsburg (59 /
30%) followed by the VG Ansbach (39 / 20%) and LSG Munich (33 / 17%)."* (Urchs. et al., 2021)
*"29% (58) of all selected judgments are issued in the year 2016, followed by 22% (44) from the year 2017 and 21% (41)
issued in the year 2015. [...] The percentages of selected judgments and decisions issued in 2018 and 2019 are roughly
the same. No judgments from 2020 are selected."* (Urchs. et al., 2021)
### Supported Tasks and Leaderboards
The dataset can be used for multi-class text classification tasks, more specifically, for argument mining.
### Languages
The language in the dataset is German as it is used in Bavarian courts in Germany.
## Dataset Structure
### Data Instances
Each sentence is saved as a json object on a line in one of the three files `train.jsonl`, `validation.jsonl`
or `test.jsonl`. The file `meta.jsonl` contains meta information for each court. The `file_number` is present in all
files for identification. Each sentence of the court decision was categorized according to its function.
### Data Fields
The file `meta.jsonl` contains for each row the following fields:
- `meta_title`: Title provided by the website, it is used for saving the decision
- `court`: Issuing court
- `decision_style`: Style of the decision; the corpus contains either *Urteil* (='judgment') or *Endurteil* (
='end-judgment')
- `date`: Date when the decision was issued by the court
- `file_number`: Identification number used for this decision by the court
- `title`: Title provided by the court
- `norm_chains`: Norms related to the decision
- `decision_guidelines`: Short summary of the decision
- `keywords`: Keywords associated with the decision
- `lower_court`: Court that decided on the decision before
- `additional_information`: Additional Information
- `decision_reference`: References to the location of the decision in beck-online
- `tenor`: Designation of the legal consequence ordered by the court (list of paragraphs)
- `legal_facts`: Facts that form the base for the decision (list of paragraphs)
The files `train.jsonl`, `validation.jsonl` and `test.jsonl` contain the following fields:
- `file_number`: Identification number for linkage with the file `meta.jsonl`
- `input_sentence`: The sentence to be classified
- `label`: In depth explanation of the court decision. Each sentence is assigned to one of the major components of
German *Urteilsstil* (Urchs. et al., 2021) (list of paragraphs, each paragraph containing list of sentences, each
sentence annotated with one of the following four labels):
- `conclusion`: Overall result
- `definition`: Abstract legal facts and consequences
- `subsumption`: Determination sentence / Concrete facts
- `other`: Anything else
- `context_before`: Context in the same paragraph before the input_sentence
- `context_after`: Context in the same paragraph after the `input_sentence`
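To see how the sentence-level files might be consumed, the sketch below tallies a label distribution from JSON-lines records with the fields described above. The inline sample stands in for lines of `train.jsonl`; the file numbers and sentences are invented, and a per-record `label` string is assumed.

```python
import json
from collections import Counter

# Illustrative sketch: count labels across sentence records, as one would do
# to reproduce the label-distribution table below. Sample records are made up.
sample_jsonl = "\n".join(json.dumps(r) for r in [
    {"file_number": "Au 2 K 16.1", "input_sentence": "Satz 1.", "label": "subsumption"},
    {"file_number": "Au 2 K 16.1", "input_sentence": "Satz 2.", "label": "definition"},
    {"file_number": "An 4 K 17.2", "input_sentence": "Satz 3.", "label": "subsumption"},
])

counts = Counter(json.loads(line)["label"] for line in sample_jsonl.splitlines())
print(counts["subsumption"])  # 2
```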
### Data Splits
No split provided in the original release.
Splits created by Joel Niklaus. We randomly split the dataset into 80% (160 decisions, 19271 sentences) train, 10%
validation (20 decisions, 2726 sentences) and 10% test (20 decisions, 3078 sentences). We made sure, that a decision
only occurs in one split and is not dispersed over multiple splits.
Label Distribution
| label | train | validation | test |
|:---------------|-----------:|-------------:|----------:|
| conclusion | 975 | 115 | 112 |
| definition | 4105 | 614 | 609 |
| subsumption | 10034 | 1486 | 1802 |
| other | 4157 | 511 | 555 |
| total | **19271** | **2726** | **3078** |
## Dataset Creation
### Curation Rationale
Creating a publicly available German legal text corpus consisting of judgments that have been annotated by a legal
expert. The annotated components consist of *conclusion*, *definition* and *subsumption* of the German legal writing
style *Urteilsstil*.
### Source Data
#### Initial Data Collection and Normalization
*“The decision corpus is a collection of the decisions published on the website www.gesetze-bayern.de. At the time of
the crawling the website offered 32,748 decisions of 131 Bavarian courts, dating back to 2015. The decisions are
provided from the Bavarian state after the courts agreed to a publication. All decisions are processed by the publisher
C.H.BECK, commissioned by the Bavarian state. This processing includes anonymisation, key-wording, and adding of
editorial guidelines to the decisions.”* (Urchs. et al., 2021)
#### Who are the source language producers?
German courts from Bavaria
### Annotations
#### Annotation process
*“As stated above, the judgment corpus consist of 200 randomly chosen judgments that are annotated by a legal expert,
who holds a first legal state exam. Due to financial, staff and time reasons the presented iteration of the corpus was
only annotated by a single expert. In a future version several other experts will annotate the corpus and the
inter-annotator agreement will be calculated.”* (Urchs. et al., 2021)
#### Who are the annotators?
A legal expert, who holds a first legal state exam.
### Personal and Sensitive Information
*"All decisions are processed by the publisher C.H.BECK, commissioned by the Bavarian state. This processing includes **anonymisation**, key-wording, and adding of editorial guidelines to the decisions."* (Urchs. et al., 2021)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The SoMaJo Sentence Splitter has been used. Upon manual inspection of the dataset, we could see that the sentence
splitter had poor accuracy in some cases (see ```analyze_dataset()``` in ```convert_to_hf_dataset.py```). When creating
the splits, we thought about merging small sentences with their neighbors or removing them all together. However, since
we could not find a straightforward way to do this, we decided to leave the dataset content untouched.
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton
Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that,
differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to
have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the
original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to
the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.
## Additional Information
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation
Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch)
; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch)
; [Github](https://github.com/kapllan)).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{urchs_stefanie_2020_3936490,
author = {Urchs, Stefanie and
Mitrović, Jelena},
title = {{German legal jugements annotated with judement
style components}},
month = jul,
year = 2020,
publisher = {Zenodo},
doi = {10.5281/zenodo.3936490},
url = {https://doi.org/10.5281/zenodo.3936490}
}
```
```
@conference{icaart21,
author = {Urchs., Stefanie and Mitrovi{\'{c}}., Jelena and Granitzer., Michael},
booktitle = {Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,},
doi = {10.5220/0010187305150521},
isbn = {978-989-758-484-8},
issn = {2184-433X},
organization = {INSTICC},
pages = {515--521},
publisher = {SciTePress},
title = {{Design and Implementation of German Legal Decision Corpora}},
year = {2021}
}
```
### Contributions
Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
dataset.
|
shahidul034/text_generation_model_data2 | 2022-07-03T12:15:05.000Z | [
"region:us"
] | shahidul034 | null | null | null | 0 | 3 | Entry not found |
shahidul034/text_generation_model_data3 | 2022-07-03T13:33:26.000Z | [
"region:us"
] | shahidul034 | null | null | null | 0 | 3 | Entry not found |
shahidul034/text_generation_model_data6 | 2022-07-03T16:26:12.000Z | [
"region:us"
] | shahidul034 | null | null | null | 0 | 3 | Entry not found |
Yincen/SalienceEvaluation | 2022-07-04T02:36:58.000Z | [
"task_categories:text-classification",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:gpl-3.0",
"region:us"
] | Yincen | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language:
- zh
language_creators:
- found
license:
- gpl-3.0
multilinguality:
- monolingual
pretty_name: Yincen/SalienceEvaluation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-input-text-classification
---
# Dataset Card for Yincen/SalienceEvaluation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@qyccc](https://github.com/qyccc) for adding this dataset. |
shahidul034/text_generation_model_data7 | 2022-07-04T04:26:21.000Z | [
"region:us"
] | shahidul034 | null | null | null | 0 | 3 | Entry not found |
CShorten/ArXiv-ML-Abstract-Embeddings | 2022-07-04T13:13:37.000Z | [
"region:us"
] | CShorten | null | null | null | 1 | 3 | This dataset contains embeddings of the abstracts of ArXiv Machine Learning papers.
The embeddings were produced with sentence-transformers/paraphrase-MiniLM-L6-v2. The model can be accessed here: <a href = "https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2/discussions/2">HuggingFace Sentence Transformers</a>
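As a sketch of how such embeddings can be compared (assuming each record stores its embedding as a list of floats — the exact field name may differ), cosine similarity between two abstract embeddings can be computed with NumPy:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy low-dimensional vectors standing in for two 384-dimensional MiniLM embeddings
emb_a = [1.0, 0.0, 1.0]
emb_b = [1.0, 1.0, 0.0]
print(cosine_similarity(emb_a, emb_b))  # ≈ 0.5 for these toy vectors
```

The same function applies unchanged to the real 384-dimensional vectors produced by paraphrase-MiniLM-L6-v2.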
The original dataset before embedding can be accessed here: <a href = "https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers">ML ArXiv Papers</a> |
shahidul034/text_generation_model_data9 | 2022-07-04T14:03:27.000Z | [
"region:us"
] | shahidul034 | null | null | null | 0 | 3 | Entry not found |
lyakaap/laion-mini-ja | 2022-07-05T02:30:45.000Z | [
"region:us"
] | lyakaap | null | null | null | 1 | 3 | #samples=5007831
```
from datasets import load_dataset

# Load the Japanese subset of LAION-2B and drop unused columns
dataset = load_dataset('lyakaap/laion2B-japanese-subset', split='train')
dataset = dataset.remove_columns(['LANGUAGE', 'NSFW', 'LICENSE', 'SAMPLE_ID'])
# Keep images between 128 and 384 pixels on both sides
dataset = dataset.filter(lambda x: x['HEIGHT'] <= 384 and x['WIDTH'] <= 384)
dataset = dataset.filter(lambda x: x['HEIGHT'] >= 128 and x['WIDTH'] >= 128)
# Keep pairs with an image-text similarity score of at least 0.31
dataset = dataset.filter(lambda x: x['similarity'] >= 0.31)
dataset.push_to_hub('lyakaap/laion-mini-ja', token='XXX')
``` |
Paul/hatecheck-portuguese | 2022-07-05T10:27:47.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 2 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Portuguese HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
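For instance, the `disagreement_in_template` flag documented below can be used to drop contested templates. A minimal sketch with pandas (the rows here are invented for illustration; in practice, load the actual CSV from this repository):

```python
import pandas as pd

# Toy rows mimicking the MHC schema (values invented for illustration)
mhc = pd.DataFrame({
    "mhc_case_id": ["portuguese-1", "portuguese-2", "portuguese-3"],
    "functionality": ["derog_neg_emote_h", "derog_neg_emote_h", "target_obj_nh"],
    "label_gold": ["hateful", "hateful", "non-hateful"],
    "disagreement_in_template": [False, True, False],
})

# Keep only cases from templates with no annotator disagreement
clean = mhc[~mhc["disagreement_in_template"]]
print(len(clean))  # 2
```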
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin scripts required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
Paul/hatecheck-polish | 2022-07-05T10:26:41.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Polish HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin scripts required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
Paul/hatecheck-mandarin | 2022-07-05T10:32:33.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Mandarin HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin scripts required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
Paul/hatecheck-italian | 2022-07-05T10:35:17.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:it",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- it
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Italian HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin scripts required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
Paul/hatecheck-hindi | 2022-07-05T10:36:37.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:hi",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- hi
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Hindi HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin scripts required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
Paul/hatecheck-dutch | 2022-07-05T10:41:31.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nl",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- nl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Dutch HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin scripts required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
Heriot-WattUniversity/dialog_babi | 2022-07-12T08:27:12.000Z | [
"arxiv:1605.07683",
"arxiv:1502.05698",
"region:us"
] | Heriot-WattUniversity | This section presents the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain described in the paper:
Antoine Bordes, Y-Lan Boureau, Jason Weston, Learning End-to-End Goal-Oriented Dialog, arxiv:1605.07683.
Each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding of the previous section.
For each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -OOV.txt) that contains dialogs including entities not present in training and development sets. | @article{bordes2016learning,
title={Learning end-to-end goal-oriented dialog},
author={Bordes, Antoine and Boureau, Y-Lan and Weston, Jason},
journal={arXiv preprint arXiv:1605.07683},
year={2016}
} | null | 1 | 3 | # Dialog bAbI tasks data
In this directory is the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain as described in the paper "Learning End-to-End Goal-Oriented Dialog" by Bordes & Weston (http://arxiv.org/abs/1605.07683). The aim is that each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding already released with the paper "Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks" by Weston et al. (http://arxiv.org/abs/1502.05698).
## Data
For each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -OOV.txt) that contains dialogs including entities not present in training and development sets.
The file format for each task is as follows:
`ID user_utterance [tab] bot_utterances`
The IDs for a given dialog start at 1 and increase. When the IDs in a file reset back to 1, you can consider the following sentences a new dialog. When the bot speaks twice in a row, we used the special token `<SILENCE>` to fill in for the missing user utterance.
For example (for task 1):
```
1 hi hello what can i help you with today
2 can you make a restaurant reservation with italian cuisine for six people in a cheap price range i'm on it
3 <SILENCE> where should it be
4 rome please ok let me look into some options for you
5 <SILENCE> api_call italian rome six cheap
```
The goal of the tasks is to predict the bot utterances, which can be either sentences or API calls (sentences starting with the special token `api_call`).
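The line format above can be parsed with a few lines of Python (a sketch, not part of the original release):

```python
def parse_dialogs(lines):
    """Split bAbI dialog lines ("ID user\tbot") into a list of dialogs,
    where each dialog is a list of (user_utterance, bot_utterance) pairs."""
    dialogs, current = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        idx, _, rest = line.partition(" ")
        if int(idx) == 1 and current:  # IDs resetting to 1 start a new dialog
            dialogs.append(current)
            current = []
        user, _, bot = rest.partition("\t")
        current.append((user, bot))
    if current:
        dialogs.append(current)
    return dialogs

sample = [
    "1 hi\thello what can i help you with today",
    "2 <SILENCE>\tapi_call italian rome six cheap",
    "1 hi\thello what can i help you with today",
]
print(len(parse_dialogs(sample)))  # 2
```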
Along with the train, dev and test sets, we also include a knowledge base file (dialog-babi-kb-all.txt) that contains all entities appearing in dialogs for tasks 1-5. We also include a file with the candidates to select the answer from (dialog-babi-candidates.txt) for tasks 1-5, which is simply made up of all the bot utterances in the train, dev and test sets for these tasks.
Task 6 is a bit different since its data comes from the Dialog State Tracking Challenge 2 (http://camdial.org/~mh521/dstc/), which we converted into the same format as the other tasks. There is no OOV test set associated with this task, and the knowledge base (dialog-babi-task6-dstc2-kb.txt) is imperfect. This task has its own candidates file (dialog-babi-task6-dstc2-candidates.txt).
## License
This dataset is released under Creative Commons Attribution 3.0 Unported license. A copy of this license is included with the data.
## Contact
This port was prepared by Alessandro Suglia, who has made the dataset available via Hugging Face Datasets.
For more details on the dataset and baselines, see the paper "Learning End-to-End Goal-Oriented Dialog" by Antoine Bordes and Jason Weston (http://arxiv.org/abs/1605.07683). For any information, contact Antoine Bordes: abordes (at) fb (dot) com.
|
biglam/atypical_animacy | 2022-07-22T17:29:12.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:c... | biglam | Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library (available via https://doi.org/10.21250/db14, British Library Labs, 2014).
This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence. | @article{DBLP:journals/corr/abs-2005-11140,
author = {Mariona Coll Ardanuy and
Federico Nanni and
Kaspar Beelen and
Kasra Hosseini and
Ruth Ahnert and
Jon Lawrence and
Katherine McDonough and
Giorgia Tolfo and
Daniel C. S. Wilson and
Barbara McGillivray},
title = {Living Machines: {A} study of atypical animacy},
journal = {CoRR},
volume = {abs/2005.11140},
year = {2020},
url = {https://arxiv.org/abs/2005.11140},
eprinttype = {arXiv},
eprint = {2005.11140},
timestamp = {Sat, 23 Jan 2021 01:12:25 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-11140.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 3 | 3 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- machine-generated
license:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: Atypical Animacy
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- intent-classification
---
# Dataset Card for atypical_animacy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://bl.iro.bl.uk/concern/datasets/323177af-6081-4e93-8aaf-7932ca4a390a?locale=en
- **Repository:** https://github.com/Living-with-machines/AtypicalAnimacy
- **Paper:** https://arxiv.org/abs/2005.11140
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Mariona Coll Ardanuy](mailto:mcollardanuy@turing.ac.uk), [Daniel CS Wilson](mailto:dwilson@turing.ac.uk)
### Dataset Summary
Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.
### Supported Tasks and Leaderboards
- `text-classification` - This dataset can be used to determine whether a mention of an entity in a document is humanlike or not.
- `entity-recognition` - The dataset can be used to fine-tune large models for NER, albeit for a very specific use case.
### Languages
The text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code is `en`.
## Dataset Structure
The dataset has a single configuration.
### Data Instances
An example data point:
```
{'id': '002757962_01_184_16',
'sentence': '100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.',
'context': 'Fig. 100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue. The effect of this on a long boiler is to cause springing and leakage of the seams from the heat being applied to one side of the boiler only.',
'target': 'boiler',
'animacy': 0.0,
'humanness': 1.0,
'offsets': [20, 26],
'date': '1893'}
```
### Data Fields
- id: sentence identifier according to internal Living with Machines BL books indexing.
- sentence: sentence where target expression occurs.
- context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right.
- target: target expression.
- animacy: animacy of the target expression.
- humanness: humanness of the target expression.
- offsets: start and end character offsets of the target expression within the sentence.
- date: year associated with the source text.
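As the example record above suggests, `offsets` appear to be `[start, end)` character indices into `sentence`. A minimal illustrative sketch (the record is copied from the example instance above):

```python
# The `offsets` field indexes the target expression inside `sentence`;
# this record is the example instance shown above.
record = {
    "sentence": "100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.",
    "target": "boiler",
    "offsets": [20, 26],
}

start, end = record["offsets"]
print(record["sentence"][start:end])  # → boiler
```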
### Data Splits
| Split | Examples |
| ----- | -------- |
| Train | 598      |
## Dataset Creation
The dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors,
> "we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love. We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue"
### Curation Rationale
From the paper:
> The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found (folktales, in particular, are richer in typically inanimate entities that become animate), these account for a very small proportion of the data. We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts.
### Source Data
#### Initial Data Collection and Normalization
The dataset was generated by manually annotating books that have been digitized by the British Library
#### Who are the source language producers?
The data was originally produced by British authors in the 19th century. The books were then digitized, which introduces some noise due to the OCR process. The annotators are from The Alan Turing Institute, the British Library, the University of Cambridge, the University of Exeter, and Queen Mary University of London.
### Annotations
#### Annotation process
Annotation was carried out in two parts.
For the initial annotation process, from the paper:
> "For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible. The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. "
For the final annotations, from the paper:
> A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization.
#### Who are the annotators?
Annotations were carried out by the following people:
- Giorgia Tolfo
- Ruth Ahnert
- Kaspar Beelen
- Mariona Coll Ardanuy
- Jon Lawrence
- Katherine McDonough
- Federico Nanni
- Daniel CS Wilson
### Personal and Sensitive Information
This dataset does not contain any personal information, since it consists of digitizations of books from the 19th century. Some passages might be sensitive, but this is not explicitly discussed in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The curators for this dataset are:
- Kaspar Beelen
- Mariona Coll Ardanuy
- Federico Nanni
- Giorgia Tolfo
### Licensing Information
CC0 1.0 Universal Public Domain Dedication
### Citation Information
```
@article{DBLP:journals/corr/abs-2005-11140,
author = {Mariona Coll Ardanuy and
Federico Nanni and
Kaspar Beelen and
Kasra Hosseini and
Ruth Ahnert and
Jon Lawrence and
Katherine McDonough and
Giorgia Tolfo and
Daniel C. S. Wilson and
Barbara McGillivray},
title = {Living Machines: {A} study of atypical animacy},
journal = {CoRR},
volume = {abs/2005.11140},
year = {2020},
url = {https://arxiv.org/abs/2005.11140},
eprinttype = {arXiv},
eprint = {2005.11140},
timestamp = {Sat, 23 Jan 2021 01:12:25 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-11140.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Bingsu/KcBERT_Pre-Training_Corpus | 2022-07-13T07:26:02.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"langu... | Bingsu | null | null | null | 0 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: KcBERT Pre-Training Corpus (Korean News Comments)
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# KcBERT Pre-Training Corpus (Korean News Comments)
## Dataset Description
- **Homepage:** [KcBERT Pre-Training Corpus](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments)
- **Repository:** [Beomi/KcBERT](https://github.com/Beomi/KcBERT)
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
## KcBERT
[beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base)
Github KcBERT Repo: [https://github.com/Beomi/KcBERT](https://github.com/Beomi/KcBERT)
KcBERT is a Korean Comments BERT pretrained on this corpus.
(You can use it via Hugging Face's Transformers library!)
This dataset contains the **CLEANED** corpus, preprocessed with the code below.
```python
import re

import emoji  # older versions expose `emoji.UNICODE_EMOJI` as a flat dict (removed in emoji>=2.0)
from soynlp.normalizer import repeat_normalize

# Whitelist: space, basic punctuation, ASCII, Hangul, and known emoji;
# any run of other characters is replaced with a single space.
emojis = ''.join(emoji.UNICODE_EMOJI.keys())
pattern = re.compile(f'[^ .,?!/@$%~%·∼()\x00-\x7Fㄱ-힣{emojis}]+')
url_pattern = re.compile(
    r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)')

def clean(x):
    x = pattern.sub(' ', x)     # drop characters outside the whitelist
    x = url_pattern.sub('', x)  # remove URLs
    x = x.strip()
    x = repeat_normalize(x, num_repeats=2)  # collapse character repeats (e.g. ㅋㅋㅋㅋ -> ㅋㅋ)
    return x
```
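For readers without `soynlp`/`emoji` installed, here is a simplified, stdlib-only approximation of the same idea. This is a sketch, not the exact pipeline: it omits the emoji whitelist and the repeat normalization, keeping only the character whitelist (space, punctuation, ASCII, Hangul) and the URL removal.

```python
import re

# Simplified whitelist: anything outside space/punctuation/ASCII/Hangul
# is replaced with a single space. \u3131-\ud7a3 covers the Hangul range
# (ㄱ-힣) used in the original pattern.
keep_pattern = re.compile(r'[^ .,?!/@$%~%·∼()\x00-\x7F\u3131-\ud7a3]+')
url_pattern = re.compile(r'https?://\S+')

def clean_simple(x: str) -> str:
    x = url_pattern.sub('', x)    # remove URLs first
    x = keep_pattern.sub(' ', x)  # replace non-whitelisted runs with a space
    return x.strip()

print(clean_simple('좋아요!! ★★★'))  # → 좋아요!!
```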
### License
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Dataset Structure
### Data Instances
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KcBERT_Pre-Training_Corpus")
>>> dataset
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 86246285
})
})
```
### Data Size
download: 7.90 GiB<br>
generated: 11.86 GiB<br>
total: 19.76 GiB
※ You can download this dataset from [kaggle](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments), and it's 5 GiB. (12.48 GiB when uncompressed)
### Data Fields
- text: `string`
### Data Splits
| | train |
| ---------- | -------- |
| # of texts | 86246285 |
|
ArthurBaia/squad_v1_pt_br | 2022-11-09T15:34:43.000Z | [
"region:us"
] | ArthurBaia | This dataset was translated by Deep Learning Brazil | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 3 | 3 | This dataset was created by Deep Learning Brasil(www.deeplearningbrasil.com.br). I just published it on Hugging Face hub with the intention to share it with more people that are training brazilian portuguese models. The original link is here drive.google.com/file/d/1Q0IaIlv2h2BC468MwUFmUST0EyN7gNkn/view. |