Columns: id (string), lastModified (string timestamp), tags (list), author (string), description (string), citation (string), cardData (null), likes (int64), downloads (int64), card (string)
growth-cadet/news_to_json
2023-10-09T21:07:55.000Z
[ "region:us" ]
growth-cadet
null
null
null
0
5
--- dataset_info: features: - name: instruction dtype: string - name: context dtype: string - name: response dtype: string splits: - name: train num_bytes: 9718266 num_examples: 2007 download_size: 4579486 dataset_size: 9718266 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "news_to_json" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
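Per the schema above, each train example carries three string features (`instruction`, `context`, `response`). A minimal sketch of validating a record against that schema — the field values are hypothetical, and the assumption that `response` holds serialized JSON is inferred from the dataset name only:

```python
import json

# Hypothetical record shaped like the card's schema:
# instruction / context / response, all plain strings.
record = {
    "instruction": "Convert the news item to JSON.",
    "context": "ACME Corp. announced a new widget on Monday.",
    "response": json.dumps({"company": "ACME Corp.", "event": "product announcement"}),
}

# Every declared feature is a string.
assert all(isinstance(record[k], str) for k in ("instruction", "context", "response"))

# Assumption: the response field itself parses as JSON.
parsed = json.loads(record["response"])
print(sorted(parsed))  # ['company', 'event']
```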
bnl_newspapers
2023-01-25T14:27:26.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar"...
null
Digitised historic newspapers from the Bibliothèque nationale (BnL) - the National Library of Luxembourg.
@misc{bnl_newspapers, title={Historical Newspapers}, url={https://data.bnl.lu/data/historical-newspapers/}, author={Bibliothèque nationale du Luxembourg} }
null
1
4
--- annotations_creators: - no-annotation language_creators: - found language: - ar - da - de - fi - fr - lb - nl - pt license: - cc0-1.0 multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling pretty_name: BnL Historical Newspapers dataset_info: features: - name: id dtype: string - name: source dtype: string - name: url dtype: string - name: title dtype: string - name: ispartof dtype: string - name: text dtype: string - name: pub_date dtype: timestamp[s] - name: publisher dtype: string - name: language dtype: string - name: article_type dtype: class_label: names: '0': ADVERTISEMENT_SECTION '1': BIBLIOGRAPHY '2': CHAPTER '3': INDEX '4': CONTRIBUTION '5': TABLE_OF_CONTENTS '6': WEATHER '7': SHIPPING '8': SECTION '9': ARTICLE '10': TITLE_SECTION '11': DEATH_NOTICE '12': SUPPLEMENT '13': TABLE '14': ADVERTISEMENT '15': CHART_DIAGRAM '16': ILLUSTRATION '17': ISSUE - name: extent dtype: int32 config_name: processed splits: - name: train num_bytes: 1611620156 num_examples: 537558 download_size: 1224029060 dataset_size: 1611620156 --- # Dataset Card for BnL Historical Newspapers ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of 
Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.bnl.lu/data/historical-newspapers/ - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** opendata@bnl.etat.lu ### Dataset Summary The BnL has digitised over 800,000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the "Processed Datasets" collection. The BnL: > processed all newspapers and monographs that are in the public domain and extracted the full text and associated metadata of every single article, section, advertisement… The result is a large number of small, easy-to-use XML files formatted using Dublin Core. [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure The dataset currently contains a single configuration. ### Data Instances An example instance from the dataset: ``` python {'id': 'https://persist.lu/ark:/70795/wx8r4c/articles/DTL47', 'article_type': 8, 'extent': 49, 'ispartof': 'Luxemburger Wort', 'pub_date': datetime.datetime(1853, 3, 23, 0, 0), 'publisher': 'Verl. der St-Paulus-Druckerei', 'source': 'newspaper/luxwort/1853-03-23', 'text': 'Asien. Eine neue Nedcrland-Post ist angekommen mil Nachrichten aus Calcutta bis zum 5. Febr.; Vom» vay, 12. Febr. ; Nangun und HongKong, 13. Jan. Die durch die letzte Post gebrachle Nachricht, der König von Ava sei durch seinen Bruder enlhronl worden, wird bestätigt. (K. Z.) Verantwortl. Herausgeber, F. 
Schümann.', 'title': 'Asien.', 'url': 'http://www.eluxemburgensia.lu/webclient/DeliveryManager?pid=209701#panel:pp|issue:209701|article:DTL47', 'language': 'de' } ``` ### Data Fields - 'id': This is a unique and persistent identifier using ARK. - 'article_type': The type of the exported data, possible values ('ADVERTISEMENT_SECTION', 'BIBLIOGRAPHY', 'CHAPTER', 'INDEX', 'CONTRIBUTION', 'TABLE_OF_CONTENTS', 'WEATHER', 'SHIPPING', 'SECTION', 'ARTICLE', 'TITLE_SECTION', 'DEATH_NOTICE', 'SUPPLEMENT', 'TABLE', 'ADVERTISEMENT', 'CHART_DIAGRAM', 'ILLUSTRATION', 'ISSUE') - 'extent': The number of words in the text field. - 'ispartof': The complete title of the source document, e.g. “Luxemburger Wort”. - 'pub_date': The publishing date of the document, e.g. “1848-12-15”. - 'publisher': The publisher of the document, e.g. “Verl. der St-Paulus-Druckerei”. - 'source': Describes the source of the document. For example <dc:source>newspaper/luxwort/1848-12-15</dc:source> means that this article comes from the newspaper “luxwort” (ID for Luxemburger Wort) issued on 15.12.1848. - 'text': The full text of the entire article, section, advertisement, etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines. - 'title': The main title of the article, section, advertisement, etc. - 'url': The link to the BnLViewer on eluxemburgensia.lu to view the resource online. - 'language': The language of the text, possible values ('ar', 'da', 'de', 'fi', 'fr', 'lb', 'nl', 'pt') ### Data Splits This dataset contains a single split `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{bnl_newspapers, title={Historical Newspapers}, url={https://data.bnl.lu/data/historical-newspapers/}, author={Bibliothèque nationale du Luxembourg} } ``` ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
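The card defines `extent` as the number of words in `text`. On the example instance shown above, simple whitespace tokenisation reproduces the stated value of 49 — a quick sketch, assuming whitespace splitting is the intended word count:

```python
# Full `text` of the example instance above (OCR errors preserved as-is).
text = (
    "Asien. Eine neue Nedcrland-Post ist angekommen mil Nachrichten aus "
    "Calcutta bis zum 5. Febr.; Vom» vay, 12. Febr. ; Nangun und HongKong, "
    "13. Jan. Die durch die letzte Post gebrachle Nachricht, der König von "
    "Ava sei durch seinen Bruder enlhronl worden, wird bestätigt. (K. Z.) "
    "Verantwortl. Herausgeber, F. Schümann."
)

# Whitespace tokenisation, as the `extent` field appears to count words.
extent = len(text.split())
print(extent)  # 49, matching the instance's `extent` field
```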
cail2018
2022-11-18T19:24:58.000Z
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:zh", "license:unknown", "judgement-prediction", "arxiv:1807.02478", "region:us" ]
null
In this paper, we introduce the Chinese AI and Law challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for judgment prediction. CAIL contains more than 2.6 million criminal cases published by the Supreme People's Court of China, several times more than other datasets used in existing works on judgment prediction. Moreover, the annotations of judgment results are more detailed and rich: they consist of applicable law articles, charges, and prison terms, which are expected to be inferred from the fact descriptions of the cases. For comparison, we implement several conventional text classification baselines for judgment prediction, and experimental results show that it is still a challenge for current models to predict the judgment results of legal cases, especially prison terms. The dataset is released to help researchers make improvements on legal judgment prediction.
@misc{xiao2018cail2018, title={CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction}, author={Chaojun Xiao and Haoxi Zhong and Zhipeng Guo and Cunchao Tu and Zhiyuan Liu and Maosong Sun and Yansong Feng and Xianpei Han and Zhen Hu and Heng Wang and Jianfeng Xu}, year={2018}, eprint={1807.02478}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
7
4
--- annotations_creators: - found language_creators: - found language: - zh license: - unknown multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: chinese-ai-and-law-cail-2018 pretty_name: CAIL 2018 tags: - judgement-prediction dataset_info: features: - name: fact dtype: string - name: relevant_articles sequence: int32 - name: accusation sequence: string - name: punish_of_money dtype: float32 - name: criminals sequence: string - name: death_penalty dtype: bool - name: imprisonment dtype: float32 - name: life_imprisonment dtype: bool splits: - name: exercise_contest_train num_bytes: 220112732 num_examples: 154592 - name: exercise_contest_valid num_bytes: 21702157 num_examples: 17131 - name: exercise_contest_test num_bytes: 41057634 num_examples: 32508 - name: first_stage_train num_bytes: 1779657510 num_examples: 1710856 - name: first_stage_test num_bytes: 244335194 num_examples: 217016 - name: final_test num_bytes: 44194707 num_examples: 35922 download_size: 984551626 dataset_size: 2351059934 --- # Dataset Card for CAIL 2018 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset 
Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/thunlp/CAIL/blob/master/README_en.md) - **Repository:** [Github](https://github.com/thunlp/CAIL) - **Paper:** [Arxiv](https://arxiv.org/abs/1807.02478) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
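Per the feature schema in the front matter above, each CAIL2018 case pairs a free-text `fact` with structured judgment targets. A hedged sketch of what a decoded record looks like — all field values here are hypothetical, and the unit of `imprisonment` is an assumption:

```python
# Hypothetical record following the card's declared feature types.
case = {
    "fact": "被告人……",                  # free-text fact description (string)
    "relevant_articles": [234],          # sequence of int32 law-article numbers
    "accusation": ["故意伤害"],           # sequence of charge strings
    "punish_of_money": 0.0,              # fine amount (float32)
    "criminals": ["被告人"],              # sequence of defendant names
    "death_penalty": False,              # bool
    "imprisonment": 12.0,                # prison term (float32; unit assumed months)
    "life_imprisonment": False,          # bool
}

# The three judgment targets named in the dataset description:
# law articles, charges, and prison terms.
targets = (case["relevant_articles"], case["accusation"], case["imprisonment"])
print(targets)
```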
caner
2023-03-16T14:47:48.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
null
The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling issues in the recognition of Arabic named entities.
@article{article, author = {Salah, Ramzi and Zakaria, Lailatul}, year = {2018}, month = {12}, pages = {}, title = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)}, volume = {96}, journal = {Journal of Theoretical and Applied Information Technology} }
null
1
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ar license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: CANER dataset_info: features: - name: token dtype: string - name: ner_tag dtype: class_label: names: '0': Allah '1': Book '2': Clan '3': Crime '4': Date '5': Day '6': Hell '7': Loc '8': Meas '9': Mon '10': Month '11': NatOb '12': Number '13': O '14': Org '15': Para '16': Pers '17': Prophet '18': Rlig '19': Sect '20': Time splits: - name: train num_bytes: 5095721 num_examples: 258240 download_size: 17063406 dataset_size: 5095721 --- # Dataset Card for CANER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [Classical-Arabic-Named-Entity-Recognition-Corpus](https://github.com/RamziSalah) - **Paper:** 
[Researchgate](https://www.researchgate.net/publication/330075080_BUILDING_THE_CLASSICAL_ARABIC_NAMED_ENTITY_RECOGNITION_CORPUS_CANERCORPUS) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities. ### Supported Tasks and Leaderboards - Named Entity Recognition ### Languages Classical Arabic ## Dataset Structure ### Data Instances An example from the dataset: ``` {'ner_tag': 1, 'token': 'الجامع'} ``` Where 1 stands for "Book" ### Data Fields - `id`: id of the sample - `token`: the tokens of the example text - `ner_tag`: the NER tags of each token The NER tags correspond to this list: ``` "Allah", "Book", "Clan", "Crime", "Date", "Day", "Hell", "Loc", "Meas", "Mon", "Month", "NatOb", "Number", "O", "Org", "Para", "Pers", "Prophet", "Rlig", "Sect", "Time" ``` ### Data Splits Training splits only ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
Ramzi Salah and Lailatul Qadri Zakaria ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information [More Information Needed] ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @article{article, author = {Salah, Ramzi and Zakaria, Lailatul}, year = {2018}, month = {12}, pages = {}, title = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)}, volume = {96}, journal = {Journal of Theoretical and Applied Information Technology} } ### Contributions Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset.
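The `ner_tag` ClassLabel above maps integer ids to the 21 tag names listed in the card; a small sketch of decoding the example instance with that mapping:

```python
# Tag names in ClassLabel order, copied from the card's feature definition.
NER_TAGS = [
    "Allah", "Book", "Clan", "Crime", "Date", "Day", "Hell", "Loc",
    "Meas", "Mon", "Month", "NatOb", "Number", "O", "Org", "Para",
    "Pers", "Prophet", "Rlig", "Sect", "Time",
]

# The example instance from the card: tag id 1 decodes to "Book".
example = {"ner_tag": 1, "token": "الجامع"}
print(NER_TAGS[example["ner_tag"]])  # Book
```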
com_qa
2023-06-27T07:38:08.000Z
[ "task_categories:question-answering", "language:en", "region:us" ]
null
ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website. By collecting questions from such a site we ensure that the information needs are ones of interest to actual users. Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research compared to those collected from an engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.
@inproceedings{abujabal-etal-2019-comqa, title = {ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters}, author = {Abujabal, Abdalghani and Saha Roy, Rishiraj and Yahya, Mohamed and Weikum, Gerhard}, booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)}, month = {jun}, year = {2019}, address = {Minneapolis, Minnesota}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/N19-1027}, doi = {10.18653/v1/N19-1027}, pages = {307--317}, }
null
2
4
--- language: - en paperswithcode_id: comqa pretty_name: ComQA dataset_info: features: - name: cluster_id dtype: string - name: questions sequence: string - name: answers sequence: string splits: - name: train num_bytes: 696645 num_examples: 3966 - name: test num_bytes: 273384 num_examples: 2243 - name: validation num_bytes: 131945 num_examples: 966 download_size: 1671684 dataset_size: 1101974 task_categories: - question-answering --- # Dataset Card for "com_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://qa.mpi-inf.mpg.de/comqa/](http://qa.mpi-inf.mpg.de/comqa/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 1.10 MB - **Total amount of disk used:** 2.78 MB ### Dataset Summary ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website. By collecting questions from such a site we ensure that the information needs are ones of interest to actual users. Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research compared to those collected from an engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 1.10 MB - **Total amount of disk used:** 2.78 MB An example of 'validation' looks as follows. 
``` { "answers": ["https://en.wikipedia.org/wiki/north_sea"], "cluster_id": "cluster-922", "questions": ["what sea separates the scandinavia peninsula from britain?", "which sea separates britain from scandinavia?"] } ``` ### Data Fields The data fields are the same among all splits. #### default - `cluster_id`: a `string` feature. - `questions`: a `list` of `string` features. - `answers`: a `list` of `string` features. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default| 3966| 966|2243| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{abujabal-etal-2019-comqa, title = {ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters}, author = {Abujabal, Abdalghani and Saha Roy, Rishiraj and Yahya, Mohamed and Weikum, Gerhard}, booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)}, month = {jun}, year = {2019}, address = {Minneapolis, Minnesota}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/N19-1027}, doi = {10.18653/v1/N19-1027}, pages = {307--317}, } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), 
[@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
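Each ComQA cluster groups question paraphrases around one shared answer set. A sketch of expanding the validation example above into (question, answers) pairs, as one might do when flattening clusters for training:

```python
# The validation instance shown in the card.
example = {
    "cluster_id": "cluster-922",
    "questions": [
        "what sea separates the scandinavia peninsula from britain?",
        "which sea separates britain from scandinavia?",
    ],
    "answers": ["https://en.wikipedia.org/wiki/north_sea"],
}

# Every paraphrase in a cluster shares the cluster's answer set.
pairs = [(q, example["answers"]) for q in example["questions"]]
print(len(pairs))  # 2
```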
compguesswhat
2023-04-05T10:02:19.000Z
[ "task_categories:visual-question-answering", "task_ids:visual-question-answering", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other-guesswhat", "language:en", "license:unknown", "region:...
null
CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: https://compguesswhat.github.io
@inproceedings{suglia2020compguesswhat, title={CompGuessWhat?!: a Multi-task Evaluation Framework for Grounded Language Learning}, author={Suglia, Alessandro and Konstas, Ioannis and Vanzo, Andrea and Bastianelli, Emanuele and Elliott, Desmond and Frank, Stella and Lemon, Oliver}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} }
null
1
4
--- annotations_creators: - machine-generated language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: CompGuessWhat?! size_categories: - 100K<n<1M source_datasets: - extended|other-guesswhat task_categories: - visual-question-answering task_ids: - visual-question-answering paperswithcode_id: compguesswhat dataset_info: - config_name: compguesswhat-original features: - name: id dtype: int32 - name: target_id dtype: int32 - name: timestamp dtype: string - name: status dtype: string - name: image struct: - name: id dtype: int32 - name: file_name dtype: string - name: flickr_url dtype: string - name: coco_url dtype: string - name: height dtype: int32 - name: width dtype: int32 - name: visual_genome struct: - name: width dtype: int32 - name: height dtype: int32 - name: url dtype: string - name: coco_id dtype: int32 - name: flickr_id dtype: string - name: image_id dtype: string - name: qas sequence: - name: question dtype: string - name: answer dtype: string - name: id dtype: int32 - name: objects sequence: - name: id dtype: int32 - name: bbox sequence: float32 length: 4 - name: category dtype: string - name: area dtype: float32 - name: category_id dtype: int32 - name: segment sequence: sequence: float32 splits: - name: train num_bytes: 126548689 num_examples: 46341 - name: validation num_bytes: 26055261 num_examples: 9738 - name: test num_bytes: 25981593 num_examples: 9621 download_size: 107201655 dataset_size: 178585543 - config_name: compguesswhat-zero_shot features: - name: id dtype: int32 - name: target_id dtype: string - name: status dtype: string - name: image struct: - name: id dtype: int32 - name: file_name dtype: string - name: coco_url dtype: string - name: height dtype: int32 - name: width dtype: int32 - name: license dtype: int32 - name: open_images_id dtype: string - name: date_captured dtype: string - name: objects sequence: - name: id dtype: string - name: bbox sequence: float32 length: 4 - name: category 
dtype: string - name: area dtype: float32 - name: category_id dtype: int32 - name: IsOccluded dtype: int32 - name: IsTruncated dtype: int32 - name: segment sequence: - name: MaskPath dtype: string - name: LabelName dtype: string - name: BoxID dtype: string - name: BoxXMin dtype: string - name: BoxXMax dtype: string - name: BoxYMin dtype: string - name: BoxYMax dtype: string - name: PredictedIoU dtype: string - name: Clicks dtype: string splits: - name: nd_valid num_bytes: 13557059 num_examples: 5343 - name: nd_test num_bytes: 36352201 num_examples: 13836 - name: od_valid num_bytes: 14093233 num_examples: 5372 - name: od_test num_bytes: 33049755 num_examples: 13300 download_size: 4845966 dataset_size: 97052248 --- # Dataset Card for "compguesswhat" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://compguesswhat.github.io/](https://compguesswhat.github.io/) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 112.05 MB - **Size of the generated dataset:** 271.11 MB - **Total amount of disk used:** 383.16 MB ### Dataset Summary CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: https://compguesswhat.github.io ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### compguesswhat-original - **Size of downloaded dataset files:** 107.21 MB - **Size of the generated dataset:** 174.37 MB - **Total amount of disk used:** 281.57 MB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "id": 2424, "image": "{\"coco_url\": \"http://mscoco.org/images/270512\", \"file_name\": \"COCO_train2014_000000270512.jpg\", \"flickr_url\": \"http://farm6.stat...", "objects": "{\"area\": [1723.5133056640625, 4838.5361328125, 287.44476318359375, 44918.7109375, 3688.09375, 522.1935424804688], \"bbox\": [[5.61...", "qas": { "answer": ["Yes", "No", "No", "Yes"], "id": [4983, 4996, 5006, 5017], "question": ["Is it in the foreground?", "Does it have wings?", "Is it a person?", "Is it a vehicle?"] }, "status": "success", "target_id": 1197044, "timestamp": "2016-07-08 15:07:38" } ``` #### compguesswhat-zero_shot - **Size of downloaded dataset files:** 4.84 MB - **Size of the generated dataset:** 96.74 MB - **Total amount of disk used:** 101.59 MB An example of 'nd_valid' looks as follows. ``` This example was too long and was cropped: { "id": 0, "image": { "coco_url": "https://s3.amazonaws.com/nocaps/val/004e21eb2e686f40.jpg", "date_captured": "2018-11-06 11:04:33", "file_name": "004e21eb2e686f40.jpg", "height": 1024, "id": 6, "license": 0, "open_images_id": "004e21eb2e686f40", "width": 768 }, "objects": "{\"IsOccluded\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"IsTruncated\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"area\": [3...", "status": "incomplete", "target_id": "004e21eb2e686f40_30" } ``` ### Data Fields The data fields are the same among all splits. #### compguesswhat-original - `id`: a `int32` feature. - `target_id`: a `int32` feature. - `timestamp`: a `string` feature. - `status`: a `string` feature. - `id`: a `int32` feature. - `file_name`: a `string` feature. - `flickr_url`: a `string` feature. - `coco_url`: a `string` feature. - `height`: a `int32` feature. - `width`: a `int32` feature. - `width`: a `int32` feature. - `height`: a `int32` feature. - `url`: a `string` feature. - `coco_id`: a `int32` feature. - `flickr_id`: a `string` feature. - `image_id`: a `string` feature. 
- `qas`: a dictionary feature containing: - `question`: a `string` feature. - `answer`: a `string` feature. - `id`: a `int32` feature. - `objects`: a dictionary feature containing: - `id`: a `int32` feature. - `bbox`: a `list` of `float32` features. - `category`: a `string` feature. - `area`: a `float32` feature. - `category_id`: a `int32` feature. - `segment`: a dictionary feature containing: - `feature`: a `float32` feature. #### compguesswhat-zero_shot - `id`: a `int32` feature. - `target_id`: a `string` feature. - `status`: a `string` feature. - `id`: a `int32` feature. - `file_name`: a `string` feature. - `coco_url`: a `string` feature. - `height`: a `int32` feature. - `width`: a `int32` feature. - `license`: a `int32` feature. - `open_images_id`: a `string` feature. - `date_captured`: a `string` feature. - `objects`: a dictionary feature containing: - `id`: a `string` feature. - `bbox`: a `list` of `float32` features. - `category`: a `string` feature. - `area`: a `float32` feature. - `category_id`: a `int32` feature. - `IsOccluded`: a `int32` feature. - `IsTruncated`: a `int32` feature. - `segment`: a dictionary feature containing: - `MaskPath`: a `string` feature. - `LabelName`: a `string` feature. - `BoxID`: a `string` feature. - `BoxXMin`: a `string` feature. - `BoxXMax`: a `string` feature. - `BoxYMin`: a `string` feature. - `BoxYMax`: a `string` feature. - `PredictedIoU`: a `string` feature. - `Clicks`: a `string` feature. 
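The fields above are stored column-wise (parallel lists inside `qas` and `objects`), which can be awkward to navigate at first. The sketch below works through a toy game dict shaped like the instances above; all values are illustrative, not taken from the real data:

```python
# A toy game shaped like the `compguesswhat-original` instances in this card
# (illustrative values only, not real dataset content).
game = {
    "id": 2424,
    "status": "success",
    "target_id": 1197044,
    "qas": {
        "question": ["Is it in the foreground?", "Does it have wings?"],
        "answer": ["Yes", "No"],
        "id": [4983, 4996],
    },
    "objects": {
        "id": [1197044, 1197045],
        "category": ["person", "bicycle"],
        "bbox": [[5.6, 10.0, 30.0, 40.0], [50.0, 60.0, 20.0, 25.0]],
        "area": [1723.5, 4838.5],
        "category_id": [1, 2],
    },
}

def target_object(game):
    """Return the category and bbox of the game's target object."""
    idx = game["objects"]["id"].index(game["target_id"])
    return game["objects"]["category"][idx], game["objects"]["bbox"][idx]

category, bbox = target_object(game)
print(category, bbox)  # person [5.6, 10.0, 30.0, 40.0]

# Question/answer turns are stored column-wise; zip them back into turns:
turns = list(zip(game["qas"]["question"], game["qas"]["answer"]))
```

The same access pattern applies to real instances loaded through the `datasets` library using the config names in the YAML header above.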
### Data Splits #### compguesswhat-original | |train|validation|test| |----------------------|----:|---------:|---:| |compguesswhat-original|46341| 9738|9621| #### compguesswhat-zero_shot | |nd_valid|od_valid|nd_test|od_test| |-----------------------|-------:|-------:|------:|------:| |compguesswhat-zero_shot| 5343| 5372| 13836| 13300| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{suglia2020compguesswhat, title={CompGuessWhat?!: a Multi-task Evaluation Framework for Grounded Language Learning}, author={Suglia, Alessandro, Konstas, Ioannis, Vanzo, Andrea, Bastianelli, Emanuele, Desmond Elliott, Stella Frank and Oliver Lemon}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@aleSuglia](https://github.com/aleSuglia), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
id_clickbait
2023-01-25T14:32:36.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:cc-by-4.0", "region:us" ]
null
The CLICK-ID dataset is a collection of Indonesian news headlines collected from 12 local online news publishers: detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo, Tribunnews, and Wowkeren. The dataset comprises two main parts: (i) 46,119 raw article records, and (ii) 15,000 headlines annotated for clickbait. Annotation was conducted with 3 annotators examining each headline; judgments were based only on the headline, and the majority vote is taken as the ground truth. The annotated sample contains 6,290 clickbait and 8,710 non-clickbait headlines.
@inproceedings{id_clickbait, author = {Andika William and Yunita Sari}, title = {CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines}, year = {2020}, url = {http://dx.doi.org/10.17632/k42j7x2kpn.1}, }
null
0
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - id license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - fact-checking pretty_name: Indonesian Clickbait Headlines dataset_info: - config_name: annotated features: - name: id dtype: string - name: title dtype: string - name: label dtype: class_label: names: '0': non-clickbait '1': clickbait splits: - name: train num_bytes: 1268698 num_examples: 15000 download_size: 150769127 dataset_size: 1268698 - config_name: raw features: - name: id dtype: string - name: title dtype: string - name: source dtype: string - name: date dtype: string - name: category dtype: string - name: sub-category dtype: string - name: content dtype: string - name: url dtype: string splits: - name: train num_bytes: 81669386 num_examples: 38655 download_size: 150769127 dataset_size: 81669386 --- # Dataset Card for Indonesian Clickbait Headlines ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation 
Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.mendeley.com/datasets/k42j7x2kpn/1 - **Repository:** - **Paper:** [CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines](https://www.sciencedirect.com/science/article/pii/S2352340920311252#!) - **Leaderboard:** - **Point of Contact:** [Andika William](mailto:andika.william@mail.ugm.ac.id), [Yunita Sari](mailto:yunita.sari@ugm.ac.id) ### Dataset Summary The CLICK-ID dataset is a collection of Indonesian news headlines collected from 12 local online news publishers: detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo, Tribunnews, and Wowkeren. The dataset comprises two main parts: (i) 46,119 raw article records, and (ii) 15,000 headlines annotated for clickbait. Annotation was conducted with 3 annotators examining each headline; judgments were based only on the headline, and the majority vote is taken as the ground truth. The annotated sample contains 6,290 clickbait and 8,710 non-clickbait headlines. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances An example of an annotated article: ``` { 'id': '100', 'label': 1, 'title': "SAH! Ini Daftar Nama Menteri Kabinet Jokowi - Ma'ruf Amin" } ``` ### Data Fields #### Annotated - `id`: id of the sample - `title`: the title of the news article - `label`: the label of the article, either non-clickbait or clickbait #### Raw - `id`: id of the sample - `title`: the title of the news article - `source`: the name of the publisher/newspaper - `date`: the publication date of the article - `category`: the category of the article - `sub-category`: the sub-category of the article - `content`: the content of the article - `url`: the url of the article ### Data Splits The dataset contains a train split only.
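A short sketch of consuming the annotated split: integer labels map back to the names declared in this card's metadata (`0` for non-clickbait, `1` for clickbait). The records below are illustrative stand-ins shaped like the instance shown earlier, not real data:

```python
from collections import Counter

# Label order matches the `class_label` names in this card's metadata:
# 0 -> non-clickbait, 1 -> clickbait.
LABEL_NAMES = ["non-clickbait", "clickbait"]

# Illustrative records shaped like the annotated instances above
# (the second title is an invented placeholder headline).
examples = [
    {"id": "100", "label": 1,
     "title": "SAH! Ini Daftar Nama Menteri Kabinet Jokowi - Ma'ruf Amin"},
    {"id": "101", "label": 0, "title": "Harga beras stabil pekan ini"},
]

def label_distribution(examples):
    """Count examples per human-readable label name."""
    return Counter(LABEL_NAMES[ex["label"]] for ex in examples)

print(label_distribution(examples))
```

The same mapping applies to the full 15,000-headline annotated split when loaded through the `datasets` library.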
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons Attribution 4.0 International license ### Citation Information ``` @article{WILLIAM2020106231, title = "CLICK-ID: A novel dataset for Indonesian clickbait headlines", journal = "Data in Brief", volume = "32", pages = "106231", year = "2020", issn = "2352-3409", doi = "https://doi.org/10.1016/j.dib.2020.106231", url = "http://www.sciencedirect.com/science/article/pii/S2352340920311252", author = "Andika William and Yunita Sari", keywords = "Indonesian, Natural Language Processing, News articles, Clickbait, Text-classification", abstract = "News analysis is a popular task in Natural Language Processing (NLP). In particular, the problem of clickbait in news analysis has gained attention in recent years [1, 2]. However, the majority of the tasks has been focused on English news, in which there is already a rich representative resource. For other languages, such as Indonesian, there is still a lack of resource for clickbait tasks. Therefore, we introduce the CLICK-ID dataset of Indonesian news headlines extracted from 12 Indonesian online news publishers. It is comprised of 15,000 annotated headlines with clickbait and non-clickbait labels. 
Using the CLICK-ID dataset, we then developed an Indonesian clickbait classification model achieving favourable performance. We believe that this corpus will be useful for replicable experiments in clickbait detection or other experiments in NLP areas." } ``` ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
id_newspapers_2018
2022-11-03T16:16:15.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:id",...
null
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The uncompressed 500K JSON files (newspapers-json.tgz) take around 2.2 GB, and the cleaned, uncompressed text in one big file (newspapers.txt.gz) is about 1 GB. The original source in Google Drive also contains a dataset in HTML format, which includes raw data (pictures, CSS, JavaScript, ...) from the online news websites.
@inproceedings{id_newspapers_2018, author = {}, title = {Indonesian Newspapers 2018}, year = {2019}, url = {https://github.com/feryandi/Dataset-Artikel}, }
null
4
4
--- annotations_creators: - no-annotation language_creators: - found language: - id license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: Indonesian Newspapers 2018 dataset_info: features: - name: id dtype: string - name: url dtype: string - name: date dtype: string - name: title dtype: string - name: content dtype: string config_name: id_newspapers_2018 splits: - name: train num_bytes: 1116031922 num_examples: 499164 download_size: 446018349 dataset_size: 1116031922 --- # Dataset Card for Indonesian Newspapers 2018 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel) - **Repository:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel) - **Paper:** - **Leaderboard:** - **Point of Contact:** 
[feryandi.n@gmail.com](mailto:feryandi.n@gmail.com), [cahya.wirawan@gmail.com](mailto:cahya.wirawan@gmail.com) ### Dataset Summary The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The uncompressed 500K JSON files (newspapers-json.tgz) take around 2.2 GB, and the cleaned, uncompressed text in one big file (newspapers.txt.gz) is about 1 GB. The original source in Google Drive also contains a dataset in HTML format, which includes raw data (pictures, CSS, JavaScript, ...) from the online news websites. A copy of the original dataset is available at https://cloud.uncool.ai/index.php/s/mfYEAgKQoY3ebbM ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ``` { 'id': 'string', 'url': 'string', 'date': 'string', 'title': 'string', 'content': 'string' } ``` ### Data Instances An instance from the dataset: ``` {'id': '0', 'url': 'https://www.cnnindonesia.com/olahraga/20161221234219-156-181385/lorenzo-ingin-samai-rekor-rossi-dan-stoner', 'date': '2016-12-22 07:00:00', 'title': 'Lorenzo Ingin Samai Rekor Rossi dan Stoner', 'content': 'Jakarta, CNN Indonesia -- Setelah bergabung dengan Ducati, Jorge Lorenzo berharap bisa masuk dalam jajaran pebalap yang mampu jadi juara dunia kelas utama dengan dua pabrikan berbeda. Pujian Max Biaggi untuk Valentino Rossi Jorge Lorenzo Hadir dalam Ucapan Selamat Natal Yamaha Iannone: Saya Sering Jatuh Karena Ingin yang Terbaik Sepanjang sejarah, hanya ada lima pebalap yang mampu jadi juara kelas utama (500cc/MotoGP) dengan dua pabrikan berbeda, yaitu Geoff Duke, Giacomo Agostini, Eddie Lawson, Valentino Rossi, dan Casey Stoner. Lorenzo ingin bergabung dalam jajaran legenda tersebut. 
“Fakta ini sangat penting bagi saya karena hanya ada lima pebalap yang mampu menang dengan dua pabrikan berbeda dalam sejarah balap motor.” “Kedatangan saya ke Ducati juga menghadirkan tantangan yang sangat menarik karena hampir tak ada yang bisa menang dengan Ducati sebelumnya, kecuali Casey Stoner. Hal itu jadi motivasi yang sangat bagus bagi saya,” tutur Lorenzo seperti dikutip dari Crash Lorenzo saat ini diliputi rasa penasaran yang besar untuk menunggang sepeda motor Desmosedici yang dipakai tim Ducati karena ia baru sekali menjajal motor tersebut pada sesi tes di Valencia, usai MotoGP musim 2016 berakhir. “Saya sangat tertarik dengan Ducati arena saya hanya memiliki kesempatan mencoba motor itu di Valencia dua hari setelah musim berakhir. Setelah itu saya tak boleh lagi menjajalnya hingga akhir Januari mendatang. Jadi saya menjalani penantian selama dua bulan yang panjang,” kata pebalap asal Spanyol ini. Dengan kondisi tersebut, maka Lorenzo memanfaatkan waktu yang ada untuk liburan dan melepaskan penat. “Setidaknya apa yang terjadi pada saya saat ini sangat bagus karena saya jadi memiliki waktu bebas dan sedikit liburan.” “Namun tentunya saya tak akan larut dalam liburan karena saya harus lebih bersiap, terutama dalam kondisi fisik dibandingkan sebelumnya, karena saya akan menunggangi motor yang sulit dikendarai,” ucap Lorenzo. Selama sembilan musim bersama Yamaha, Lorenzo sendiri sudah tiga kali jadi juara dunia, yaitu pada 2010, 2012, dan 2015. (kid)'} ``` ### Data Fields - `id`: id of the sample - `url`: the url to the original article - `date`: the publishing date of the article - `title`: the title of the article - `content`: the content of the article ### Data Splits The dataset contains train set of 499164 samples. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer. ### Citation Information [N/A] ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
irc_disentangle
2022-11-18T20:10:09.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "conversation-disentanglement", "arxiv:1810.11118", "region:us"...
null
Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset contains 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context.
@inproceedings{kummerfeld-etal-2019-large, title = "A Large-Scale Corpus for Conversation Disentanglement", author = "Kummerfeld, Jonathan K. and Gouravajhala, Sai R. and Peper, Joseph J. and Athreya, Vignesh and Gunasekara, Chulaka and Ganhotra, Jatin and Patel, Siva Sankalp and Polymenakos, Lazaros C and Lasecki, Walter", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1374", doi = "10.18653/v1/P19-1374", pages = "3846--3856", arxiv = "https://arxiv.org/abs/1810.11118", software = "https://jkk.name/irc-disentanglement", data = "https://jkk.name/irc-disentanglement", abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.", }
null
4
4
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: [] paperswithcode_id: irc-disentanglement pretty_name: IRC Disentanglement tags: - conversation-disentanglement dataset_info: - config_name: ubuntu features: - name: id dtype: int32 - name: raw dtype: string - name: ascii dtype: string - name: tokenized dtype: string - name: date dtype: string - name: connections sequence: int32 splits: - name: train num_bytes: 56012854 num_examples: 220616 - name: validation num_bytes: 3081479 num_examples: 12510 - name: test num_bytes: 3919900 num_examples: 15010 download_size: 118470210 dataset_size: 63014233 - config_name: channel_two features: - name: id dtype: int32 - name: raw dtype: string - name: ascii dtype: string - name: tokenized dtype: string - name: connections sequence: int32 splits: - name: dev num_bytes: 197505 num_examples: 1001 - name: pilot num_bytes: 92663 num_examples: 501 - name: test num_bytes: 186823 num_examples: 1001 - name: pilot_dev num_bytes: 290175 num_examples: 1501 - name: all_ num_bytes: 496524 num_examples: 2602 download_size: 118470210 dataset_size: 1263690 --- # Dataset Card for IRC Disentanglement ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of 
Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) - [Acknowledgments](#acknowledgments) ## Dataset Description - **Homepage:** https://jkk.name/irc-disentanglement/ - **Repository:** https://github.com/jkkummerfeld/irc-disentanglement/tree/master/data - **Paper:** https://aclanthology.org/P19-1374/ - **Leaderboard:** NA - **Point of Contact:** jkummerf@umich.edu ### Dataset Summary Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset contains 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. Note: the GitHub repository for the dataset also contains several useful tools for: - Conversion (e.g. extracting conversations from graphs) - Evaluation - Preprocessing - Word embeddings trained on the full Ubuntu logs in 2018 ### Supported Tasks and Leaderboards Conversational Disentanglement ### Languages English (en) ## Dataset Structure ### Data Instances For Ubuntu: data["train"][1050] ``` { 'ascii': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)", 'connections': [1048, 1054, 1055, 1072, 1073], 'date': '2004-12-25', 'id': 1050, 'raw': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... 
what is?)", 'tokenized': "<s> ( also , i 'm guessing that this is n't a good place to report minor but annoying bugs ... what is ?) </s>" } ``` For Channel_two: data["train"][50] ``` { 'ascii': "[01:04] <Felicia> Chanel: i don't know off hand sorry", 'connections': [49, 53], 'id': 50, 'raw': "[01:04] <Felicia> Chanel: i don't know off hand sorry", 'tokenized': "<s> <user> : i do n't know off hand sorry </s>" } ``` ### Data Fields 'id' : The id of the message; this is the value that would appear in the 'connections' of associated messages. 'raw' : The original message from the IRC log, as downloaded. 'ascii' : The raw message converted to ASCII (unconvertible characters are replaced with a special word). 'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols. 'connections' : The indices of linked messages. (Ubuntu only) 'date' : The date the messages are from. The labelling for each date only starts after the first 1000 messages of that date. ### Data Splits The dataset has 4 parts: | Part | Number of Annotated Messages | | ------------- | ------------------------------------------- | | Train | 67,463 | | Dev | 2,500 | | Test | 5,000 | | Channel 2 | 2,600 | ## Dataset Creation ### Curation Rationale IRC is a synchronous chat setting with a long history of use. Several channels log all messages and make them publicly available. The Ubuntu channel is particularly heavily used and has been the subject of several academic studies. Data was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users). For full details, see the [annotation information page](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/data/READ.history.md). 
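Since `connections` encodes a reply graph, individual conversations can be recovered as connected components of that graph. The sketch below uses a small union-find pass over toy messages; the ids and links are illustrative, not from the corpus:

```python
# Toy messages shaped like the `ubuntu` instances above; `connections`
# holds the ids of messages each message is linked to (illustrative values).
messages = [
    {"id": 0, "connections": []},
    {"id": 1, "connections": [0]},
    {"id": 2, "connections": []},
    {"id": 3, "connections": [2]},
    {"id": 4, "connections": [1]},
]

def conversations(messages):
    """Group message ids into conversations: connected components of the reply graph."""
    parent = {m["id"]: m["id"] for m in messages}

    def find(x):
        # Follow parents to the root, halving the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for m in messages:
        for other in m["connections"]:
            union(m["id"], other)

    groups = {}
    for m in messages:
        groups.setdefault(find(m["id"]), set()).add(m["id"])
    return sorted(sorted(g) for g in groups.values())

print(conversations(messages))  # [[0, 1, 4], [2, 3]]
```

This mirrors, in spirit, the "extracting conversations from graphs" conversion tools mentioned above; the repository's own scripts should be preferred for faithful reconstruction.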
### Source Data #### Initial Data Collection and Normalization Data was collected from the Ubuntu IRC channel logs, which are publicly available at [https://irclogs.ubuntu.com/](https://irclogs.ubuntu.com/). The raw files are included, as well as two other versions: - ASCII, converted using the script [make_txt.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/make-txt.py) - Tok, tokenised text with rare words replaced by UNK using the script [dstc8-tokenise.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/dstc8-tokenise.py) The raw channel two data is from prior work [(Elsner and Charniak, 2008)](https://www.aclweb.org/anthology/P08-1095.pdf)]. #### Who are the source language producers? The text is from a large group of internet users asking questions and providing answers related to Ubuntu. ### Annotations #### Annotation process The data is expert annotated with: - Training, one annotation per line in general, a small portion is double-annotated and adjudicated - Dev, Channel 2, double annotated and adjudicated - Test, triple annotated and adjudicated | Part | Annotators | Adjudication? | | ------------- | --------------- | ------------------------------------- | | Train | 1 or 2 per file | For files with 2 annotators (only 10) | | Dev | 2 | Yes | | Test | 3 | Yes | | Channel 2 | 2 | Yes | #### Who are the annotators? Students and a postdoc at the University of Michigan. Everyone involved went through a training process with feedback to learn the annotation guidelines. ### Personal and Sensitive Information No content is removed or obfuscated. There is probably personal information in the dataset from users. ## Considerations for Using the Data ### Social Impact of Dataset The raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact. 
### Discussion of Biases The data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort. Given that users are only identified by their self-selected usernames, it is difficult to know more about the authors. ### Other Known Limitations Being focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication. Those conventions may not apply to other channels, or beyond IRC. ## Additional Information ### Dataset Curators Jonathan K. Kummerfeld ### Licensing Information Creative Commons Attribution 4.0 ### Citation Information ``` @inproceedings{kummerfeld-etal-2019-large, title = "A Large-Scale Corpus for Conversation Disentanglement", author = "Kummerfeld, Jonathan K. and Gouravajhala, Sai R. and Peper, Joseph J. and Athreya, Vignesh and Gunasekara, Chulaka and Ganhotra, Jatin and Patel, Siva Sankalp and Polymenakos, Lazaros C and Lasecki, Walter", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1374", doi = "10.18653/v1/P19-1374", pages = "3846--3856", arxiv = "https://arxiv.org/abs/1810.11118", software = "https://jkk.name/irc-disentanglement", data = "https://jkk.name/irc-disentanglement", abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. 
We use our data to re-examine prior work, in particular, finding that 89{\%} of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.", } ``` ### Contributions Thanks to [@dhruvjoshi1998](https://github.com/dhruvjoshi1998) for adding this dataset. Thanks to [@jkkummerfeld](https://github.com/jkkummerfeld) for improvements to the documentation. ### Acknowledgments This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM.
linnaeus
2023-06-15T14:40:39.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
null
A novel corpus of full-text documents manually annotated for species mentions.
@article{gerner2010linnaeus, title={LINNAEUS: a species name identification system for biomedical literature}, author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M}, journal={BMC bioinformatics}, volume={11}, number={1}, pages={85}, year={2010}, publisher={Springer} }
null
1
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: linnaeus pretty_name: LINNAEUS dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B '2': I config_name: linnaeus splits: - name: train num_bytes: 4772417 num_examples: 11936 - name: validation num_bytes: 1592823 num_examples: 4079 - name: test num_bytes: 2802877 num_examples: 7143 download_size: 18204624 dataset_size: 9168117 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [linnaeus](http://linnaeus.sourceforge.net/) - **Repository:** https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/linnaeus-IOB - **Paper:** [BMC 
Bioinformatics](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-85) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The LINNAEUS corpus consists of 100 randomly selected full-text documents from the PubMed Central Open Access (PMCOA) document set. All mentions of species terms were manually annotated and normalized to the NCBI taxonomy IDs of the intended species. The original LINNAEUS corpus is available in a TAB-separated standoff format. The resource does not define training, development or test subsets. We converted the corpus into BioNLP shared task standoff format using a custom script, split it into 50-, 17- and 33-document training, development and test sets, and then converted these into the CoNLL format using standoff2conll. As a full-text corpus, LINNAEUS contains comparatively many non-ASCII characters, which were mapped to ASCII using the standoff2conll -a option. The conversion was highly accurate, but due to sentence-splitting errors within entity mentions, the converted data contains four more annotations than the source data (100.09% of the original count). 99.77% of names in the original annotation matched names in the converted data. ### Supported Tasks and Leaderboards This dataset is used for species named entity recognition. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances An example from the dataset is: ``` {'id': '2', 'tokens': ['Scp160p', 'is', 'a', '160', 'kDa', 'protein', 'in', 'the', 'yeast', 'Saccharomyces', 'cerevisiae', 'that', 'contains', '14', 'repeats', 'of', 'the', 'hnRNP', 'K', '-', 'homology', '(', 'KH', ')', 'domain', ',', 'and', 'demonstrates', 'significant', 'sequence', 'homology', 'to', 'a', 'family', 'of', 'proteins', 'collectively', 'known', 'as', 'vigilins', '.'], 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]} ``` ### Data Fields - `id`: Sentence identifier. 
- `tokens`: Array of tokens composing a sentence. - `ner_tags`: Array of tags, where `0` (O) marks tokens outside any species mention, `1` (B) marks the first token of a mention and `2` (I) marks its subsequent tokens. ### Data Splits | name |train|validation|test| |----------|----:|---------:|---:| | linnaeus |11936| 4079|7143| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This version of the dataset is licensed under [Creative Commons Attribution 4.0 International](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/blob/master/LICENSE.md). 
### Citation Information ```bibtex @article{crichton2017neural, title={A neural network multi-task learning approach to biomedical named entity recognition}, author={Crichton, Gamal and Pyysalo, Sampo and Chiu, Billy and Korhonen, Anna}, journal={BMC Bioinformatics}, volume={18}, number={1}, pages={368}, year={2017}, publisher={BioMed Central}, doi = {10.1186/s12859-017-1776-8}, issn = {1471-2105}, url = {https://doi.org/10.1186/s12859-017-1776-8}, } @article{Gerner2010, author = {Gerner, Martin and Nenadic, Goran and Bergman, Casey M}, doi = {10.1186/1471-2105-11-85}, issn = {1471-2105}, journal = {BMC Bioinformatics}, number = {1}, pages = {85}, title = {{LINNAEUS: A species name identification system for biomedical literature}}, url = {https://doi.org/10.1186/1471-2105-11-85}, volume = {11}, year = {2010} } ``` ### Contributions Thanks to [@edugp](https://github.com/edugp) for adding this dataset.
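The `ner_tags` encoding described under Data Fields (`0` = O, `1` = B, `2` = I) can be decoded back into species-mention spans with a few lines. A minimal sketch — the helper name is illustrative, and the toy sentence mirrors the adjacent mentions ("yeast" next to "Saccharomyces cerevisiae") in the example above:

```python
def iob_spans(tokens, tags):
    """Decode 0=O, 1=B, 2=I tags into (start, end, text) spans (end exclusive)."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == 1:                      # B: a new mention begins
            if start is not None:         # close a mention that runs up to here
                spans.append((start, i, " ".join(tokens[start:i])))
            start = i
        elif tag == 2 and start is None:  # stray I without B: treat as a start
            start = i
        elif tag == 0 and start is not None:
            spans.append((start, i, " ".join(tokens[start:i])))
            start = None
    if start is not None:                 # mention running to the end of sentence
        spans.append((start, len(tags), " ".join(tokens[start:])))
    return spans

tokens = ["the", "yeast", "Saccharomyces", "cerevisiae", "that"]
tags = [0, 1, 1, 2, 0]
print(iob_spans(tokens, tags))
# [(1, 2, 'yeast'), (2, 4, 'Saccharomyces cerevisiae')]
```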
ms_terms
2022-11-03T16:08:00.000Z
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:af", "language:am", "language:ar", "language:as", "la...
null
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange.
null
null
2
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - af - am - ar - as - az - be - bg - bn - bs - ca - chr - cs - cy - da - de - el - en - es - et - eu - fa - fi - fil - fr - ga - gd - gl - gu - guc - ha - he - hi - hr - hu - hy - id - ig - is - it - iu - ja - ka - kk - km - kn - knn - ko - ku - ky - lb - lo - lt - lv - mi - mk - ml - mn - mr - ms - mt - nb - ne - nl - nn - ory - pa - pl - prs - pst - pt - qu - quc - ro - ru - rw - sd - si - sk - sl - sq - sr - st - sv - swh - ta - te - tg - th - ti - tk - tn - tr - tt - ug - uk - ur - uz - vi - wo - xh - yo - zh - zu language_bcp47: - bn-IN - bs-Latn - es-MX - fr-CA - ms-BN - pt-BR - sr-BH - sr-Latn - zh-Hant-HK - zh-Hant-TW license: - ms-pl multilinguality: - multilingual - translation size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: MsTerms dataset_info: features: - name: entry_id dtype: string - name: term_source dtype: string - name: pos dtype: string - name: definition dtype: string - name: term_target dtype: string splits: - name: train num_bytes: 6995497 num_examples: 33738 download_size: 0 dataset_size: 6995497 --- # Dataset Card for [ms_terms] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - 
[Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Microsoft Terminology Collection](https://www.microsoft.com/en-us/language/terminology) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Nearly 100 Languages. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@leoxzhao](https://github.com/leoxzhao), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
norwegian_ner
2023-01-25T14:41:45.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:no", "license:unknown", "region:us" ]
null
Named entity recognition dataset for Norwegian. It is a version of the Universal Dependency (UD) Treebank for both Bokmål and Nynorsk (UDN) where all proper nouns have been tagged with their type according to the NER tagging scheme. UDN is the Norwegian Dependency Treebank converted into the UD scheme.
@inproceedings{johansen2019ner, title={Named-Entity Recognition for Norwegian}, author={Johansen, Bjarte}, booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics, NoDaLiDa}, year={2019} }
null
0
4
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - 'no' license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Norwegian NER dataset_info: - config_name: bokmaal features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: pos_tags sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': ADV '14': INTJ '15': VERB '16': AUX - name: ner_tags sequence: class_label: names: '0': O '1': B-OTH '2': I-OTH '3': E-OTH '4': S-OTH '5': B-ORG '6': I-ORG '7': E-ORG '8': S-ORG '9': B-PRS '10': I-PRS '11': E-PRS '12': S-PRS '13': B-GEO '14': I-GEO '15': E-GEO '16': S-GEO splits: - name: train num_bytes: 9859760 num_examples: 15696 - name: validation num_bytes: 1475216 num_examples: 2410 - name: test num_bytes: 1212939 num_examples: 1939 download_size: 8747760 dataset_size: 12547915 - config_name: nynorsk features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: pos_tags sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': ADV '14': INTJ '15': VERB '16': AUX - name: ner_tags sequence: class_label: names: '0': O '1': B-OTH '2': I-OTH '3': E-OTH '4': S-OTH '5': B-ORG '6': I-ORG '7': E-ORG '8': S-ORG '9': B-PRS '10': I-PRS '11': E-PRS '12': S-PRS '13': B-GEO '14': I-GEO '15': E-GEO '16': S-GEO splits: - name: train num_bytes: 9916338 num_examples: 14174 - name: validation num_bytes: 1257235 num_examples: 1890 - name: test num_bytes: 1006733 num_examples: 1511 download_size: 8484545 dataset_size: 12180306 - config_name: samnorsk features: - 
name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: pos_tags sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': ADV '14': INTJ '15': VERB '16': AUX - name: ner_tags sequence: class_label: names: '0': O '1': B-OTH '2': I-OTH '3': E-OTH '4': S-OTH '5': B-ORG '6': I-ORG '7': E-ORG '8': S-ORG '9': B-PRS '10': I-PRS '11': E-PRS '12': S-PRS '13': B-GEO '14': I-GEO '15': E-GEO '16': S-GEO splits: - name: train num_bytes: 22508485 num_examples: 34170 - name: validation num_bytes: 2732419 num_examples: 4300 - name: test num_bytes: 2219640 num_examples: 3450 download_size: 19133049 dataset_size: 27460544 --- # Dataset Card for Norwegian NER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/ljos/navnkjenner) - **Repository:** [Github](https://github.com/ljos/navnkjenner) - 
**Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jplu](https://github.com/jplu) for adding this dataset.
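Although most sections above are placeholders, the dataset metadata fully specifies a BIOES tag scheme (B-/I-/E-/S- prefixes over the OTH, ORG, PRS and GEO entity types). A minimal decoding sketch — the label list is copied from the metadata, while the helper name and the toy Norwegian sentence are illustrative:

```python
# BIOES label list as defined in the dataset metadata above.
LABELS = ["O", "B-OTH", "I-OTH", "E-OTH", "S-OTH", "B-ORG", "I-ORG", "E-ORG",
          "S-ORG", "B-PRS", "I-PRS", "E-PRS", "S-PRS", "B-GEO", "I-GEO",
          "E-GEO", "S-GEO"]

def bioes_entities(tokens, tag_ids):
    """Decode BIOES tag ids into (entity_type, text) pairs."""
    entities, buf = [], []
    for tok, tid in zip(tokens, tag_ids):
        label = LABELS[tid]
        if label == "O":
            buf = []
            continue
        prefix, etype = label.split("-")
        if prefix == "S":                 # single-token entity
            entities.append((etype, tok))
            buf = []
        elif prefix == "B":               # entity begins
            buf = [tok]
        elif prefix == "I":               # entity continues
            buf.append(tok)
        elif prefix == "E":               # entity ends here
            buf.append(tok)
            entities.append((etype, " ".join(buf)))
            buf = []
    return entities

tokens = ["Kari", "jobber", "ved", "Universitetet", "i", "Oslo"]
tag_ids = [12, 0, 0, 5, 6, 7]  # S-PRS O O B-ORG I-ORG E-ORG
print(bioes_entities(tokens, tag_ids))
# [('PRS', 'Kari'), ('ORG', 'Universitetet i Oslo')]
```

Unlike plain IOB, the E- and S- tags make entity boundaries explicit, so multi-token entities are only emitted once their closing E- tag is seen.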
offcombr
2023-01-25T14:41:55.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:unknown", "hate-speech-detection", "region:us" ]
null
OffComBR: an annotated dataset for hate speech detection in Portuguese, composed of news comments on the Brazilian Web.
@inproceedings{Pelle2017, title={Offensive Comments in the Brazilian Web: a dataset and baseline results}, author={Rogers P. de Pelle and Viviane P. Moreira}, booktitle={6th Brazilian Workshop on Social Network Analysis and Mining (BraSNAM)}, year={2017}, }
null
4
4
--- annotations_creators: - expert-generated language_creators: - found language: - pt license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: offcombr pretty_name: Offensive Comments in the Brazilian Web tags: - hate-speech-detection dataset_info: - config_name: offcombr-2 features: - name: label dtype: class_label: names: '0': 'no' '1': 'yes' - name: text dtype: string splits: - name: train num_bytes: 105703 num_examples: 1250 download_size: 99956 dataset_size: 105703 - config_name: offcombr-3 features: - name: label dtype: class_label: names: '0': 'no' '1': 'yes' - name: text dtype: string splits: - name: train num_bytes: 90094 num_examples: 1033 download_size: 85215 dataset_size: 90094 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://www.inf.ufrgs.br/~rppelle/hatedetector/ - **Repository:** 
https://github.com/rogersdepelle/OffComBR - **Paper:** https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary OffComBR: an annotated dataset for hate speech detection in Portuguese, composed of news comments on the Brazilian Web. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
omp
2023-01-25T14:42:05.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:cc-by-nc-sa-4.0", "region:us" ]
null
The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online discussions. The data set contains a selection of user posts from the 12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and 1,000,000 unlabeled posts in the data set. The labeled posts were annotated by professional forum moderators employed by the newspaper. The data set contains the following data for each post: * Post ID * Article ID * Headline (max. 250 characters) * Main Body (max. 750 characters) * User ID (the user names used by the website have been re-mapped to new numeric IDs) * Time stamp * Parent post (replies give rise to tree-like discussion thread structures) * Status (online or deleted by a moderator) * Number of positive votes by other community members * Number of negative votes by other community members For each article, the data set contains the following data: * Article ID * Publishing date * Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1) * Title * Body Detailed descriptions of the post selection and annotation procedures are given in the paper. ## Annotated Categories Potentially undesirable content: * Sentiment (negative/neutral/positive) An important goal is to detect changes in the prevalent sentiment in a discussion, e.g., the location within the fora and the point in time where a turn from positive/neutral sentiment to negative sentiment takes place. * Off-Topic (yes/no) Posts which digress too far from the topic of the corresponding article. * Inappropriate (yes/no) Swearwords, suggestive and obscene language, insults, threats etc. * Discriminating (yes/no) Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content. 
Neutral content that requires a reaction: * Feedback (yes/no) Sometimes users ask questions or give feedback to the author of the article or the newspaper in general, which may require a reply/reaction. Potentially desirable content: * Personal Stories (yes/no) In certain fora, users are encouraged to share their personal stories, experiences, anecdotes etc. regarding the respective topic. * Arguments Used (yes/no) It is desirable for users to back their statements with rational argumentation, reasoning and sources.
@InProceedings{Schabus2017, Author = {Dietmar Schabus and Marcin Skowron and Martin Trapp}, Title = {One Million Posts: A Data Set of German Online Discussions}, Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)}, Pages = {1241--1244}, Year = {2017}, Address = {Tokyo, Japan}, Doi = {10.1145/3077136.3080711}, Month = aug }
null
1
4
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - de license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: one-million-posts-corpus pretty_name: One Million Posts dataset_info: - config_name: posts_labeled features: - name: ID_Post dtype: string - name: ID_Parent_Post dtype: string - name: ID_Article dtype: string - name: ID_User dtype: string - name: CreatedAt dtype: string - name: Status dtype: string - name: Headline dtype: string - name: Body dtype: string - name: PositiveVotes dtype: int32 - name: NegativeVotes dtype: int32 - name: Category dtype: class_label: names: '0': ArgumentsUsed '1': Discriminating '2': Inappropriate '3': OffTopic '4': PersonalStories '5': PossiblyFeedback '6': SentimentNegative '7': SentimentNeutral '8': SentimentPositive - name: Value dtype: int32 - name: Fold dtype: int32 splits: - name: train num_bytes: 13955964 num_examples: 40567 download_size: 1329892 dataset_size: 13955964 - config_name: posts_unlabeled features: - name: ID_Post dtype: string - name: ID_Parent_Post dtype: string - name: ID_Article dtype: string - name: ID_User dtype: string - name: CreatedAt dtype: string - name: Status dtype: string - name: Headline dtype: string - name: Body dtype: string - name: PositiveVotes dtype: int32 - name: NegativeVotes dtype: int32 splits: - name: train num_bytes: 305770324 num_examples: 1000000 download_size: 79296188 dataset_size: 305770324 - config_name: articles features: - name: ID_Article dtype: string - name: Path dtype: string - name: publishingDate dtype: string - name: Title dtype: string - name: Body dtype: string splits: - name: train num_bytes: 43529400 num_examples: 12087 download_size: 10681288 dataset_size: 43529400 --- # Dataset Card for One Million Posts Corpus ## Table of Contents - [Dataset 
Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ofai.github.io/million-post-corpus/ - **Repository:** https://github.com/OFAI/million-post-corpus - **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online discussions. The data set contains a selection of user posts from the 12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and 1,000,000 unlabeled posts in the data set. The labeled posts were annotated by professional forum moderators employed by the newspaper. The data set contains the following data for each post: * Post ID * Article ID * Headline (max. 250 characters) * Main Body (max. 
750 characters) * User ID (the user names used by the website have been re-mapped to new numeric IDs) * Time stamp * Parent post (replies give rise to tree-like discussion thread structures) * Status (online or deleted by a moderator) * Number of positive votes by other community members * Number of negative votes by other community members For each article, the data set contains the following data: * Article ID * Publishing date * Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1) * Title * Body Detailed descriptions of the post selection and annotation procedures are given in the paper. #### Annotated Categories Potentially undesirable content: * Sentiment (negative/neutral/positive) An important goal is to detect changes in the prevalent sentiment in a discussion, e.g., the location within the fora and the point in time where a turn from positive/neutral sentiment to negative sentiment takes place. * Off-Topic (yes/no) Posts which digress too far from the topic of the corresponding article. * Inappropriate (yes/no) Swearwords, suggestive and obscene language, insults, threats etc. * Discriminating (yes/no) Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content. Neutral content that requires a reaction: * Feedback (yes/no) Sometimes users ask questions or give feedback to the author of the article or the newspaper in general, which may require a reply/reaction. Potentially desirable content: * Personal Stories (yes/no) In certain fora, users are encouraged to share their personal stories, experiences, anecdotes etc. regarding the respective topic. * Arguments Used (yes/no) It is desirable for users to back their statements with rational argumentation, reasoning and sources. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Austrian German ## Dataset Structure ### Data Instances An example from the `posts_labeled` config: ```json { "ID_Post": "79", "ID_Parent_Post": "", "ID_Article": "1", "ID_User": "12071", "CreatedAt": "2015-06-01 08:58:32.363", "Status": "online", "Headline": "", "Body": "ich kann keinen hinweis finden, wo man sich hinwenden muss, sollte man als abonnent des standard, die zeitung nicht bekommt, ist dass bewusst so arrangiert?", "PositiveVotes": 0, "NegativeVotes": 0, "Category": 5, "Value": 1, "Fold": 1 } ``` An example from the `posts_unlabeled` config: ```json { "ID_Post": "51", "ID_Parent_Post": "", "ID_Article": "1", "ID_User": "11125", "CreatedAt": "2011-05-15 08:37:11.313", "Status": "online", "Headline": "Ich würde es sehr begrüßen, wenn", "Body": "Antworten erst beim Erscheinen als e-Mail dem Poster zugestellt würden.\r\n\r\nEs gibt User, die ihre Kommentare sofort nach Mail-Eingang irgendwo hinposten. Dadurch wird \r\n1. vor allem für andere Unser die Lesbarkeit wesentlich beeinträchtigt,\r\n2. kann das Post verdreht wiedergegeben werden,\r\n3. man ist immer wieder gezwungen die Antwort richtig zu stellen.\r\n\r\nPrivatfehden von Usern sollten, wenn schon zugelassen, für alle User nachvollziehbar sein.\r\n\r\nDanke!", "PositiveVotes": 1, "NegativeVotes": 0 } ``` An example from the `articles` config: ```json { "ID_Article": "41", "Path": "Newsroom/Wirtschaft/Wirtschaftpolitik/Energiemarkt", "publishingDate": "2015-06-01 12:39:35.00", "Title": "Öl- und Gas-Riesen fordern weltweite CO2-Preise", "Body": '<div class="section" id="content-main" itemprop="articleBody"><div class="copytext"><h2 itemprop="description">Brief von BP, Total, Shell, Statoil, BG Group und Eni unterzeichnet</h2><p>Paris/London/La Defense - Sechs große Öl- und Gaskonzerne haben mit Blick auf die Verhandlungen über einen neuen Welt-Klimavertrag ein globales Preissystem für CO2-Emissionen gefordert. 
Wenn der Ausstoß von CO2 Geld kostet, sei dies ein Anreiz für die Nutzung von Erdgas statt Kohle, mehr Energieeffizienz und Investitionen zur Vermeidung des Treibhausgases, heißt es in einem am Montag veröffentlichten Brief.</p>\n<p>Das Schreiben ist unterzeichnet von BP, Total, Shell, Statoil, BG Group und Eni. Die Unternehmen versicherten, sie seien bereit, ihren Teil zum Kampf gegen den <a href="/r1937/Klimawandel">Klimawandel</a> beizutragen. Dafür sei aber ein klarer und verlässlicher Politik-Rahmen nötig. (APA, 1.6.2015)</p> </div></div>' } ``` ### Data Fields The data set contains the following data for each post: * **ID_Post**: Post ID * **ID_Parent_Post**: Parent post (replies give rise to tree-like discussion thread structures) * **ID_Article**: Article ID * **ID_User**: User ID (the user names used by the website have been re-mapped to new numeric IDs) * **Headline**: Headline (max. 250 characters) * **Body**: Main Body (max. 750 characters) * **CreatedAt**: Time stamp * **Status**: Status (online or deleted by a moderator) * **PositiveVotes**: Number of positive votes by other community members * **NegativeVotes**: Number of negative votes by other community members Labeled posts also contain: * **Category**: The category of the annotation, one of: ArgumentsUsed, Discriminating, Inappropriate, OffTopic, PersonalStories, PossiblyFeedback, SentimentNegative, SentimentNeutral, SentimentPositive * **Value**: either 0 or 1, explicitly indicating whether or not the post has the specified category as a label (i.e. a category of `ArgumentsUsed` with value of `0` means that an annotator explicitly labeled that this post doesn't use arguments, as opposed to the mere absence of a positive label). 
* **Fold**: a number between [0-9] from a 10-fold split by the authors For each article, the data set contains the following data: * **ID_Article**: Article ID * **publishingDate**: Publishing date * **Path**: Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1) * **Title**: Title * **Body**: Body ### Data Splits Training split only. | name | train | |-----------------|--------:| | posts_labeled | 40567 | | posts_unlabeled | 1000000 | | articles | 12087 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This data set is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. ### Citation Information ``` @InProceedings{Schabus2018, author = {Dietmar Schabus and Marcin Skowron}, title = {Academic-Industrial Perspective on the Development and Deployment of a Moderation System for a Newspaper Website}, booktitle = {Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC)}, year = {2018}, address = {Miyazaki, Japan}, month = may, pages = {1602-1605}, abstract = {This paper describes an approach and our experiences from the development, deployment and usability testing of a Natural Language Processing (NLP) and Information Retrieval system that supports the moderation of user comments on a large newspaper website. 
We highlight some of the differences between industry-oriented and academic research settings and their influence on the decisions made in the data collection and annotation processes, selection of document representation and machine learning methods. We report on classification results, where the problems to solve and the data to work with come from a commercial enterprise. In this context typical for NLP research, we discuss relevant industrial aspects. We believe that the challenges faced as well as the solutions proposed for addressing them can provide insights to others working in a similar setting.}, url = {http://www.lrec-conf.org/proceedings/lrec2018/summaries/8885.html}, } ``` ### Contributions Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset.
pass
2022-11-03T16:15:51.000Z
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|yffc100M", "language:en", "license:cc-by-4.0", "image-self-supervised pre...
null
PASS (Pictures without humAns for Self-Supervision) is a large-scale dataset of 1,440,191 images that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. The PASS images are sourced from the YFCC-100M dataset.
@Article{asano21pass, author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi", title = "PASS: An ImageNet replacement for self-supervised pretraining without humans", journal = "NeurIPS Track on Datasets and Benchmarks", year = "2021" }
null
1
4
--- annotations_creators: - no-annotation language_creators: - machine-generated - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - extended|yffc100M task_categories: - other task_ids: [] paperswithcode_id: pass pretty_name: Pictures without humAns for Self-Supervision tags: - image-self-supervised pretraining dataset_info: features: - name: image dtype: image - name: creator_username dtype: string - name: hash dtype: string - name: gps_latitude dtype: float32 - name: gps_longitude dtype: float32 - name: date_taken dtype: timestamp[us] splits: - name: train num_bytes: 178563446100 num_examples: 1439588 download_size: 179640190811 dataset_size: 178563446100 --- # Dataset Card for PASS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [PASS homepage](https://www.robots.ox.ac.uk/~vgg/research/pass/) - **Repository:** [PASS 
repository](https://github.com/yukimasano/PASS) - **Paper:** [PASS: An ImageNet replacement for self-supervised pretraining without humans](https://arxiv.org/abs/2109.13228) - **Leaderboard:** [Pretrained models with scores](https://github.com/yukimasano/PASS#pretrained-models) - **Point of Contact:** [Yuki M. Asano](mailto:yukiATMARKrobots.ox.ac.uk) ### Dataset Summary PASS is a large-scale image dataset, containing 1.4 million images, that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. ### Supported Tasks and Leaderboards From the paper: > **Has the dataset been used for any tasks already?** In the paper we show and benchmark the intended use of this dataset as a pretraining dataset. For this the dataset is used as an unlabelled image collection on which visual features are learned and then transferred to downstream tasks. We show that with this dataset it is possible to learn competitive visual features, without any humans in the pretraining dataset and with complete license information. > **Is there a repository that links to any or all papers or systems that use the dataset?** We will be listing these at the repository. > **What (other) tasks could the dataset be used for?** We believe this dataset might allow researchers and practitioners to further evaluate the differences that pretraining datasets can have on the learned features. Furthermore, since the meta-data is available for the images, it is possible to investigate the effect of image resolution on self-supervised learning methods, a domain largely under-researched thus far, as the current de-facto standard, ImageNet, only comes in one size.
> **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** Given that this dataset is a subset of a dataset that randomly samples images from flickr, the image distribution is biased towards European and American creators. As in the main paper's discussion, this can lead to non-generalizable features, or even biased features, as the images taken in other countries might be more likely to further reflect and propagate stereotypes [84], though in our case these do not refer to stereotypes about humans. > **Are there tasks for which the dataset should not be used?** This dataset is meant for research purposes only. The dataset should also not be used for, e.g., connecting images and usernames, as this might risk de-anonymising the dataset in the long term. The usernames are solely provided for attribution. ### Languages English. ## Dataset Structure ### Data Instances A data point comprises an image and its meta-data: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FFAD48E35F8>, 'creator_username': 'NTShieldsy', 'hash': 'e1662344ffa8c231d198c367c692cc', 'gps_latitude': 21.206675, 'gps_longitude': 39.166558, 'date_taken': datetime.datetime(2012, 8, 9, 18, 0, 20) } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `creator_username`: The photographer. - `hash`: The hash, as computed from YFCC-100M. - `gps_latitude`: Latitude of image if existent, otherwise None. - `gps_longitude`: Longitude of image if existent, otherwise None.
- `date_taken`: Datetime of image if existent, otherwise None. ### Data Splits All the data is contained in the training set. The training set has 1,439,588 instances as this implementation corresponds to the most recent release (v3) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). From the paper: > **Are there recommended data splits (e.g., training, development/validation, testing)?** As outlined in the intended use cases, this dataset is meant for pretraining representations. As such, the models derived from training on this dataset need to be evaluated on different datasets, so-called downstream tasks. Thus the recommended split is to use all samples for training. ## Dataset Creation ### Curation Rationale From the paper: > **For what purpose was the dataset created?** Neural networks pretrained on large image collections have been shown to transfer well to other visual tasks where there is little labelled data, i.e. transferring a model works better than starting with a randomly initialized network every time for a new task, as many visual features can be repurposed. This dataset has as its goal to provide a safer large-scale dataset for such pretraining of visual features. In particular, this dataset does not contain any humans or human parts and does not contain any labels. The first point is important, as the current standard for pretraining, ImageNet and its face-blurred version, only provide pseudo-anonymity and furthermore do not provide correct licences to the creators. The second point is relevant as pretraining is moving towards the self-supervised paradigm, where labels are not required. Yet most methods are developed on the highly curated ImageNet dataset, yielding potentially non-generalizable research.
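Because every record exposes its metadata (creator, GPS coordinates, capture date) alongside the image, metadata-driven studies can filter records without decoding a single image. A minimal sketch, using hypothetical records shaped like the data instance shown earlier; `another_user` and its values are invented for illustration:

```python
from datetime import datetime

# Hypothetical records shaped like the PASS metadata fields; real records
# come from the dataset and additionally carry the decoded `image` column.
records = [
    {"creator_username": "NTShieldsy", "gps_latitude": 21.206675,
     "gps_longitude": 39.166558, "date_taken": datetime(2012, 8, 9, 18, 0, 20)},
    {"creator_username": "another_user", "gps_latitude": None,
     "gps_longitude": None, "date_taken": datetime(2014, 1, 2, 10, 30, 0)},
]

# GPS and date fields may be None, so guard before using them.
geotagged = [r for r in records if r["gps_latitude"] is not None]
recent = [r for r in records
          if r["date_taken"] is not None and r["date_taken"].year >= 2013]

print([r["creator_username"] for r in geotagged])  # ['NTShieldsy']
print([r["creator_username"] for r in recent])     # ['another_user']
```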
### Source Data #### Initial Data Collection and Normalization From the paper: * **Collection process**: > **How was the data associated with each instance acquired?** The data was collected from the publicly available dataset YFCC-100M which is hosted on the AWS public datasets platform. We have used the meta-data, namely the copyright information to filter only images with the CC-BY licence and have downloaded these using the aws command line interface, allowing for quick and stable downloading. In addition, all files were subsequently scanned for viruses using Sophos SAVScan virus detection utility, v.5.74.0. > **What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?** Our dataset is a subset of the YFCC-100M dataset. The YFCC-100M dataset itself was created by effectively randomly selecting publicly available images from flickr, resulting in approximately 98M images. > **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** The dataset is a sample of a larger set—all possible digital photographs. As outlined in Section 3 we start from an existing dataset, YFCC-100M, and stratify the images (removing images with people and personal information, removing images with harmful content, removing images with unsuitable licenses, each user contributes at most 80 images to the dataset). This leaves 1.6M images, out of which we take a random sample of 1.28M images to replicate the size of the ImageNet dataset. While this dataset can thus be extended, this is the set that we have verified to not contain humans, human parts and disturbing content. > **Over what timeframe was the data collected?** The images underlying the dataset were downloaded between March and June 2021 from the AWS public datasets’ S3 bucket, following the download code provided in the repo. 
However, the images themselves were originally taken anywhere from 2000 to 2015, with the majority being shot between 2010 and 2014. * **Preprocessing/cleaning/labeling**: > **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** After the download of approx. 17M images, corrupted or single-color images were removed from the dataset prior to the generation of the dataset(s) used in the paper. The images were not further preprocessed or edited. > **Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** Yes. The creators of the dataset maintain a copy of the 17M original images with the CC-BY licence of YFCC100M that sits at the start of our dataset creation pipeline. > **Is the software used to preprocess/clean/label the instances available?** We have only used basic Python primitives for this. For the annotations we have used VIA [27, 28]. #### Who are the source language producers? From the paper: > **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** As described, the data was collected automatically by simply downloading images from a publicly hosted S3 bucket. The human verification was done using a professional data annotation company that pays 150% of the local minimum wage. ### Annotations #### Annotation process This dataset doesn't contain annotations. #### Who are the annotators? This dataset doesn't contain annotations. ### Personal and Sensitive Information From the paper: > **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?** No.
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** No. Besides checking for human presence in the images, the annotators were also given the choice of flagging images for disturbing content, which once flagged was removed. > **Does the dataset relate to people? If not, you may skip the remaining questions in this section.** No. > **Does the dataset identify any subpopulations (e.g., by age, gender)?** NA > **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** NA > **Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** NA > **Were any ethical review processes conducted (e.g., by an institutional review board)?** No ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the paper: > **Is your dataset free of biases?** No. There are many kinds of biases that can either be quantified, e.g. geo-location (most images originate from the US and Europe) or camera-model (most images are taken with professional DSLR cameras not easily affordable), there are likely many more biases that this dataset does contain. The only thing that this dataset does not contain are humans and parts of humans, as far as our validation procedure is accurate. ### Other Known Limitations From the paper: > **Can you guarantee compliance to GDPR?** No, we cannot comment on legal issues. ## Additional Information ### Dataset Curators YM. Asano, C. Rupprecht, A. Zisserman and A. Vedaldi. 
From the paper: > **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset has been constructed by the research group “Visual Geometry Group” at the University of Oxford at the Engineering Science Department. ### Licensing Information The PASS dataset is available to download for commercial/research purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). A complete version of the license can be found [here](https://www.robots.ox.ac.uk/~vgg/research/pass/license_pass.txt). The whole dataset only contains CC-BY licensed images with full attribution information. ### Citation Information ```bibtex @Article{asano21pass, author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi", title = "PASS: An ImageNet replacement for self-supervised pretraining without humans", journal = "NeurIPS Track on Datasets and Benchmarks", year = "2021" } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
polemo2
2023-01-25T14:42:43.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:bsd-3-clause", "region:us" ]
null
PolEmo2.0 is a set of online reviews from the medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
@inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Milkowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", }
null
0
4
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - bsd-3-clause multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: polemo2 dataset_info: - config_name: in features: - name: sentence dtype: string - name: target dtype: class_label: names: '0': __label__meta_amb '1': __label__meta_minus_m '2': __label__meta_plus_m '3': __label__meta_zero splits: - name: train num_bytes: 4810215 num_examples: 5783 - name: test num_bytes: 582052 num_examples: 722 - name: validation num_bytes: 593530 num_examples: 723 download_size: 2350339 dataset_size: 5985797 - config_name: out features: - name: sentence dtype: string - name: target dtype: class_label: names: '0': __label__meta_amb '1': __label__meta_minus_m '2': __label__meta_plus_m '3': __label__meta_zero splits: - name: train num_bytes: 4810215 num_examples: 5783 - name: test num_bytes: 309790 num_examples: 494 - name: validation num_bytes: 310977 num_examples: 494 download_size: 2139891 dataset_size: 5430982 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://clarin-pl.eu/dspace/handle/11321/710 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary PolEmo2.0 is a set of online reviews from the medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: string, the review - target: the sentiment class of the sentence. The same tag system is used in plWordNet Emo for lexical units: [+m] (strong positive), [+s] (weak positive), [-m] (strong negative), [-s] (weak negative), [amb] (ambiguous) and [0] (neutral). Note that the test set doesn't have targets, so -1 is used instead. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information.
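The integer `target` values map onto the tag names listed under Data Fields; a minimal sketch of decoding them, assuming the index order given in this card's `dataset_info` and using `"no_label"` as an invented placeholder for the -1 convention:

```python
# Index order as listed in this card's dataset_info (assumed to match the config).
TARGET_NAMES = [
    "__label__meta_amb",      # 0: [amb] ambiguous
    "__label__meta_minus_m",  # 1: [-m] strong negative
    "__label__meta_plus_m",   # 2: [+m] strong positive
    "__label__meta_zero",     # 3: [0] neutral
]

def decode_target(target: int) -> str:
    """Map an integer class index to its tag name; -1 (unlabeled test) -> 'no_label'."""
    return "no_label" if target == -1 else TARGET_NAMES[target]

print(decode_target(2))   # __label__meta_plus_m
print(decode_target(-1))  # no_label
```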
## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
saudinewsnet
2023-07-17T08:18:44.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar"...
null
The dataset contains a set of 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers and written in MSA.
@misc{hagrima2015, author = "M. Alhagri", title = "Saudi Newspapers Arabic Corpus (SaudiNewsNet)", year = 2015, url = "http://github.com/ParallelMazen/SaudiNewsNet" }
null
1
4
--- annotations_creators: - no-annotation language_creators: - found language: - ar license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: saudinewsnet dataset_info: features: - name: source dtype: string - name: url dtype: string - name: date_extracted dtype: string - name: title dtype: string - name: author dtype: string - name: content dtype: string splits: - name: train num_bytes: 103654105 num_examples: 31030 download_size: 29014166 dataset_size: 103654105 --- # Dataset Card for "saudinewsnet" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [SaudiNewsNet](https://github.com/parallelfold/SaudiNewsNet) - **Repository:** [Website](https://github.com/parallelfold/SaudiNewsNet) - **Paper:** [More Information Needed] - **Point of Contact:** [Mazen 
Abdulaziz](mailto:mazen.abdulaziz@gmail.com) - **Size of downloaded dataset files:** 29.01 MB - **Size of the generated dataset:** 103.65 MB - **Total amount of disk used:** 132.67 MB ### Dataset Summary The dataset contains a set of 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers and written in MSA. The dataset currently contains **31,030** Arabic articles (with a total number of **8,758,976 words**). The articles were extracted from the following Saudi newspapers (sorted by number of articles): - [Al-Riyadh](http://www.alriyadh.com/) (4,852 articles) - [Al-Jazirah](http://al-jazirah.com/) (3,690 articles) - [Al-Yaum](http://alyaum.com/) (3,065 articles) - [Al-Eqtisadiya](http://aleqt.com/) (2,964 articles) - [Al-Sharq Al-Awsat](http://aawsat.com/) (2,947 articles) - [Okaz](http://www.okaz.com.sa/) (2,846 articles) - [Al-Watan](http://alwatan.com.sa/) (2,279 articles) - [Al-Madina](http://www.al-madina.com/) (2,252 articles) - [Al-Weeam](http://alweeam.com.sa/) (2,090 articles) - [Ain Alyoum](http://3alyoum.com/) (2,080 articles) - [Sabq](http://sabq.org/) (1,411 articles) - [Saudi Press Agency](http://www.spa.gov.sa) (369 articles) - [Arreyadi](http://www.arreyadi.com.sa/) (133 articles) - [Arreyadiyah](http://www.arreyadiyah.com/) (52 articles) ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 29.01 MB - **Size of the generated dataset:** 103.65 MB - **Total amount of disk used:** 132.67 MB An example of 'train' looks as follows.
``` This example was too long and was cropped: { "author": "الرياض: محمد الحميدي", "content": "\"في وقت تتهيأ فيه السعودية لإطلاق الإصدار الثاني من العملات المعدنية، لا تزال التداول بمبالغ النقود المصنوعة من المعدن مستقرة عن...", "date_extracted": "2015-07-22 01:18:37", "source": "aawsat", "title": "\"«العملة المعدنية» السعودية تسجل انحسارًا تاريخيًا وسط تهيؤ لإطلاق الإصدار الثاني\"...", "url": "\"http://aawsat.com/home/article/411671/«العملة-المعدنية»-السعودية-تسجل-انحسارًا-تاريخيًا-وسط-تهيؤ-لإطلاق-الإصدار-الثاني\"..." } ``` ### Data Fields The data fields are the same among all splits. - **`source`** (str): The source newspaper. - **`url`** (str): The full URL from which the article was extracted. - **`date_extracted`** (str): The timestamp of the date on which the article was extracted. It has the format `YYYY-MM-DD hh:mm:ss`. Notice that this field does not necessarily represent the date on which the article was authored (or made available online), however for articles stamped with a date of extraction after August 1, 2015, this field most probably represents the date of authoring. - **`title`** (str): The title of the article. Contains missing values that were replaced with an empty string. - **`author`** (str): The author of the article. Contains missing values that were replaced with an empty string. - **`content`** (str): The content of the article. 
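Two conventions above are worth handling explicitly: `date_extracted` follows a fixed `YYYY-MM-DD hh:mm:ss` pattern, and missing `title`/`author` values are stored as empty strings rather than nulls. A minimal sketch, with the timestamp taken from the example instance and a hypothetical partial record:

```python
from datetime import datetime

def parse_extraction_date(stamp: str) -> datetime:
    """Parse the card's `YYYY-MM-DD hh:mm:ss` extraction timestamp."""
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")

dt = parse_extraction_date("2015-07-22 01:18:37")
print(dt.year, dt.month, dt.day)  # 2015 7 22

# Missing titles/authors are empty strings, not None, so test truthiness.
article = {"title": "", "author": "الرياض: محمد الحميدي"}  # hypothetical record
has_title = bool(article["title"])
print(has_title)  # False
```

Per the card, only records whose `date_extracted` falls after 2015-08-01 can be read as an approximate authoring date, which the parsed `datetime` makes easy to check.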
### Data Splits | name |train| |-------|----:| |default|31030| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data | String Identifier | Newspaper | | ------------------ | --------- | | aawsat | [Al-Sharq Al-Awsat](http://aawsat.com/) | | aleqtisadiya | [Al-Eqtisadiya](http://aleqt.com/) | | aljazirah | [Al-Jazirah](http://al-jazirah.com/) | | almadina | [Al-Madina](http://www.al-madina.com/) | | alriyadh | [Al-Riyadh](http://www.alriyadh.com/) | | alwatan | [Al-Watan](http://alwatan.com.sa/) | | alweeam | [Al-Weeam](http://alweeam.com.sa/) | | alyaum | [Al-Yaum](http://alyaum.com/) | | arreyadi | [Arreyadi](http://www.arreyadi.com.sa/) | | arreyadiyah | [Arreyadiyah](http://www.arreyadiyah.com/) | | okaz | [Okaz](http://www.okaz.com.sa/) | | sabq | [Sabq](http://sabq.org/) | | was | [Saudi Press Agency](http://www.spa.gov.sa/) | | 3alyoum | [Ain Alyoum](http://3alyoum.com/) | #### Initial Data Collection and Normalization The Modern Standard Arabic texts were crawled from the Internet. #### Who are the source language producers? Newspaper websites. ### Annotations The dataset does not contain any additional annotations.
### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License ### Citation Information ``` @misc{hagrima2015, author = "M. Alhagri", title = "Saudi Newspapers Arabic Corpus (SaudiNewsNet)", year = 2015, url = "http://github.com/ParallelMazen/SaudiNewsNet" } ``` ### Contributions Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
tuple_ie
2022-11-03T16:31:04.000Z
[ "task_categories:other", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "open-information-extraction", "region:us" ]
null
The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred to as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries. This dataset contains 156K sentences collected for 4th grade questions and 107K sentences for 8th grade questions. Each sentence is followed by the Open IE v4 tuples using their simple format.
@article{Khot2017AnsweringCQ, title={Answering Complex Questions Using Open Information Extraction}, author={Tushar Khot and A. Sabharwal and Peter Clark}, journal={ArXiv}, year={2017}, volume={abs/1704.05572} }
null
1
4
--- annotations_creators: - found language_creators: - machine-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: tupleinf-open-ie-dataset pretty_name: TupleInf Open IE tags: - open-information-extraction dataset_info: - config_name: all features: - name: sentence dtype: string - name: tuples sequence: - name: score dtype: float32 - name: tuple_text dtype: string - name: context dtype: string - name: arg1 dtype: string - name: rel dtype: string - name: arg2s sequence: string splits: - name: train num_bytes: 115621096 num_examples: 267719 download_size: 18026102 dataset_size: 115621096 - config_name: 4th_grade features: - name: sentence dtype: string - name: tuples sequence: - name: score dtype: float32 - name: tuple_text dtype: string - name: context dtype: string - name: arg1 dtype: string - name: rel dtype: string - name: arg2s sequence: string splits: - name: train num_bytes: 65363445 num_examples: 158910 download_size: 18026102 dataset_size: 65363445 - config_name: 8th_grade features: - name: sentence dtype: string - name: tuples sequence: - name: score dtype: float32 - name: tuple_text dtype: string - name: context dtype: string - name: arg1 dtype: string - name: rel dtype: string - name: arg2s sequence: string splits: - name: train num_bytes: 50257651 num_examples: 108809 download_size: 18026102 dataset_size: 50257651 --- # Dataset Card for TupleInf Open IE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - 
[Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Tuple IE Homepage](https://allenai.org/data/tuple-ie) - **Repository:** - **Paper:** [Answering Complex Questions Using Open Information Extraction](https://www.semanticscholar.org/paper/Answering-Complex-Questions-Using-Open-Information-Khot-Sabharwal/0ff595f0645a3e25a2f37145768985b10ead0509) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred to as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries. This dataset contains 156K sentences collected for 4th grade questions and 107K sentences for 8th grade questions. Each sentence is followed by the Open IE v4 tuples using their simple format. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English, collected from a large Web corpus using training questions from 4th and 8th grade as queries. ## Dataset Structure ### Data Instances This dataset contains sentences with corresponding relation tuples extracted from each sentence. Each instance contains a sentence followed by the [Open IE v4](https://github.com/allenai/openie-standalone) tuples using their *simple format*.
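A *simple format* string can be unpacked back into its `arg1`/`rel`/`arg2s` parts by splitting on semicolons. A rough sketch (it assumes no semicolons inside arguments and ignores any optional context prefix):

```python
def parse_simple_tuple(tuple_text):
    """Split an Open IE simple-format string like
    '(arg1; rel; arg2a; arg2b)' into its parts."""
    # Drop the surrounding parentheses, then split on ';'.
    inner = tuple_text.strip().lstrip("(").rstrip(")")
    parts = [p.strip() for p in inner.split(";")]
    return {"arg1": parts[0], "rel": parts[1], "arg2s": parts[2:]}

parsed = parse_simple_tuple(
    "(0.04593 kg; Used; a triple beam balance; to mass a golf ball)"
)
print(parsed["arg1"])   # → 0.04593 kg
print(parsed["arg2s"])  # → ['a triple beam balance', 'to mass a golf ball']
```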
An example of an instance: ```JSON { "sentence": "0.04593 kg Used a triple beam balance to mass a golf ball.", "tuples": { "score": 0.8999999761581421, "tuple_text": "(0.04593 kg; Used; a triple beam balance; to mass a golf ball)", "context": "", "arg1": "0.04593 kg", "rel": "Used", "arg2s": ["a triple beam balance", "to mass a golf ball"] } } ``` ### Data Fields - `sentence`: the input text/sentence. - `tuples`: the extracted relation tuples from the sentence. - `score`: the confidence score for each tuple. - `tuple_text`: the relationship representation text of the extraction, in the *simple format* of [Open IE v4](https://github.com/allenai/openie-standalone). - `context`: an optional representation of the context for this extraction. Defaults to `""` if there's no context. - `arg1`: the first argument in the relationship. - `rel`: the relation. - `arg2s`: a sequence of the second arguments in the relationship. ### Data Splits | name | train| |-----------|-----:| | all |267719| | 4th_grade |158910| | 8th_grade |108809| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @article{Khot2017AnsweringCQ, title={Answering Complex Questions Using Open Information Extraction}, author={Tushar Khot and A.
Sabharwal and Peter Clark}, journal={ArXiv}, year={2017}, volume={abs/1704.05572} } ``` ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
wmt20_mlqe_task2
2023-06-01T14:59:47.000Z
[ "task_categories:translation", "task_categories:text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia", "language:de", "langu...
null
This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task. Task 2 evaluates the application of QE for post-editing purposes. It consists of predicting: - A/ Word-level tags. This is done both on source side (to detect which words caused errors) and target side (to detect mistranslated or missing words). - A1/ Each token is tagged as either `OK` or `BAD`. Additionally, each gap between two words is tagged as `BAD` if one or more missing words should have been there, and `OK` otherwise. Note that number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence. - A2/ Tokens are tagged as `OK` if they were correctly translated, and `BAD` otherwise. Gaps are not tagged. - B/ Sentence-level HTER scores. HTER (Human Translation Error Rate) is the ratio between the number of edits (insertions/deletions/replacements) needed and the reference translation length.
Not available.
null
2
4
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - de - en - zh license: - unknown multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - extended|wikipedia task_categories: - translation - text-classification task_ids: [] pretty_name: WMT20 - MultiLingual Quality Estimation (MLQE) Task2 tags: - translation-quality-estimation dataset_info: - config_name: en-de features: - name: translation dtype: translation: languages: - en - de - name: src_tags sequence: class_label: names: '0': BAD '1': OK - name: mt_tags sequence: class_label: names: '0': BAD '1': OK - name: pe dtype: string - name: hter dtype: float32 - name: alignments sequence: sequence: int32 splits: - name: train num_bytes: 6463930 num_examples: 7000 - name: test num_bytes: 425582 num_examples: 1000 - name: validation num_bytes: 927616 num_examples: 1000 download_size: 1377020 dataset_size: 7817128 - config_name: en-zh features: - name: translation dtype: translation: languages: - en - zh - name: src_tags sequence: class_label: names: '0': BAD '1': OK - name: mt_tags sequence: class_label: names: '0': BAD '1': OK - name: pe dtype: string - name: hter dtype: float32 - name: alignments sequence: sequence: int32 splits: - name: train num_bytes: 6786898 num_examples: 7000 - name: test num_bytes: 443740 num_examples: 1000 - name: validation num_bytes: 954710 num_examples: 1000 download_size: 1564953 dataset_size: 8185348 config_names: - en-de - en-zh --- # Dataset Card for WMT20 - MultiLingual Quality Estimation (MLQE) Task2 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation 
Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WMT20 Quality Estimation Shared Task](http://www.statmt.org/wmt20/quality-estimation-task.html) - **Repository**: [Github repository](https://github.com/deep-spin/deep-spin.github.io/tree/master/docs/data/wmt2020_qe) - **Paper:** *Not available* ### Dataset Summary From the homepage: *This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.* *Task 2 evaluates the application of QE for post-editing purposes. It consists of predicting:* - ***Word-level tags.*** *This is done both on source side (to detect which words caused errors) and target side (to detect mistranslated or missing words).* - ***Target.*** *Each token is tagged as either `OK` or `BAD`.
Additionally, each gap between two words is tagged as `BAD` if one or more missing words should have been there, and `OK` otherwise. Note that the number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence.* - ***Source.*** *Tokens are tagged as `OK` if they were correctly translated, and `BAD` otherwise. Gaps are not tagged.* - ***Sentence-level HTER scores.*** *HTER (Human Translation Error Rate) is the ratio between the number of edits (insertions/deletions/replacements) needed and the reference translation length.* ### Supported Tasks and Leaderboards From the homepage: *For sentence-level QE, submissions are evaluated in terms of the Pearson's correlation metric for the sentence-level HTER prediction. For word-level QE, they will be evaluated in terms of MCC ([Matthews correlation coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)). These are the [official evaluation scripts](https://github.com/sheffieldnlp/qe-eval-scripts).* ### Languages There are two language pairs in this dataset: - English - German (`en` - `de`) - English - Chinese (`en` - `zh`) ## Dataset Structure ### Data Instances An example looks like this: ``` { 'translation': { 'en': 'favorite fish include cod , salmon , winter flounder , haddock , striped bass , pollock , hake , bluefish , and , in southern New England , Tautog .', 'de': 'zu den Lieblingsfischen gehören Kabeljau , Lachs , Winterflounder , Schellfisch , gestreifter Bass , Pollock , Seehecht , Rotbarsch und in Südengland Tautog .', }, 'src_tags': [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1], 'mt_tags': [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1], 'pe': 'zu den Lieblingsfischen zählen Kabeljau , Lachs , Winterflunder , Schellfisch , Wolfsbarsch , Pollock , Seehecht , Bluefish und im Süden Neuenglands Tautog .', 'hter':
0.3199999928474426, 'alignments': [[2, 0], [2, 1], [2, 3], [3, 2], [3, 4], [4, 5], [5, 6], [6, 5], [7, 6], [8, 6], [9, 7], [10, 8], [10, 10], [11, 9], [12, 12], [13, 13], [14, 11], [15, 12], [15, 15], [16, 14], [17, 17], [19, 16], [20, 16], [21, 20], [22, 18], [23, 19], [23, 21], [24, 22], [25, 21], [26, 22], [27, 22], [28, 23], [29, 24]], } ``` ### Data Fields - `translation`: Dictionary with pairs (source, target). - src_lg: sequence of text in source language. - tgt_lg: sequence of text in target language. - `src_tags`: source word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test). - `mt_tags`: target word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test). - `pe`: post-edited version of NMT output. `""` if N/A (only for test). - `hter`: human translation error rate. `-10_000` if N/A (only for test). - `alignments`: Word alignments. List of pairs of integers. ### Data Splits There are 2 configurations in this dataset (one for each available language pair). Each configuration is composed of 7K examples for training, 1K for validation and 1K for (blind) test. ## Dataset Creation ### Curation Rationale The original text is extracted from Wikipedia. From the homepage: *Word-level labels have been obtained by using the alignments provided by the [TER](http://www.cs.umd.edu/~snover/tercom/) tool (settings: tokenised, case insensitive, exact matching only, disabling shifts by using the `-d 0` option) between machine translations and their post-edited versions. Shifts (word order errors) were not annotated as such (but rather as deletions + insertions) to avoid introducing noise in the annotation.* *HTER values are obtained deterministically from word-level tags.
However, when computing HTER, we allow shifts in TER.* *The baseline system is a neural predictor-estimator approach implemented in [OpenKiwi](https://github.com/Unbabel/OpenKiwi) ([Kepler et al., 2019](https://arxiv.org/abs/1902.08646)), where the predictor model will be trained on the parallel data used to train the NMT model.* ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` Not available. ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
xor_tydi_qa
2023-01-25T15:03:13.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "source_datasets:extended|tydiqa", "langu...
null
XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections. There are three sub-tasks: XOR-Retrieve, XOR-EnglishSpan, and XOR-Full.
@misc{asai2020xor, title={XOR QA: Cross-lingual Open-Retrieval Question Answering}, author={Akari Asai and Jungo Kasai and Jonathan H. Clark and Kenton Lee and Eunsol Choi and Hannaneh Hajishirzi}, year={2020}, eprint={2010.11856}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
1
4
--- annotations_creators: - crowdsourced language_creators: - expert-generated - found language: - ar - bn - fi - ja - ko - ru - te license: - mit multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original - extended|tydiqa task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: xor-tydi-qa pretty_name: XOR QA dataset_info: - config_name: xor-retrieve features: - name: question dtype: string - name: lang dtype: class_label: names: '0': ar '1': bn '2': fi '3': ja '4': ko '5': ru '6': te - name: answers dtype: string splits: - name: train num_bytes: 1698662 num_examples: 15250 - name: validation num_bytes: 259533 num_examples: 2110 - name: test num_bytes: 219046 num_examples: 2499 download_size: 3702288 dataset_size: 2177241 - config_name: xor-full features: - name: question dtype: string - name: lang dtype: class_label: names: '0': ar '1': bn '2': fi '3': ja '4': ko '5': ru '6': te - name: answers dtype: string splits: - name: train num_bytes: 7250913 num_examples: 61360 - name: validation num_bytes: 444672 num_examples: 3473 - name: test num_bytes: 706664 num_examples: 8176 download_size: 14018298 dataset_size: 8402249 --- # Dataset Card for XOR QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [XOR QA Homepage](https://nlp.cs.washington.edu/xorqa/) - **Repository:** [XOR QA Repository](https://github.com/AkariAsai/XORQA) - **Paper:** [XOR QA Paper](https://arxiv.org/abs/2010.11856) - **Leaderboard:** [XOR QA Leaderboard](https://nlp.cs.washington.edu/xorqa/) - **Point of Contact:** [Akari Asai](mailto:akari@cs.washington.edu) ### Dataset Summary XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections. ### Supported Tasks and Leaderboards There are three sub-tasks: XOR-Retrieve, XOR-EnglishSpan, and XOR-Full. - `XOR-Retrieve`: XOR-Retrieve is a cross-lingual retrieval task where a question is written in a target language (e.g., Japanese) and a system is required to retrieve English paragraphs that answer the question. The dataset can be used to train a model for cross-lingual retrieval. Success on this task is typically measured by R@5kt and R@2kt (the recall, computed as the fraction of the questions for which the minimal answer is contained in the top 5,000 / 2,000 tokens selected). This task has an active leaderboard which can be found at [leaderboard url](https://nlp.cs.washington.edu/xorqa/) - `XOR-English Span`: XOR-English Span is a cross-lingual retrieval task where a question is written in a target language (e.g., Japanese) and a system is required to output a short answer in English.
The dataset can be used to train a model for cross-lingual retrieval. Success on this task is typically measured by F1, EM. This task has an active leaderboard which can be found at [leaderboard url](https://nlp.cs.washington.edu/xorqa/) - `XOR-Full`: XOR-Full is a cross-lingual retrieval task where a question is written in the target language (e.g., Japanese) and a system is required to output a short answer in a target language. Success on this task is typically measured by F1, EM, BLEU. This task has an active leaderboard which can be found at [leaderboard url](https://nlp.cs.washington.edu/xorqa/) ### Languages The text in the dataset is available in 7 languages: Arabic `ar`, Bengali `bn`, Finnish `fi`, Japanese `ja`, Korean `ko`, Russian `ru`, Telugu `te` ## Dataset Structure ### Data Instances A typical data point comprises a `question`, its `answers`, the `language` of the question text and the split to which it belongs. ``` { "id": "-3979399588609321314", "question": "Сколько детей было у Наполео́на I Бонапа́рта?", "answers": ["сын"], "lang": "ru", "split": "train" } ``` ### Data Fields - `id`: An identifier for each example in the dataset - `question`: Open domain question - `answers`: The corresponding answer to the question posed - `lang`: BCP-47 language tag - `split`: identifier to differentiate train, validation and test splits ### Data Splits The data is split into a training, validation and test set for each of the two configurations. | | train | validation | test | |--------------|------:|-----------:|-----:| | XOR Retrieve | 15250 | 2113 | 2501 | | XOR Full | 61360 | 3179 | 8177 | ## Dataset Creation ### Curation Rationale This task framework closely reflects real-world scenarios, where a QA system uses multilingual document collections and answers questions asked by users with diverse linguistic and cultural backgrounds.
Despite the common assumption that we can find answers in the target language, web resources in non-English languages are largely limited compared to English (information scarcity), or the contents are biased towards their own cultures (information asymmetry). To solve these issues, XOR-TYDI QA (Asai et al., 2020) provides a benchmark for developing a multilingual QA system that finds answers in multiple languages. ### Source Data The annotation pipeline consists of four steps: 1) collection of realistic questions that require cross-lingual references by annotating questions from TYDI QA without a same-language answer; 2) question translation from a target language to the pivot language of English where the missing information may exist; 3) answer span selection in the pivot language given a set of candidate documents; 4) answer verification and translation from the pivot language back to the original language. #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The dataset is created by extending the TyDiQA dataset and translating the questions into other languages. The answers are obtained by crowdsourcing the questions to Mechanical Turk workers. ### Annotations #### Annotation process The English questions from TyDiQA are translated into other languages. The languages are chosen based on the availability of Wikipedia data and the availability of translators. #### Who are the annotators? The translations are carried out using the professional translation service [Gengo](https://gengo.com) and the answers are annotated by Mechanical Turk workers. ### Personal and Sensitive Information The dataset is created from Wikipedia content and the QA task requires preserving the named entities; thereby all the Wikipedia named entities are preserved in the data. Not much information has been provided about masking sensitive information.
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The people associated with the creation of the dataset are Akari Asai, Jungo Kasai, Jonathan H. Clark, Kenton Lee, Eunsol Choi, Hannaneh Hajishirzi. ### Licensing Information XOR-TyDi QA is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license. ### Citation Information ``` @article{xorqa, title = {XOR QA: Cross-lingual Open-Retrieval Question Answering}, author = {Akari Asai and Jungo Kasai and Jonathan H. Clark and Kenton Lee and Eunsol Choi and Hannaneh Hajishirzi}, year = {2020} } ``` ### Contributions Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
yoruba_gv_ner
2023-01-25T15:03:39.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:yo", "license:cc-by-3.0", "region:us" ]
null
The Yoruba GV NER dataset is a labeled dataset for named entity recognition in Yoruba. The texts were obtained from Yoruba Global Voices News articles https://yo.globalvoices.org/ . We concentrate on four types of named entities: persons [PER], locations [LOC], organizations [ORG], and dates & time [DATE]. The Yoruba GV NER data files contain 2 columns separated by a tab ('\t'). Each word has been put on a separate line and there is an empty line after each sentence, i.e. the CoNLL format. The first item on each line is a word, the second is the named entity tag. The named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. For every multi-word expression like 'New York', the first word gets a tag B-TYPE and the subsequent words have tags I-TYPE; a word with tag O is not part of a phrase. The dataset is in the BIO tagging scheme. For more details, see https://www.aclweb.org/anthology/2020.lrec-1.335/
@inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Yorùbá} and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", language = "English", ISBN = "979-10-95546-34-4", }
null
0
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - yo license: - cc-by-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Yoruba GV NER Corpus dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE config_name: yoruba_gv_ner splits: - name: train num_bytes: 358885 num_examples: 817 - name: validation num_bytes: 50161 num_examples: 117 - name: test num_bytes: 96518 num_examples: 237 download_size: 254347 dataset_size: 505564 --- # Dataset Card for Yoruba GV NER Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [Yoruba GV NER](https://github.com/ajesujoba/YorubaTwi-Embedding/tree/master/Yoruba/Yoruba-NER) - **Paper:** 
https://www.aclweb.org/anthology/2020.lrec-1.335/ - **Leaderboard:** - **Point of Contact:** [David Adelani](mailto:didelani@lsv.uni-saarland.de) ### Dataset Summary The Yoruba GV NER is a named entity recognition (NER) dataset for the Yorùbá language based on the [Global Voices news](https://yo.globalvoices.org/) corpus. Global Voices (GV) is a multilingual news platform with articles contributed by journalists, translators, bloggers, and human rights activists from around the world, covering over 50 languages. Most of the texts used in creating the Yoruba GV NER are translations from other languages into Yorùbá. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Yorùbá. ## Dataset Structure ### Data Instances A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. {'id': '0', 'ner_tags': ['B-LOC', 'O', 'O', 'O', 'O'], 'tokens': ['Tanzania', 'fi', 'Ajìjàgbara', 'Ọmọ', 'Orílẹ̀-èdèe'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). O is used for tokens that are not part of any named entity. ### Data Splits Training (19,421 tokens), validation (2,695 tokens) and test split (5,235 tokens) ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Yorùbá. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset is based on the news domain and was crawled from [Global Voices Yorùbá news](https://yo.globalvoices.org/).
[More Information Needed] #### Who are the source language producers? The dataset was contributed by journalists, translators, bloggers, and human rights activists from around the world. Most of the texts used in creating the Yoruba GV NER are translations from other languages into Yorùbá. [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated by Jesujoba Alabi and David Adelani for the paper: [Massive vs. Curated Embeddings for Low-Resourced Languages: the case of Yorùbá and Twi](https://www.aclweb.org/anthology/2020.lrec-1.335/). [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by students of Saarland University, Saarbrücken, Germany. ### Licensing Information The data is under the [Creative Commons Attribution 3.0](https://creativecommons.org/licenses/by/3.0/) license. ### Citation Information ``` @inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
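The blank-line-delimited, tab-separated layout described in the Data Instances section can be read with a short standard-library sketch. This is a hypothetical helper, not the dataset's own loader (the Hugging Face loader already parses the files for you):

```python
def read_conll(lines):
    """Group tab-separated "token<TAB>tag" lines into sentences, split on blank lines.

    `lines` can be an open file or any iterable of strings.
    """
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:                     # a blank line ends the current sentence
            if tokens:
                sentences.append({"tokens": tokens, "ner_tags": tags})
                tokens, tags = [], []
            continue
        token, tag = line.split("\t")
        tokens.append(token)
        tags.append(tag)
    if tokens:                           # flush the final sentence
        sentences.append({"tokens": tokens, "ner_tags": tags})
    return sentences
```

Each returned dictionary mirrors the `tokens`/`ner_tags` fields described above.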
Atsushi/fungi_diagnostic_chars_comparison_japanese
2023-10-08T21:35:23.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ja", "license:cc-by-4.0", "region:us" ]
Atsushi
null
null
null
0
4
--- annotations_creators: - other language: - ja license: - cc-by-4.0 multilinguality: - monolingual source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification size_categories: - 100K<n<1M --- fungi_diagnostic_chars_comparison_japanese Daikinrin (大菌輪) "Diagnostic Characters Summary" dataset Last updated: 2023/10/9 (up to R3-11401) ==== ### Languages Japanese This dataset is available in Japanese only. # Overview [Daikinrin (大菌輪)](http://mycoscouter.coolblog.jp/daikinrin/), a website run personally by Atsushi Nakajima, provides summaries and indexing of several thousand mycological taxonomy papers in the form of "three-line paper summaries". As part of that work, descriptions of the diagnostic characters that are "shared" or "different" between one fungus and another are extracted by hand. This dataset compiles the extracted diagnostic characters, semi-automatically annotated with categories such as "色/color" and "形状/shape". The "three-line paper summaries" are updated daily, but this dataset is planned to be updated roughly once a month. ## Related datasets "Three-line paper summaries": [Atsushi/fungi_indexed_mycological_papers_japanese](https://huggingface.co/datasets/Atsushi/fungi_indexed_mycological_papers_japanese) "Trait Circus dataset" (controlled-vocabulary traits): [Atsushi/fungi_trait_circus_database](https://huggingface.co/datasets/Atsushi/fungi_trait_circus_database) ## Column descriptions * R3ID … ID of the Daikinrin "three-line paper summary". * No … a number assigned within each R3ID so that every diagnostic sentence has a unique ID. * comparison_source … the taxon (scientific name) the comparison is made from. * comparison_target … the taxon (scientific name) the comparison is made against. * sentence … the diagnostic sentence, always in Japanese. * label … the semi-automatically assigned category (manually corrected, but not double-checked, so some misclassifications may remain). The following 25 categories exist: * サイズ/size * 分子系統解析/molecular_phylogenetic_analysis * 形状/shape * 色/color * 地理的分布/geographical_distribution * 生息環境/habitat * 表面性状/surface_characteristics * 構造/structure * 有無/presence * 形態全般/general_morphology * 位置/position * 二次代謝産物/secondary_metabolite * 呈色反応/chemical_reaction * 数量/amount * 発達/development * 生理学的形質/physiological_characters * 分類/classification * 資化・発酵能/assimilation_and_fermentation * 質感/texture * 味・臭い/taste_and_smell * 病害・病原性関連/disease_and_pathogenecity * 全般/general_characters * 耐性・感受性/resistance_and_susceptibility * 栄養摂取様式/nutrition_style * 未分類/unclassified * common_or_different … "1" for characters the two taxa share, "0" for characters that differ. * data_source … the URL of the source (literature) for each record.
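As an illustration of how the columns described above might be used, here is a minimal filtering sketch. The two rows and the helper function are invented placeholders, not real data from the dataset; the same logic applies to records loaded with the `datasets` library:

```python
# Hypothetical rows in the column layout described above (invented placeholders).
rows = [
    {"R3ID": "R3-00001", "No": 1,
     "comparison_source": "Amanita sp. A", "comparison_target": "Amanita sp. B",
     "sentence": "傘の色が暗褐色", "label": "色/color", "common_or_different": 0,
     "data_source": "http://example.org/paper1"},
    {"R3ID": "R3-00001", "No": 2,
     "comparison_source": "Amanita sp. A", "comparison_target": "Amanita sp. B",
     "sentence": "分子系統解析で近縁", "label": "分子系統解析/molecular_phylogenetic_analysis",
     "common_or_different": 1, "data_source": "http://example.org/paper1"},
]

def differing_characters(rows, category):
    """Sentences in a given category where the two taxa differ (flag == 0)."""
    return [r["sentence"] for r in rows
            if r["label"] == category and r["common_or_different"] == 0]

print(differing_characters(rows, "色/color"))  # ['傘の色が暗褐色']
```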
BeIR/beir
2022-10-21T15:30:43.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
3
4
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. For example, a dataset can be downloaded and loaded with the [`beir`](https://github.com/UKPLab/beir) package (a typical usage sketch; see the repository for details):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets (here: SciFact).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: text},
# qrels: {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates retrieval models with standard ranking metrics such as nDCG@10.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory."
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
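Retrieval quality over the qrels described above is commonly summarized as nDCG@10. A minimal sketch of that metric follows (one common DCG formulation with linear gain; the BEIR toolkit itself relies on `pytrec_eval` for its official numbers):

```python
import math

def ndcg_at_k(ranking, relevant, k=10):
    """nDCG@k for one query.

    ranking:  list of doc ids, best first.
    relevant: dict mapping doc id -> relevance grade (as in a qrels row).
    """
    dcg = sum(relevant.get(doc, 0) / math.log2(i + 2)
              for i, doc in enumerate(ranking[:k]))
    ideal = sorted(relevant.values(), reverse=True)[:k]
    idcg = sum(grade / math.log2(i + 2) for i, grade in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

Averaging this value over all queries gives the per-dataset score reported on the leaderboard.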
Fraser/mnist-text-default
2021-02-22T10:48:20.000Z
[ "region:us" ]
Fraser
MNIST dataset adapted to a text-based representation. This allows testing interpolation quality for Transformer-VAEs. System is heavily inspired by Matthew Rayfield's work https://youtu.be/Z9K3cwSL6uM Works by quantising each MNIST pixel into one of 64 characters. Every sample has an up & down version to encourage the model to learn rotation invariant features. Use `.array_to_text(` and `.text_to_array(` methods to test your generated data. Data format: - text: (30 x 28 tokens, 840 tokens total): Textual representation of MNIST digit, for example:
```
00 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
01 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
02 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
03 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
04 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
05 down ! ! ! ! ! ! ! ! ! ! ! ! ! % % % @ C L ' J a ^ @ ! ! ! !
06 down ! ! ! ! ! ! ! ! ( * 8 G K ` ` ` ` ` Y L ` ] Q 1 ! ! ! !
07 down ! ! ! ! ! ! ! - \ ` ` ` ` ` ` ` ` _ 8 5 5 / * ! ! ! ! !
08 down ! ! ! ! ! ! ! % W ` ` ` ` ` R N ^ ] ! ! ! ! ! ! ! ! ! !
09 down ! ! ! ! ! ! ! ! 5 H ; ` ` T # ! + G ! ! ! ! ! ! ! ! ! !
10 down ! ! ! ! ! ! ! ! ! $ ! G ` 7 ! ! ! ! ! ! ! ! ! ! ! ! ! !
11 down ! ! ! ! ! ! ! ! ! ! ! C ` P ! ! ! ! ! ! ! ! ! ! ! ! ! !
12 down ! ! ! ! ! ! ! ! ! ! ! # P ` 2 ! ! ! ! ! ! ! ! ! ! ! ! !
13 down ! ! ! ! ! ! ! ! ! ! ! ! ) ] Y I < ! ! ! ! ! ! ! ! ! ! !
14 down ! ! ! ! ! ! ! ! ! ! ! ! ! 5 ] ` ` > ' ! ! ! ! ! ! ! ! !
15 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! , O ` ` F ' ! ! ! ! ! ! ! !
16 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! % 8 ` ` O ! ! ! ! ! ! ! !
17 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! _ ` _ 1 ! ! ! ! ! ! !
18 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! , A N ` ` T ! ! ! ! ! ! ! !
19 down ! ! ! ! ! ! ! ! ! ! ! ! * F Z ` ` ` _ N ! ! ! ! ! ! ! !
20 down ! ! ! ! ! ! ! ! ! ! ' = X ` ` ` ` S 4 ! ! ! ! ! ! ! ! !
21 down ! ! ! ! ! ! ! ! & 1 V ` ` ` ` R 5 ! ! ! ! ! ! ! ! ! ! !
22 down ! ! ! ! ! ! % K W ` ` ` ` Q 5 # ! ! ! ! ! ! ! ! ! ! ! !
23 down ! ! ! ! . L Y ` ` ` ` ^ B # ! ! ! ! ! ! ! ! ! ! ! ! ! !
24 down ! ! ! ! C ` ` ` V B B % ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
25 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
26 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
27 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
```
- label: Just a number with the texts matching label.
@dataset{dataset, author = {Fraser Greenlee}, year = {2021}, month = {1}, pages = {}, title = {MNIST text dataset.}, doi = {} }
null
0
4
MNIST dataset adapted to a text-based representation. This allows testing interpolation quality for Transformer-VAEs. System is heavily inspired by Matthew Rayfield's work https://youtu.be/Z9K3cwSL6uM Works by quantising each MNIST pixel into one of 64 characters. Every sample has an up & down version to encourage the model to learn rotation invariant features. Use `.array_to_text(` and `.text_to_array(` methods to test your generated data. Data format: - text: (30 x 28 tokens, 840 tokens total): Textual representation of MNIST digit, for example:
```
00 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
01 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
02 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
03 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
04 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
05 down ! ! ! ! ! ! ! ! ! ! ! ! ! % % % @ C L ' J a ^ @ ! ! ! !
06 down ! ! ! ! ! ! ! ! ( * 8 G K ` ` ` ` ` Y L ` ] Q 1 ! ! ! !
07 down ! ! ! ! ! ! ! - \ ` ` ` ` ` ` ` ` _ 8 5 5 / * ! ! ! ! !
08 down ! ! ! ! ! ! ! % W ` ` ` ` ` R N ^ ] ! ! ! ! ! ! ! ! ! !
09 down ! ! ! ! ! ! ! ! 5 H ; ` ` T # ! + G ! ! ! ! ! ! ! ! ! !
10 down ! ! ! ! ! ! ! ! ! $ ! G ` 7 ! ! ! ! ! ! ! ! ! ! ! ! ! !
11 down ! ! ! ! ! ! ! ! ! ! ! C ` P ! ! ! ! ! ! ! ! ! ! ! ! ! !
12 down ! ! ! ! ! ! ! ! ! ! ! # P ` 2 ! ! ! ! ! ! ! ! ! ! ! ! !
13 down ! ! ! ! ! ! ! ! ! ! ! ! ) ] Y I < ! ! ! ! ! ! ! ! ! ! !
14 down ! ! ! ! ! ! ! ! ! ! ! ! ! 5 ] ` ` > ' ! ! ! ! ! ! ! ! !
15 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! , O ` ` F ' ! ! ! ! ! ! ! !
16 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! % 8 ` ` O ! ! ! ! ! ! ! !
17 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! _ ` _ 1 ! ! ! ! ! ! !
18 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! , A N ` ` T ! ! ! ! ! ! ! !
19 down ! ! ! ! ! ! ! ! ! ! ! ! * F Z ` ` ` _ N ! ! ! ! ! ! ! !
20 down ! ! ! ! ! ! ! ! ! ! ' = X ` ` ` ` S 4 ! ! ! ! ! ! ! ! !
21 down ! ! ! ! ! ! ! ! & 1 V ` ` ` ` R 5 ! ! ! ! ! ! ! ! ! ! !
22 down ! ! ! ! ! ! % K W ` ` ` ` Q 5 # ! ! ! ! ! ! ! ! ! ! ! !
23 down ! ! ! ! . L Y ` ` ` ` ^ B # ! ! ! ! ! ! ! ! ! ! ! ! ! !
24 down ! ! ! ! C ` ` ` V B B % ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
25 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
26 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
27 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
```
- label: Just a number with the texts matching label.
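The 64-character quantisation described above can be sketched as follows. This is a hypothetical re-implementation, not the dataset's actual code: it assumes pixel values 0-255 are mapped to the 64 consecutive ASCII characters from '!' (codepoint 33) to '`' (codepoint 96), which matches the characters visible in the example, and the function names only mirror the card's `.array_to_text(`/`.text_to_array(` helpers:

```python
def array_to_text(pixels, direction="down"):
    """Render a 28x28 grid of 0-255 pixel values as "NN down c c c ..." rows."""
    rows = []
    for i, row in enumerate(pixels):
        # 256 intensity values -> 64 levels -> characters '!' (33) .. '`' (96)
        chars = [chr(33 + (p * 64) // 256) for p in row]
        rows.append(f"{i:02d} {direction} " + " ".join(chars))
    return "\n".join(rows)

def text_to_array(text):
    """Invert the mapping back to approximate (quantised) pixel values."""
    grid = []
    for line in text.splitlines():
        tokens = line.split()[2:]          # drop the "NN down" prefix
        grid.append([(ord(c) - 33) * 256 // 64 for c in tokens])
    return grid
```

Note that the round trip is lossy: each recovered pixel value is snapped to one of 64 levels.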
GEM/mlb_data_to_text
2022-10-24T15:30:20.000Z
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:other", "data-to-text", "region:us" ]
GEM
The MLB dataset for data to text generation contains Major League Baseball games statistics and their human-written summaries.
@inproceedings{puduppully-etal-2019-data, title = "Data-to-text Generation with Entity Modeling", author = "Puduppully, Ratish and Dong, Li and Lapata, Mirella", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1195", doi = "10.18653/v1/P19-1195", pages = "2023--2035", }
null
1
4
--- annotations_creators: - none language_creators: - unknown language: - en license: - other multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - table-to-text task_ids: [] pretty_name: mlb_data_to_text tags: - data-to-text --- # Dataset Card for GEM/mlb_data_to_text ## Dataset Description - **Homepage:** https://github.com/ratishsp/mlb-data-scripts - **Repository:** https://github.com/ratishsp/mlb-data-scripts - **Paper:** https://aclanthology.org/P19-1195 - **Leaderboard:** N/A - **Point of Contact:** Ratish Puduppully ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/mlb_data_to_text). ### Dataset Summary The MLB dataset is an English sport-related data-to-text dataset in the baseball domain. The input is a large table with results of a game and the output is a description of the game. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/mlb_data_to_text') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/mlb_data_to_text). #### website [Github](https://github.com/ratishsp/mlb-data-scripts) #### paper [ACL Anthology](https://aclanthology.org/P19-1195) #### authors Ratish Puduppully, Li Dong, Mirella Lapata ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Github](https://github.com/ratishsp/mlb-data-scripts) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Github](https://github.com/ratishsp/mlb-data-scripts) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [ACL Anthology](https://aclanthology.org/P19-1195) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. 
Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{puduppully-etal-2019-data, title = "Data-to-text Generation with Entity Modeling", author = "Puduppully, Ratish and Dong, Li and Lapata, Mirella", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1195", doi = "10.18653/v1/P19-1195", pages = "2023--2035", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Ratish Puduppully #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> ratishpuduppully@gmail.com #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> other: Other license #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The dataset can be used to study data-to-text generation. The dataset is in sports domain. It pairs statistics of Major League Baseball (MLB) game with its summary. The summary is in the form of a document containing an average of 540 tokens. Thus it is useful to study long document generation. #### Add. 
License Info <!-- info: What is the 'other' license of the dataset? --> <!-- scope: periscope --> Restricted to non-commercial research purposes. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Data-to-Text #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> Produce a summary of MLB game from its statistics. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> University of Edinburgh #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Ratish Puduppully, Li Dong, Mirella Lapata ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. 
--> <!-- scope: telescope --> ``` features = datasets.Features( { "home_name": datasets.Value("string"), "box_score": [ { "p_l": datasets.Value("string"), "last_name": datasets.Value("string"), "p_h": datasets.Value("string"), "sac": datasets.Value("string"), "p_bb": datasets.Value("string"), "pos": datasets.Value("string"), "ao": datasets.Value("string"), "p_bf": datasets.Value("string"), "cs": datasets.Value("string"), "hbp": datasets.Value("string"), "ab": datasets.Value("string"), "full_name": datasets.Value("string"), "p_w": datasets.Value("string"), "go": datasets.Value("string"), "fldg": datasets.Value("string"), "p_bs": datasets.Value("string"), "avg": datasets.Value("string"), "p_r": datasets.Value("string"), "p_s": datasets.Value("string"), "lob": datasets.Value("string"), "first_name": datasets.Value("string"), "p_sv": datasets.Value("string"), "p_so": datasets.Value("string"), "p_save": datasets.Value("string"), "p_hr": datasets.Value("string"), "po": datasets.Value("string"), "p_ip1": datasets.Value("string"), "p_ip2": datasets.Value("string"), "bb": datasets.Value("string"), "ops": datasets.Value("string"), "p_hld": datasets.Value("string"), "bo": datasets.Value("string"), "p_loss": datasets.Value("string"), "e": datasets.Value("string"), "p_game_score": datasets.Value("string"), "p_win": datasets.Value("string"), "a": datasets.Value("string"), "p_era": datasets.Value("string"), "d": datasets.Value("string"), "p_out": datasets.Value("string"), "h": datasets.Value("string"), "p_er": datasets.Value("string"), "p_np": datasets.Value("string"), "hr": datasets.Value("string"), "r": datasets.Value("string"), "so": datasets.Value("string"), "t": datasets.Value("string"), "rbi": datasets.Value("string"), "team": datasets.Value("string"), "sb": datasets.Value("string"), "slg": datasets.Value("string"), "sf": datasets.Value("string"), "obp": datasets.Value("string"), } ], "home_city": datasets.Value("string"), "vis_name": datasets.Value("string"), 
"play_by_play": [{ "top": [{ "runs": datasets.Value("string"), "scorers": [ datasets.Value("string") ], "pitcher": datasets.Value("string"), "o": datasets.Value("string"), "b": datasets.Value("string"), "s": datasets.Value("string"), "batter": datasets.Value("string"), "b1": [ datasets.Value("string") ], "b2": [ datasets.Value("string") ], "b3": [ datasets.Value("string") ], "event": datasets.Value("string"), "event2": datasets.Value("string"), "home_team_runs": datasets.Value("string"), "away_team_runs": datasets.Value("string"), "rbi": datasets.Value("string"), "error_runs": datasets.Value("string"), "fielder_error": datasets.Value("string") } ], "bottom": [{ "runs": datasets.Value("string"), "scorers": [ datasets.Value("string") ], "pitcher": datasets.Value("string"), "o": datasets.Value("string"), "b": datasets.Value("string"), "s": datasets.Value("string"), "batter": datasets.Value("string"), "b1": [ datasets.Value("string") ], "b2": [ datasets.Value("string") ], "b3": [ datasets.Value("string") ], "event": datasets.Value("string"), "event2": datasets.Value("string"), "home_team_runs": datasets.Value("string"), "away_team_runs": datasets.Value("string"), "rbi": datasets.Value("string"), "error_runs": datasets.Value("string"), "fielder_error": datasets.Value("string") } ], "inning": datasets.Value("string") } ], "vis_line": { "innings": [{ "inn": datasets.Value("string"), "runs": datasets.Value("string") } ], "result": datasets.Value("string"), "team_runs": datasets.Value("string"), "team_hits": datasets.Value("string"), "team_errors": datasets.Value("string"), "team_name": datasets.Value("string"), "team_city": datasets.Value("string") }, "home_line": { "innings": [{ "inn": datasets.Value("string"), "runs": datasets.Value("string") } ], "result": datasets.Value("string"), "team_runs": datasets.Value("string"), "team_hits": datasets.Value("string"), "team_errors": datasets.Value("string"), "team_name": datasets.Value("string"), "team_city": 
datasets.Value("string") }, "vis_city": datasets.Value("string"), "day": datasets.Value("string"), "summary": [ datasets.Value("string"), ], "gem_id": datasets.Value("string") } ``` #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The high-level structure contains the following attributes: home_name, vis_name, home_city, vis_city, summary, summary_eval, day, gem_id, box_score, play_by_play, home_line, vis_line. The attributes home_name, vis_name, home_city, vis_city and day are string values. The attribute "summary" contains the summary as a list of tokens. The attribute "summary_eval" contains the summary as a string of tokens. The difference from the "summary" field is that "summary_eval" doesn't contain "*NEWPARAGRAPH*" delimiters separating the paragraphs. The "summary_eval" field should be used to evaluate model outputs; the "summary" field may be used during training. box_score contains the box score statistics of the players in the game. It is a list (of average size 90), with each element describing the statistics of one player. The box score statistics contain 53 attributes; the descriptions of most of the attributes are obtained from [mlb.com](https://www.mlb.com/glossary/standard-stats).
- `r`: Runs scored by a player in the game.
- `rbi`: Runs Batted In (RBI): an action of a batter that results in a run scored by other players on the team.
- `pos`: Position of the player.
- `avg`: Batting Average, an indicator of the hits in the player's career.
- `bb`: A walk occurs when a pitcher throws four pitches out of the strike zone, none of which are swung at by the hitter.
- `hr`: The batter hits the ball in the air over the outfield fence.
- `p_r`: Runs given up by a pitcher in the game.
- `p_bb`: Walks allowed by a pitcher in a game.
- `p_h`: Hits allowed by a pitcher in a game.
- `p_hr`: Home runs allowed by a pitcher in a game.
- `p_er`: Earned Run (ER): an earned run is any run that scores against a pitcher.
- `p_era`: Earned Run Average (ERA): earned run average represents the number of earned runs a pitcher allows per nine innings.
- `p_np`: Number of Pitches: a pitcher's total number of pitches is determined by all the pitches he throws in a game.
- `p_ip1`: Innings Pitched (IP1): innings pitched measures the number of innings a pitcher remains in a game. Because there are three outs in an inning, each out recorded represents one-third of an inning pitched.
- `p_ip2`: Innings Pitched (IP2): innings pitched measures the number of innings a pitcher remains in a game. Because there are three outs in an inning, each out recorded represents one-third of an inning pitched.
- `p_w`: A pitcher receives a win when he is the pitcher of record when his team takes the lead for good.
- `p_l`: A pitcher receives a loss when a run that is charged to him proves to be the go-ahead run in the game, giving the opposing team a lead it never gives up.
- `p_so`: A strikeout occurs when a pitcher throws any combination of three swinging or looking strikes to a hitter.
- `p_save`: Save: a save is awarded to the relief pitcher who finishes a game for the winning team. A pitcher cannot receive a save and a win in the same game.
- `p_sv`: Saves: the count of saves recorded by a pitcher in his career.
- `sac`: A sacrifice fly occurs when a batter hits a fly-ball out to the outfield or foul territory that allows a runner to score.
- `p_bf`: Batters faced is simply a count of the number of total plate appearances against a certain pitcher or team. In a perfect game -- with 27 outs -- a pitcher will record 27 batters faced.
- `cs`: A caught stealing occurs when a runner attempts to steal but is tagged out before reaching second base, third base or home plate.
- `hbp`: A hit-by-pitch occurs when a batter is struck by a pitched ball without swinging at it. He is awarded first base as a result.
- `ab`: An official at-bat comes when a batter reaches base via a fielder's choice, hit or an error (not including catcher's interference) or when a batter is put out on a non-sacrifice.
- `p_bs`: A blown save occurs when a relief pitcher enters a game in a save situation, but allows the tying run to score.
- `p_s`: The count of strikes thrown by a pitcher.
- `lob`: Left on base can be viewed as both an individual statistic and a team statistic. In an individual batter's case, it refers to how many men remain on base after that batter makes an out at the plate, as the batter has failed to do his job to score those runners -- or at least put himself in a position to score. In a team's case or in an individual pitcher's case, it refers to the number of men who remain on base at the end of an inning.
- `po`: A fielder is credited with a putout when he is the fielder who physically records the act of completing an out -- whether it be by stepping on the base for a forceout, tagging a runner, catching a batted ball, or catching a third strike.
- `ops`: OPS adds on-base percentage and slugging percentage to get one number that unites the two. It's meant to combine how well a hitter can reach base with how well he can hit for average and for power.
- `p_hld`: A hold occurs when a relief pitcher enters the game in a save situation and maintains his team's lead for the next relief pitcher, while recording at least one out.
- `p_loss`: True/False: indicates the losing pitcher.
- `e`: A fielder is given an error if, in the judgment of the official scorer, he fails to convert an out on a play that an average fielder should have made.
- `p_win`: True/False: indicates the winning pitcher.
- `a`: An assist is awarded to a fielder who touches the ball before a putout is recorded by another fielder.
- `h`: A hit occurs when a batter strikes the baseball into fair territory and reaches base without doing so via an error or a fielder's choice.
- `so`: A strikeout of a batter.
- `team`: Team of the player.
- `sb`: A stolen base occurs when a baserunner advances by taking a base to which he isn't entitled.
- `slg`: Slugging percentage represents the total number of bases a player records per at-bat. Unlike on-base percentage, slugging percentage deals only with hits and does not include walks and hit-by-pitches in its equation.
- `sf`: A sacrifice fly occurs when a batter hits a fly-ball out to the outfield or foul territory that allows a runner to score.
- `obp`: OBP refers to how frequently a batter reaches base per plate appearance. Times on base include hits, walks and hit-by-pitches, but do not include errors, times reached on a fielder's choice or a dropped third strike.

The attributes in play_by_play are described below:
- `batter`: Batter in the play.
- `pitcher`: Pitcher in the play.
- `b1`: Player(s) at first base.
- `b2`: Player(s) at second base.
- `b3`: Player(s) at third base.
- `scorers`: Player(s) who scored in the play.
- `fielder_error`: Player who committed a fielding error.
- `event`: Event of the play, such as single, double, home run, etc.
- `event2`: Second event of the play, such as wild pitch, error, etc.
- `inning`: Inning of the play.
- `top`/`bottom`: If the home team is batting it is bottom; if the away team is batting it is top.
- `o`: Count of outs.
- `b`: Count of balls.
- `s`: Count of strikes.
- `r`: Count of runs.
- `rbi`: Count of runs batted in (RBI).
- `error_runs`: Runs due to errors.
- `home_team_runs`: Score of the home team.
- `vis_team_runs`: Score of the visiting team.

`home_line` and `vis_line` contain string value pairs for `team_name`, `team_city`, `team_runs`, `team_hits`, `team_error`, `result`, and a list of runs scored in each inning. #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> There are three splits in the dataset: train, validation and test. #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used.
If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> The splits are random. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> This dataset can verify if models are capable of long document generation. The challenges in long document generation conditioned on input tables include ensuring coherent output, staying faithful to the input, ensuring fluent output and avoiding repetition of text. Such aspects can be verified with models trained on this dataset. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> no #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? --> <!-- scope: microscope --> Compared to the existing RotoWire (Wiseman et al. 2017) dataset, MLB summaries are longer (by approximately 50%) and the input records are richer and more structured (with the addition of play-by-play). Moreover, the MLB dataset is five times larger in terms of data size (i.e., pairs of tables and game summaries). #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Long document generation, coherent ordering of information, faithfulness to the input statistics, fluency in generation and avoiding repetition of text. ### GEM-Specific Curation #### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to the original dataset? --> <!-- scope: periscope --> `data points removed` #### Modification Details <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification --> <!-- scope: microscope --> Some examples that satisfied the criteria below have been removed from the training dataset: 1. Examples in the training dataset which overlapped with validation/test. 2. Some examples which described washed-out games. #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> The [research paper](https://aclanthology.org/P19-1195) is a good resource ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Automatic evaluation measures can evaluate the factuality, content selection, content ordering and the fluency of the model output. The factuality, content selection and content ordering are measured using an Information Extraction based evaluation approach introduced by Wiseman et al. (2017). The fluency is measured using BLEU. #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `Other: Other Metrics` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> Wiseman et al.
(2017) define three metrics induced from the outputs of an Information Extraction model which is run on the model-generated and human-written game summaries. Let ŷ be the gold summary and y the model output. - Relation Generation (RG) measures the precision and count of relations extracted from y that also appear in the records r. - Content Selection (CS) measures the precision and recall of relations extracted from y that are also extracted from ŷ. - Content Ordering (CO) measures the complement of the normalized Damerau-Levenshtein distance (Brill and Moore, 2000) between the sequences of relations extracted from y and ŷ. #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> We have reused the automatic metrics based on Information Extraction evaluation introduced by Wiseman et al. (2017). For human evaluation, we conducted studies to evaluate the factuality, coherence, grammaticality and conciseness. #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? --> <!-- scope: microscope --> The most relevant previous results for this dataset are in the TACL 2021 paper on [Data-to-text Generation with Macro Planning](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00381/101876/Data-to-text-Generation-with-Macro-Planning) ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> This dataset was curated to complement an existing data-to-text generation dataset (RotoWire by Wiseman et al. 2017) which focuses on long document generation.
Compared to RotoWire, MLB summaries are longer (by approximately 50%) and the input records are richer and more structured (with the addition of play-by-play). Moreover, the MLB dataset is five times larger in terms of data size (i.e., pairs of tables and game summaries). #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The goal is to study automatic generation of long documents in a data-to-text setting. The generated summaries should exhibit coherent ordering of content, be faithful to the input statistics, be fluent and avoid repetition of text. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Single website` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> The game summaries are produced by professional writers. #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> The language focuses on the sports domain. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> not validated #### Data Preprocessing <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) --> <!-- scope: microscope --> Game summaries were tokenized using NLTK (Bird et al., 2009) and hyphenated words were separated. Sentences containing quotes were removed as they included opinions and non-factual statements unrelated to the input tables. Sometimes MLB summaries contain a "Game notes" section with incidental information, which was also removed.
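The preprocessing steps described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual script: the authors used NLTK for tokenization, while this dependency-free stand-in uses simple regular expressions.

```python
import re

def preprocess_summary(text):
    # The authors tokenized with NLTK; a naive regex tokenizer stands in
    # for it here so the sketch runs without extra dependencies.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    tokens = []
    for sent in sentences:
        # Drop sentences containing quotes (opinions, non-factual statements).
        if '"' in sent:
            continue
        # Split off punctuation; hyphenated words separate into their parts
        # because the hyphen is matched as its own token.
        tokens.extend(re.findall(r"\w+|[^\w\s]", sent))
    return tokens

print(preprocess_summary('The Yankees won 5-3. "Great game," he said.'))
# -> ['The', 'Yankees', 'won', '5', '-', '3', '.']
```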
#### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> The copyright remains with the original data creators and the usage permission is restricted to non-commercial uses. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> yes/very likely #### Categories of PII <!-- info: What categories of PII are present or suspected in the data? --> <!-- scope: periscope --> `sensitive information`, `generic PII` #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> unsure ## Considerations for Using the Data ### PII Risks and Liability ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `research use only` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `research use only` ### Known Technical Limitations
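The Content Ordering (CO) metric defined in the Previous Results section of this card lends itself to a short worked sketch. The implementation below is illustrative: it uses the restricted Damerau-Levenshtein (optimal string alignment) distance and normalizes by the longer sequence length, while the exact variant and normalization in the official evaluation scripts may differ.

```python
def dl_distance(a, b):
    # Restricted Damerau-Levenshtein (optimal string alignment) distance
    # between two sequences of extracted relations.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def content_ordering(pred_relations, gold_relations):
    # CO = complement of the normalized Damerau-Levenshtein distance.
    if not pred_relations and not gold_relations:
        return 1.0
    norm = max(len(pred_relations), len(gold_relations))
    return 1.0 - dl_distance(pred_relations, gold_relations) / norm

# A single adjacent swap in a 3-relation sequence costs one edit:
print(content_ordering(["r1", "r2", "r3"], ["r1", "r3", "r2"]))  # 0.666...
```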
Gabriel/quora_swe
2022-10-22T09:39:38.000Z
[ "task_categories:text-retrieval", "task_categories:text-classification", "task_ids:semantic-similarity-classification", "size_categories:10K<n<100K", "language:sv", "license:mit", "question-pairing", "semantic-search", "region:us" ]
Gabriel
null
null
null
0
4
--- language: - sv license: - mit size_categories: - 10K<n<100K task_categories: - text-retrieval - text-classification task_ids: - semantic-similarity-classification tags: - question-pairing - semantic-search --- # Dataset Card for "quora_swe" The quora_swe dataset is a subset of the automatically machine-translated (NMT) Swedish semantic textual similarity dataset quora-deduplicates.
HHousen/ParaSCI
2021-11-24T03:38:25.000Z
[ "arxiv:2101.08382", "region:us" ]
HHousen
null
null
null
1
4
Reformatted version of the ParaSCI dataset from [ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation](https://arxiv.org/abs/2101.08382). Data retrieved from [dqxiu/ParaSCI](https://github.com/dqxiu/ParaSCI).
Karavet/ILUR-news-text-classification-corpus
2022-10-21T16:06:12.000Z
[ "task_categories:text-classification", "multilinguality:monolingual", "language:hy", "license:apache-2.0", "region:us" ]
Karavet
null
null
null
0
4
--- language: - hy task_categories: [news-classification, text-classification] multilinguality: [monolingual] task_ids: [news-classification, text-classification] license: - apache-2.0 --- ## Table of Contents - [Table of Contents](#table-of-contents) - [News Texts Dataset](#news-texts-dataset) ## News Texts Dataset We release a dataset of over 12000 news articles from [iLur.am](http://www.ilur.am/), categorized into 7 classes: sport, politics, weather, economy, accidents, art, society. The articles are split into train (2242k tokens) and test sets (425k tokens). For more details, refer to the [paper](https://arxiv.org/ftp/arxiv/papers/1906/1906.03134.pdf).
RohanAiLab/persian_news_dataset
2022-10-21T16:13:59.000Z
[ "task_categories:text-classification", "task_ids:language-modeling", "task_ids:multi-class-classification", "source_datasets:original", "language:fa", "region:us" ]
RohanAiLab
persian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies for the last 12 years. The dataset is provided by Rohan AI lab for research purposes. for more information refer to this link:
https://saied71.github.io/RohanAiLab/, author={Saied Alimoradi}, year={2021} }
null
1
4
--- pretty_name: persian_news_datset language: - fa source_datasets: - original task_categories: - text-classification - sequence-modeling task_ids: - language-modeling - multi-class-classification --- # Persian_News_Dataset # Dataset Summary persian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies for the last 12 years. This dataset can be used in different NLP tasks like language modeling, classification, supervised topic modeling,... This effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset in datasets library:[Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html) # Description As discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of dataset: ``` text :سه‌شنبه شب از دور برگشت مرحله نیمه‌نهایی لیگ قهرمانان اروپا، منچسترسیتی در ورزشگاه «اتحاد» میزبان پاری‌سن‌ژرمن بود و با ارائه نمایشی حساب شده و تحسین برانگیز به پیروزی دو بر صفر دست یافت.بازی رفت در پاریس با برتری دو بر یک سیتی به اتمام رسیده بود و با این اوصاف تیم تحت هدایت «پپ گواردیولا» در مجموع با پیروزی چهار بر یک، راهی فینال شد.بارش برف موجب سفیدپوش شدن زمین شده بود و همین امر بر عملکرد تیم‌ها تاثیر گذاشت. دیدار در حالی آغاز به کار کرد که «امباپه» ستاره پاریسی‌ها که به تازگی از مصدومیت رهایی پیدا کرده است، نیمکت‌نشین بود.بازی با حملات میهمان آغاز شد و در دقیقه هفتم داور هلندی با تصمیمی عجیب اعتقاد داشت توپ به دست «زینچنکو» مدافع سیتی برخورد کرده و نقطه پنالتی را نشان داد، اما با استفاده از سیستم کمک داور ویدئویی، پنالتی پس گرفته شد. 
سیتی خیلی زود به هدفش رسید و در دقیقه ۱۰ حرکت عالی او و پاس به «دی‌بروین» موجب شد تا توپ در یک رفت و برگشت به «ریاض محرز» رسیده و این بازیکن الجزایری گل نخست بازی را برای میزبان به ارمغان آورد.در دقیقه ۱۶ ضربه سر «مارکینیوش» مدافع پیش‌تاخته پاری‌سن‌ژرمن با بدشانسی به تیرک دروازه سیتی برخورد کرد.در ادامه برای دقایقی، بازیکنان در میانه میدان خطاهای متعددی انجام دادند و این امر موجب ایجاد چند درگیری شد.هرچند نماینده فرانسه درپی جبران مافات بود اما برنامه‌ای برای رسیدن به این مهم نداشت تا نیمه نخست با همین یک گل همراه شود.در نیمه دوم هم حملات پاریسی‌ها سودی نداشت و در طرف مقابل منچسترسیتی، بازی بسیار هوشمندانه‌ای ارائه کرد.در دقیقه ۶۲ و در ضد حمله‌ای برق آسا، «فیل فودن» با پاسی عالی توپ را به «ریاض محرز» رساند تا این بازیکن گل دوم خود و تیمش را ثبت کرده و سند صعود سیتی به فینال را امضا کند.در دقیقه ۶۸ «آنخل دی‌ماریا» وینگر آرژانتینی تیم پاری‌سن‌ژرمن پس از درگیری با «فرناندینو» با کارت قرمز داور از زمین اخراج شد تا کار تیمش تمام شود.در این بازی پاری‌سن‌ژرمن با تفکرات «پوچتینو»، طراحی حملات خود را به «نیمار» سپرده بود اما این بازیکن مطرح برزیلی با حرکات انفرادی بیش از از اندازه، عملکرد خوبی نداشت و حملات تیمش را خراب کرد.در نهایت بازی با پیروزی سیتی همراه شد و مالکان ثروتمند منچسترسیتی به آرزوی خود رسیده و پس از سال‌ها سرمایه‌گذاری به دیدار نهایی رسیدند. این اولین حضور سیتی در فینال لیگ قهرمانان اروپا است.چهارشنبه شب در دیگر دیدار دور برگشت نیمه‌نهایی، چلسی انگلیس در ورزشگاه «استمفورد بریج» شهر لندن پذیرای رئال‌مادرید اسپانیا است. بازی رفت با تساوی یک بر یک به اتمام رسید title:آرزوی سیتی برآورده شد؛ صعود شاگردان «گواردیولا» به فینال category:ورزش ``` # Citation ``` rohanailab@gmail.com title={persian_news_dataset}, author={Saied Alimoradi}, year={2021} } ```
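The supervised classification use mentioned in the summary above can be illustrated with a small stand-alone sketch over records shaped like the (text, title, category) sample. The records below are toy stand-ins; the real corpus would be loaded through the `datasets` library as described in the linked HOW-TO.

```python
from collections import Counter

# Toy stand-ins mirroring the article schema shown above: text, title, category.
records = [
    {"text": "...", "title": "...", "category": "sport"},
    {"text": "...", "title": "...", "category": "politics"},
    {"text": "...", "title": "...", "category": "sport"},
]

# Label distribution: a typical first step of any supervised classification setup.
label_counts = Counter(r["category"] for r in records)
print(label_counts.most_common())  # [('sport', 2), ('politics', 1)]
```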
Tevatron/wikipedia-squad-corpus
2021-09-23T02:17:59.000Z
[ "region:us" ]
Tevatron
null
@inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", }
null
1
4
Entry not found
abdusah/masc_dev
2022-07-01T15:28:05.000Z
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:ar", "license:cc-by-nc-4.0", "region:us" ]
abdusah
null
null
null
0
4
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar license: - cc-by-nc-4.0 multilinguality: [] paperswithcode_id: [] pretty_name: 'MASC' size_categories: source_datasets: [] task_categories: [] task_ids: [] --- # Dataset Card for MASC: MASSIVE ARABIC SPEECH CORPUS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus - **Repository:** - **Paper:** https://dx.doi.org/10.21227/e1qb-jv46 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This corpus contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels.
MASC is a multi-regional, multi-genre, and multi-dialect dataset intended to advance the research and development of Arabic speech technology, with special emphasis on Arabic speech recognition. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Multi-dialect Arabic ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields #### masc_dev - speech - sampling_rate - target_text (label) ### Data Splits #### masc_dev - train: 100 - test: 40 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information Note: this is a small development set for testing. ### Dataset Curators [More Information Needed] ### Licensing Information CC 4.0 ### Citation Information [More Information Needed] ### Contributions Mohammad Al-Fetyani, Muhammad Al-Barham, Gheith Abandah, Adham Alsharkawi, Maha Dawas, August 18, 2021, "MASC: Massive Arabic Speech Corpus", IEEE Dataport, doi: https://dx.doi.org/10.21227/e1qb-jv46.
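The fields listed above (`speech`, `sampling_rate`, `target_text`) follow the usual speech-dataset layout of a waveform array plus its sampling rate. A toy sketch of deriving clip duration from them, using stand-in data rather than a real MASC example:

```python
# Toy stand-in for one example: a 16 kHz waveform plus its transcription label.
example = {
    "speech": [0.0] * 32000,   # 32,000 samples of silence
    "sampling_rate": 16000,    # matches the 16 kHz rate stated above
    "target_text": "...",
}

# Duration in seconds = number of samples / samples per second.
duration_s = len(example["speech"]) / example["sampling_rate"]
print(duration_s)  # 2.0
```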
biu-nlp/qa_srl2018
2022-10-19T06:16:06.000Z
[ "region:us" ]
biu-nlp
The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. This dataset, a.k.a "QASRL Bank", "QASRL-v2" or "QASRL-LS" (Large Scale), was constructed via crowdsourcing.
@inproceedings{fitzgerald2018large, title={Large-Scale QA-SRL Parsing}, author={FitzGerald, Nicholas and Michael, Julian and He, Luheng and Zettlemoyer, Luke}, booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={2051--2060}, year={2018} }
null
1
4
Entry not found
classla/FRENK-hate-hr
2022-10-21T07:46:28.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:hr", "license:other", "hate-speech-detection", "offensive-language", "arxiv:1906.02045", "region:us" ]
classla
The FRENK Datasets of Socially Unacceptable Discourse in Croatian.
@misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} }
null
0
4
--- language: - hr license: - other size_categories: - 1K<n<10K task_categories: - text-classification task_ids: [] tags: - hate-speech-detection - offensive-language --- # Offensive language dataset of Croatian comments FRENK 1.0 Croatian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page** https://nl.ijs.si/frenk/ ## Description of the original dataset >The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. > >The data in each language (Croatian (hr), English (en), Slovenian (sl), and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. 
The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian, LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the Croatian data was used. The training segment has been split into the beginning 90% (published here as the training split) and the end 10% (published here as the dev split). The test segment has been preserved in its original form. ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-hr","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-hr","multiclass") ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to lgbt or migrants hate-speech domains * `label`: label of the text instance, see above. 
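If you load the `multiclass` configuration but need the binary scheme, the two label maps above can be bridged directly. A minimal sketch, assuming that every non-acceptable multiclass label collapses to `Offensive` (consistent with the two encodings shown above):

```python
# Multiclass encoding as documented above.
_CLASS_MAP_MULTICLASS = {
    'Acceptable speech': 0,
    'Inappropriate': 1,
    'Background offensive': 2,
    'Other offensive': 3,
    'Background violence': 4,
    'Other violence': 5,
}

def to_binary(multiclass_label: int) -> int:
    """Collapse a multiclass label id to the binary scheme:
    0 ('Acceptable speech') -> 0 ('Acceptable'), everything else -> 1 ('Offensive').
    The collapse rule is an assumption inferred from the two class maps above."""
    return 0 if multiclass_label == 0 else 1

binary = {name: to_binary(idx) for name, idx in _CLASS_MAP_MULTICLASS.items()}
print(binary['Acceptable speech'], binary['Other violence'])  # 0 1
```

With 🤗 Datasets the same collapse can be applied lazily, e.g. `ds.map(lambda ex: {"label": to_binary(ex["label"])})`.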
## Data instance ``` {'text': 'Potpisujem komentar g ankice pavicic', 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
classla/FRENK-hate-sl
2022-10-21T07:46:11.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:sl", "license:other", "hate-speech-detection", "offensive-language", "arxiv:1906.02045", "region:us" ]
classla
The FRENK Datasets of Socially Unacceptable Discourse in Slovene.
@misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} }
null
0
4
--- language: - sl license: - other size_categories: - 1K<n<10K task_categories: - text-classification task_ids: [] tags: - hate-speech-detection - offensive-language --- Slovenian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Croatian subset](https://huggingface.co/datasets/5roop/FRENK-hate-hr). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page** https://nl.ijs.si/frenk/ ## Description of the original dataset >The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. > >The data in each language (Croatian (hr), English (en), Slovenian (sl), and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. 
The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian, LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the Slovenian data was used. The training segment has been split into the beginning 90% (published here as the training split) and the end 10% (published here as the dev split). ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-sl","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-sl","multiclass") ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to lgbt or migrants hate-speech domains * `label`: label of the text instance, see above. 
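Since every instance carries a `topic` field, per-domain evaluation only requires a group-by. A stdlib-only sketch on toy rows that follow the schema above (in practice the rows come from the loaded `train`/`dev`/`test` splits):

```python
from collections import Counter

# Toy rows mirroring the fields documented above; values are illustrative only.
rows = [
    {"text": "...", "target": "No target", "topic": "lgbt",     "label": 0},
    {"text": "...", "target": "Commenter", "topic": "migrants", "label": 1},
    {"text": "...", "target": "No target", "topic": "migrants", "label": 0},
]

# Count offensive (label == 1 in the binary scheme) instances per hate-speech domain.
offensive_per_topic = Counter(r["topic"] for r in rows if r["label"] == 1)
print(dict(offensive_per_topic))  # {'migrants': 1}
```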
## Data instance ``` {'text': 'Otroci so odprti in brez predsodkov.Predsodke jim vcepimo starejši,starši,družba,družina...Če otroku lepo razložimo,razume.Nikoli ni dobro,da omejujemo otroka,njegovo inteligenco in duhovnost z lastnim ne razumevanjem nečesa ali nekoga.Predsodek je miselni zapor,prepreka,da bi bili svobodni.Ljubezen je svoboda.Sem ZA spremembo zakona!Srečno :D', 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
classla/copa_hr
2022-10-25T07:32:15.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "language:hr", "license:cc-by-sa-4.0", "causal-reasoning", "textual-entailment", "commonsense-reasoning", "arxiv:2005.00333", "arxiv:2104.09243", "region:us" ]
classla
The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises (My body cast a shadow over the grass), each given a question (What is the cause?), and two choices (The sun was rising; The grass was cut), with a label encoding which of the choices is more plausible given the annotator or translator (The sun was rising). The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean).
@article{DBLP:journals/corr/abs-2104-09243, author = {Nikola Ljubesic and Davor Lauc}, title = {BERTi{\'{c}} - The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian}, journal = {CoRR}, volume = {abs/2104.09243}, year = {2021}, url = {https://arxiv.org/abs/2104.09243}, archivePrefix = {arXiv}, }
null
0
4
--- language: - hr license: - cc-by-sa-4.0 task_categories: - text-classification task_ids: - natural-language-inference tags: - causal-reasoning - textual-entailment - commonsense-reasoning --- The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises (My body cast a shadow over the grass), each given a question (What is the cause?), and two choices (The sun was rising; The grass was cut), with a label encoding which of the choices is more plausible according to the annotator or translator (The sun was rising). The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean). If you use the dataset in your work, please cite ``` @article{DBLP:journals/corr/abs-2104-09243, author = {Nikola Ljube{\v{s}}i{\'{c}} and Davor Lauc}, title = {BERTi{\'{c}} - The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian}, journal = {CoRR}, volume = {abs/2104.09243}, year = {2021}, url = {https://arxiv.org/abs/2104.09243}, archivePrefix = {arXiv}, } ```
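As a quick illustration of how the features fit together, the `label` picks one of the two choices. A sketch using the English example from the description above, assuming the usual COPA convention that label 0 selects 'choice1':

```python
# Toy sample mirroring the features listed above (values from the English example).
sample = {
    "premise": "My body cast a shadow over the grass.",
    "question": "cause",
    "choice1": "The sun was rising.",
    "choice2": "The grass was cut.",
    "label": 0,       # 0 -> choice1, 1 -> choice2 (assumed COPA convention)
    "changed": False, # boolean flag (see the feature list above)
}

plausible = sample["choice1"] if sample["label"] == 0 else sample["choice2"]
print(plausible)  # The sun was rising.
```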
classla/reldi_sr
2022-10-25T07:30:33.000Z
[ "task_categories:other", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "language:sr", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
classla
The dataset contains 5462 training samples, 711 validation samples and 725 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent_id'), list of tokens ('tokens'), list of lemmas ('lemmas'), list of UPOS tags ('upos_tags'), list of Multext-East tags ('xpos_tags), list of morphological features ('feats'), and list of IOB tags ('iob_tags'), which are encoded as class labels.
null
null
0
4
--- language: - sr license: - cc-by-sa-4.0 task_categories: - other task_ids: - lemmatization - named-entity-recognition - part-of-speech tags: - structure-prediction - normalization - tokenization --- This dataset is based on 3,748 Serbian tweets that were segmented into sentences, tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities. The dataset contains 5462 training samples (sentences), 711 validation samples and 725 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'), list of MULTEXT-East tags ('xpos\_tags), list of morphological features ('feats'), and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels. If you are using this dataset in your research, please cite the following paper: ``` @article{Miličević_Ljubešić_2016, title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets}, volume={4}, url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007}, DOI={10.4312/slo2.0.2016.2.156-188}, number={2}, journal={Slovenščina 2.0: empirical, applied and interdisciplinary research}, author={Miličević, Maja and Ljubešić, Nikola}, year={2016}, month={Sep.}, pages={156–188} } ```
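Because `iob_tags` runs parallel to `tokens`, named-entity spans can be recovered by walking the two lists together. A stdlib-only sketch (toy tokens and string tags for readability, whereas the dataset encodes the tags as class labels):

```python
def extract_entities(tokens, iob_tags):
    """Collect (entity_type, text) spans from parallel token/IOB-tag lists."""
    entities, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, iob_tags):
        if tag.startswith("B-"):
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_tokens:
            current_tokens.append(token)
        else:  # "O" or a dangling "I-" tag: close any open span
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

# Toy example; real tag inventories depend on the dataset's label set.
tokens = ["Novak", "Djokovic", "igra", "u", "Beogradu"]
tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(extract_entities(tokens, tags))  # [('PER', 'Novak Djokovic'), ('LOC', 'Beogradu')]
```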
collectivat/tv3_parla
2022-12-12T09:01:48.000Z
[ "task_categories:automatic-speech-recognition", "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ca", "license:cc-by-nc-4.0", ...
collectivat
This corpus includes 240 hours of Catalan speech from broadcast material. The details of segmentation, data processing and also model training are explained in Külebi, Öktem; 2018. The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA); we processed their material and hereby make it available under their terms of use. This project was supported by the Softcatalà Association.
@inproceedings{kulebi18_iberspeech, author={Baybars Külebi and Alp Öktem}, title={{Building an Open Source Automatic Speech Recognition System for Catalan}}, year=2018, booktitle={Proc. IberSPEECH 2018}, pages={25--29}, doi={10.21437/IberSPEECH.2018-6} }
null
3
4
--- annotations_creators: - found language_creators: - found language: - ca license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - automatic-speech-recognition - text-generation task_ids: - language-modeling pretty_name: TV3Parla --- # Dataset Card for TV3Parla ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://collectivat.cat/asr#tv3parla - **Repository:** - **Paper:** [Building an Open Source Automatic Speech Recognition System for Catalan](https://www.isca-speech.org/archive/iberspeech_2018/kulebi18_iberspeech.html) - **Point of Contact:** [Col·lectivaT](mailto:info@collectivat.cat) ### Dataset Summary This corpus includes 240 hours of Catalan speech from broadcast material. The details of segmentation, data processing and also model training are explained in Külebi, Öktem; 2018. 
The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA); we processed their material and hereby make it available under their terms of use. This project was supported by the Softcatalà Association. ### Supported Tasks and Leaderboards The dataset can be used for: - Language Modeling. - Automatic Speech Recognition (ASR), which transcribes utterances into words. ### Languages The dataset is in Catalan (`ca`). ## Dataset Structure ### Data Instances ``` { 'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav', 'audio': {'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav', 'array': array([-0.01168823, 0.01229858, 0.02819824, ..., 0.015625 , 0.01525879, 0.0145874 ]), 'sampling_rate': 16000}, 'text': 'algunes montoneres que que et feien anar ben col·locat i el vent també hi jugava una mica de paper bufava vent de cantó alguns cops o de cul i el pelotón el vent el porta molt malament hi havia molts nervis' } ``` ### Data Fields - `path` (str): Path to the audio file. - `audio` (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `text` (str): Transcription of the audio file. ### Data Splits The dataset is split into "train" and "test". 
| | train | test | |:-------------------|-------:|-----:| | Number of examples | 159242 | 2220 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/). ### Citation Information ``` @inproceedings{kulebi18_iberspeech, author={Baybars Külebi and Alp Öktem}, title={{Building an Open Source Automatic Speech Recognition System for Catalan}}, year=2018, booktitle={Proc. IberSPEECH 2018}, pages={25--29}, doi={10.21437/IberSPEECH.2018-6} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
crystina-z/msmarco-passage
2022-02-28T19:13:53.000Z
[ "region:us" ]
crystina-z
null
@misc{bajaj2018ms, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang}, year={2018}, eprint={1611.09268}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
1
4
Entry not found
espejelomar/code_search_net_python_10000_examples
2022-02-20T03:42:13.000Z
[ "license:cc", "region:us" ]
espejelomar
null
null
null
8
4
--- license: cc ---
gfigueroa/wikitext_processed
2022-01-19T18:16:40.000Z
[ "region:us" ]
gfigueroa
null
null
null
0
4
Entry not found
gsarti/change_it
2022-10-27T08:37:09.000Z
[ "task_categories:summarization", "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:it", "license:cc-by-nc-sa-4.0", "conditional-text-generation", "sty...
gsarti
The CHANGE-IT dataset contains approximately 152,000 article-headline pairs, collected from two Italian newspapers situated at opposite ends of the political spectrum, namely la Repubblica (left) and Il Giornale (right), with the two newspapers equally represented. The dataset has been used in the context of the CHANGE-IT task (https://sites.google.com/view/change-it) during the Evalita 2020 evaluation campaign (http://www.evalita.it/2020). CHANGE-IT is a generation task for Italian – more specifically, a style transfer task for headlines of Italian newspapers. Given a (collection of) headlines from one newspaper, namely Il Giornale (G) or La Repubblica (R), it challenges automatic systems to change all G-headlines to headlines in style R, and all R-headlines to headlines in style G. Although the task only concerns headline change, the dataset includes both the headlines and their respective full articles.
@inproceedings{demattei-etal-2020-changeit, author = {De Mattei, Lorenzo and Cafagna, Michele and Dell'Orletta, Felice and Nissim, Malvina and Gatt, Albert}, title = {{CHANGE-IT @ EVALITA 2020}: Change Headlines, Adapt News, GEnerate}, booktitle = {Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)}, editor = {Basile, Valerio and Croce, Danilo and Di Maro, Maria, and Passaro, Lucia C.}, publisher = {CEUR.org}, year = {2020}, address = {Online} }
null
1
4
--- annotations_creators: - no-annotation language_creators: - found language: - it license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - unknown source_datasets: - original task_categories: - summarization - text-generation task_ids: [] pretty_name: change-it tags: - conditional-text-generation - style-transfer --- # Dataset Card for CHANGE-IT ## Table of Contents - [Dataset Card for CHANGE-IT](#dataset-card-for-change-it) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Style Transfer](#style-transfer) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [https://live.european-language-grid.eu/catalogue/corpus/7373](https://live.european-language-grid.eu/catalogue/corpus/7373) - **Repository:** [Github](https://github.com/michelecafagna26/CHANGE-IT) - **Paper:** [CEUR-ws.org](http://ceur-ws.org/Vol-2765/paper169.pdf) - **Video** [Vimeo](https://vimeo.com/484098874) - **Point of Contact:** [Lorenzo De Mattei](lorenzo.demattei@gmail.com) - **Size of downloaded dataset files:** 168.7 MB - **Size of the generated dataset:** 411 MB - **Total amount of disk used:** 579.7 MB ### Dataset Summary The CHANGE-IT dataset contains approximately 152,000 article-headline pairs, collected from two Italian newspapers situated at opposite ends of the political spectrum, namely la Repubblica (left) and Il Giornale (right), with the two newspapers equally represented. 
The dataset has been used in the context of the [CHANGE-IT task](https://sites.google.com/view/change-it) during the [Evalita 2020 evaluation campaign](http://www.evalita.it/2020). CHANGE-IT is a generation task for Italian – more specifically, a style transfer task for headlines of Italian newspapers. Given a (collection of) headlines from one newspaper, namely Il Giornale (G) or La Repubblica (R), it challenges automatic systems to change all G-headlines to headlines in style R, and all R-headlines to headlines in style G. Although the task only concerns headline change, the dataset includes both the headlines and their respective full articles. **Disclaimer**: *The CHANGE-IT dataset is hosted by the [European Language Grid](https://live.european-language-grid.eu/) and licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). To use the dataset with* 🤗 *Datasets, download and unzip the folder from its [ELG page](https://live.european-language-grid.eu/catalogue/corpus/7373) and pass it to the* `load_dataset` *method as:* `datasets.load_dataset('gsarti/change_it', data_dir='path/to/unzipped/folder')` ### Supported Tasks and Leaderboards #### Style Transfer The following table is taken from Table 4 of the original paper, where a *pointer-network* architecture is used as a baseline to perform style transfer in two settings. In the **rep2gio** variant the system is trained to summarize Repubblica headlines from full texts (vice versa for **gio2rep**), and the style transfer is performed by summarizing full texts of the other newspaper in the source newspaper's headline style. **avg** is the average of the two settings. 
| | HH| AH|Main|Compliancy| |--------:|---:|---:|---:|---------:| |`rep2gio`|.649|.876|.799| .449| |`gio2rep`|.639|.871|.435| .240| | `avg`|.644|.874|.616| .345| Here **Main**, **HH** and **AH** are all BERT-base models trained to evaluate the quality of style transfer as follows: - **Main**: the model is trained to classify a generated headline either as `ilgiornale` or `repubblica`, achieving ~80% F1 score on gold data. Tests whether the transfer has been successful. - **Headline-Headline (HH)**: the model is trained to check the compatibility between original and generated headlines. Tests whether the generation is coherent with the reference. - **Article-Headline (AH)**: the model is trained to check the compatibility between original full-text article and generated headlines. Tests whether the generation is coherent with the source article. The final metric, **Overall compliancy**, is a binary metric that is positive if the other three metrics match (**Main** decision is reversed, **HH** and **AH** predict match), and negative otherwise. Refer to Section 3 of the original paper for more details. ### Languages The language data in CHANGE-IT is in Italian (BCP-47 `it`). ## Dataset Structure ### Data Instances A sample from the `test` split of the `ilgiornale` config is provided below. The other configuration, `repubblica`, has the same structure. ```json { "id": 0, "headline": "Ucraina, coalizione della Timoshenko denuncia irruzione nella sede", "full_text": "Rimane alta la tensione in Ucraina , dove da giorni i manifestanti scendono in piazza per protestare contro la decisione del presidente Viktor Yanukovich, che ha deciso di congelare l'accordo di associazione con l'Unione Europea. Il momento è molto delicato. L'opposizione teme una repressione violenza della protesta, con le forze speciali che hanno costretto i manifestanti a Kiev ad allontanarsi dalla sede del governo, per ripiegare su piazza Indipendenza. 
Il leader d'opposizione Vitaly Klitschko ha invitato il presidente a non utilizzare la forza, se non vuole avere il sangue dei manifestanti sulle sue mani. Nel frattempo il presidente Yanukovich ha aperto alla possibilità di un dialogo, annunciando per domani un incontro con i suoi due predecessori, Leonid Kuchma e Viktor Yushchenko. Ieri un milioni di persone sono scese in piazza, scaduti i due giorni di ultimatum dati al governo per indire nuove elezioni, I manifestanti hanno rovesciato la grande statua di Lenin posta sul boulevard Shevchenko. Piazza Indipendenza (Maidan Nezalezhnosti) resta il punto più caldo della capitale. Qui sono state erette barricate davanti agli ingressi della metropolitana, nel tentativo di preparsi a un'azione della polizia, che al momento non ha però preso iniziative contro i dimostranti. In serata Batkivshcyna, la coalizione dell'ex premier Yulia Timoshenko , ha denunciato l'irruzione di almeno venti agenti della polizia antisommossa nel proprio quartier generale. Il portavoce della polizia, Olga Bilyk, ha smentito: \"Né la polizia di Kiev, né la Berkut - ha dichiarato - hanno condotto operazioni nella sede\".", "alignment": "A2" } ``` The text is provided as-is, without further preprocessing or tokenization. ### Data Fields - `headline`: The original headline for the newspaper. - `full_text`: The article full text associated to the respective headline. - `alignment`: The alignment value used for the style transfer experiments. Values: - `A1`: Top 5K pairs, highly aligned. - `A2`: Test set, highly aligned. - `A3`: 10K to 20K pairs, fairly aligned. - `R`: Bottom ~50K pairs, weakly/not aligned. 
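Since the `alignment` value is stored per pair, restricting training to the highly aligned portion is a simple filter. A sketch on toy rows; with 🤗 Datasets the equivalent would be `ds.filter(lambda ex: ex["alignment"] == "A1")`, assuming the local `data_dir` loading shown in the disclaimer above:

```python
# Toy rows mirroring the schema above; values are illustrative only.
rows = [
    {"id": 0, "headline": "h0", "full_text": "t0", "alignment": "A1"},
    {"id": 1, "headline": "h1", "full_text": "t1", "alignment": "A3"},
    {"id": 2, "headline": "h2", "full_text": "t2", "alignment": "R"},
]

# Keep only the top highly aligned pairs (A1 in the alignment scheme above).
highly_aligned = [r for r in rows if r["alignment"] == "A1"]
print([r["id"] for r in highly_aligned])  # [0]
```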
### Data Splits | config| train| test| |---------:|-------------------------------------:|-----------:| |`ilgiornale`|5'000 (A1) + 10'000 (A3) + 48'701 (R) | 5'000 (A2) | |`repubblica`|5'000 (A1) + 10'000 (A3) + 48'701 (R) | 5'000 (A2) | ### Dataset Creation Please refer to the original article [CHANGE-IT @ EVALITA 2020: Change Headlines, Adapt News, GEnerate](http://ceur-ws.org/Vol-2765/paper169.pdf) for additional information on dataset creation. ## Additional Information ### Dataset Curators The organizers of the CHANGE-IT shared tasks are the curators of the original dataset. For problems or updates on the 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com). ### Licensing Information Licensed with Creative Commons Attribution Non Commercial Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information Please cite the authors if you use these corpora in your work: ``` @inproceedings{demattei-etal-2020-changeit, author = {De Mattei, Lorenzo and Cafagna, Michele and Dell'Orletta, Felice and Nissim, Malvina and Gatt, Albert}, title = {{CHANGE-IT @ EVALITA 2020}: Change Headlines, Adapt News, GEnerate}, booktitle = {Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)}, editor = {Basile, Valerio and Croce, Danilo and Di Maro, Maria, and Passaro, Lucia C.}, publisher = {CEUR.org}, year = {2020}, address = {Online} }
huggingartists/oxxxymiron
2022-10-25T09:40:52.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
4
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/oxxxymiron" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 2.070318 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: 
url(&#39;https://images.genius.com/57ecbbdaf70c671be2d8b7bd39112db0.1000x1000x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/oxxxymiron"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Oxxxymiron</div> <a href="https://genius.com/artists/oxxxymiron"> <div style="text-align: center; font-size: 14px;">@oxxxymiron</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/oxxxymiron). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/oxxxymiron") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |210| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/oxxxymiron") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk}, year={2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/the-beatles
2022-10-25T09:46:31.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
4
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/the-beatles" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 1.07072 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: 
url(&#39;https://images.genius.com/df75ede64ffcf049727bfbb01d323081.400x400x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/the-beatles"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">The Beatles</div> <a href="https://genius.com/artists/the-beatles"> <div style="text-align: center; font-size: 14px;">@the-beatles</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/the-beatles). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/the-beatles") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |878| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/the-beatles") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk}, year={2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
kevinjesse/ManyTypes4TypeScript
2022-10-22T08:35:33.000Z
[ "annotations_creators:found", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:code", "license:cc-by-4.0", "region:us" ]
kevinjesse
null
null
null
1
4
--- license: - cc-by-4.0 annotations_creators: - found - machine-generated language_creators: - found language: - code language_details: TypeScript multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - structure-prediction task_ids: - type-inference pretty_name: ManyTypes4TypeScript --- # Models Trained On ManyTypes4TypeScript - [**CodeBERT**](https://huggingface.co/kevinjesse/codebert-MT4TS) - [**GraphCodeBERT**](https://huggingface.co/kevinjesse/graphcodebert-MT4TS) - [**CodeBERTa**](https://huggingface.co/kevinjesse/codeberta-MT4TS) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Dataset:** [https://doi.org/10.5281/zenodo.6387001](https://doi.org/10.5281/zenodo.6387001) - **PapersWithCode:** [https://paperswithcode.com/sota/type-prediction-on-manytypes4typescript](https://paperswithcode.com/sota/type-prediction-on-manytypes4typescript) ### Dataset Summary ManyTypes4TypeScript type inference dataset, available at the DOI link below.
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6387001.svg)](https://doi.org/10.5281/zenodo.6387001) Given a line of source code, the task is to identify types that correspond with the tokens of code. We treat this as a tagging task similar to NER and POS where the model must predict a structural property of code i.e types. This is a classification task where the labels are the top occurring types in the training dataset. The size type vocabulary can be changed with the scripts found on Github. ### Supported Tasks and Leaderboards - `multi-class-classification`: The dataset can be used to train a model for predicting types across a sequence. ### Languages - TypeScript ## Dataset Structure ### Data Instances An example of 'validation' looks as follows. ``` { "tokens": ["import", "{", "Component", ",", "ChangeDetectorRef", "}", "from", "'@angular/core'", ";", "import", "{", "Router", "}", "from", "'@angular/router'", ";", "import", "{", "MenuController", "}", "from", "'@ionic/angular'", ";", "import", "{", "Storage", "}", "from", "'@ionic/storage'", ";", "import", "Swiper", "from", "'swiper'", ";", "@", "Component", "(", "{", "selector", ":", "'page-tutorial'", ",", "templateUrl", ":", "'tutorial.html'", ",", "styleUrls", ":", "[", "'./tutorial.scss'", "]", ",", "}", ")", "export", "class", "TutorialPage", "{", "showSkip", "=", "true", ";", "private", "slides", ":", "Swiper", ";", "constructor", "(", "public", "menu", ",", "public", "router", ",", "public", "storage", ",", "private", "cd", ")", "{", "}", "startApp", "(", ")", "{", "this", ".", "router", ".", "navigateByUrl", "(", "'/app/tabs/schedule'", ",", "{", "replaceUrl", ":", "true", "}", ")", ".", "then", "(", "(", ")", "=>", "this", ".", "storage", ".", "set", "(", "'ion_did_tutorial'", ",", "true", ")", ")", ";", "}", "setSwiperInstance", "(", "swiper", ")", "{", "this", ".", "slides", "=", "swiper", ";", "}", "onSlideChangeStart", "(", ")", "{", "this", ".", "showSkip", "=", "!", "this", ".", "slides", 
".", "isEnd", ";", "this", ".", "cd", ".", "detectChanges", "(", ")", ";", "}", "ionViewWillEnter", "(", ")", "{", "this", ".", "storage", ".", "get", "(", "'ion_did_tutorial'", ")", ".", "then", "(", "res", "=>", "{", "if", "(", "res", "===", "true", ")", "{", "this", ".", "router", ".", "navigateByUrl", "(", "'/app/tabs/schedule'", ",", "{", "replaceUrl", ":", "true", "}", ")", ";", "}", "}", ")", ";", "this", ".", "menu", ".", "enable", "(", "false", ")", ";", "}", "ionViewDidLeave", "(", ")", "{", "this", ".", "menu", ".", "enable", "(", "true", ")", ";", "}", "}"], "labels": [null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "MenuController", null, null, "Router", null, null, "Storage", null, null, "ChangeDetectorRef", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "Swiper", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null], "url": 
"https://github.com/ionic-team/ionic-conference-app", "path": "ionic-conference-app/src/app/pages/tutorial/tutorial.ts", "commit_hash": "34d97d29369377a2f0173a2958de1ee0dadb8a6e", "file": "tutorial.ts"} } ``` ### Data Fields The data fields are the same among all splits. #### default |field name | type | description | |------------|-------------|--------------------------------------------| |tokens |list[string] | Sequence of tokens (word tokenization) | |labels |list[string] | A list of corresponding types | |url |string | Repository URL | |path |string | Original file path that contains this code | |commit_hash |string | Commit identifier in the original project | |file |string | File name | ### Data Splits | name | train |validation| test | |---------:|---------:|---------:|--------:| |projects | 75.00% | 12.5% | 12.5% | |files | 90.53% | 4.43% | 5.04% | |sequences | 91.95% | 3.71% | 4.34% | |types | 95.33% | 2.21% | 2.46% | ## Types by the Numbers ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations Human-annotated types in optionally typed languages, plus compiler-inferred annotations. #### Annotation process #### Who are the annotators? Developers and the TypeScript compiler. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/kevinjesse ### Licensing Information Creative Commons Attribution 4.0 (CC BY 4.0) license ### Citation Information ``` ```
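As a usage sketch, the parallel `tokens`/`labels` lists from a data instance can be zipped to recover only the annotated (token, type) positions; the values below are a small illustrative excerpt of such an instance, not a full example:

```python
# Illustrative excerpt of the parallel tokens/labels encoding:
# positions without a type annotation carry a null (None) label.
tokens = ["public", "menu", ",", "public", "router", ";"]
labels = [None, "MenuController", None, None, "Router", None]

# Recover only the annotated (token, type) pairs.
typed_positions = [(tok, typ) for tok, typ in zip(tokens, labels) if typ is not None]
print(typed_positions)  # [('menu', 'MenuController'), ('router', 'Router')]
```

The same pairing is what a tagger is scored on: predictions at null positions are ignored, and only typed positions contribute to the classification metrics.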
kresnik/librispeech_asr_test
2022-01-18T15:51:58.000Z
[ "region:us" ]
kresnik
\ LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows: ```python import soundfile as sf def map_to_array(batch): speech_array, _ = sf.read(batch["file"]) batch["speech"] = speech_array return batch dataset = dataset.map(map_to_array, remove_columns=["file"]) ```
\ @inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} }
null
2
4
Entry not found
lijingxin/squad_zen
2022-02-09T03:05:31.000Z
[ "region:us" ]
lijingxin
null
null
null
1
4
For personal use only. From: https://github.com/junzeng-pluto/ChineseSquad. Thanks!
midas/kptimes
2022-02-06T06:21:58.000Z
[ "region:us" ]
midas
\
@inproceedings{gallina2019kptimes, title={KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents}, author={Gallina, Ygor and Boudin, Florian and Daille, B{\'e}atrice}, booktitle={Proceedings of the 12th International Conference on Natural Language Generation}, pages={130--135}, year={2019} }
null
0
4
A dataset for benchmarking keyphrase extraction and generation techniques from news. For more details about the dataset please refer to the original paper - [https://aclanthology.org/W19-8617.pdf](https://aclanthology.org/W19-8617.pdf) Original source of the data - [https://github.com/ygorg/KPTimes](https://github.com/ygorg/KPTimes) ## Dataset Summary <br> <p align="center"> <img src="https://huggingface.co/datasets/midas/kptimes/resolve/main/kptimes-details.png" alt="KPTimes dataset summary" width="90%"/> <br> </p> <br> KPTimes is a large-scale dataset comprising 279,923 news articles from NY Times and 10K from JPTimes. It is one of the datasets whose keyphrase annotations were curated by editors, who can be considered experts. The authors' motivation for producing this dataset was to have a large dataset for training neural models for keyphrase generation in a domain other than the scientific domain, and to understand the differences between keyphrases annotated by experts and non-experts. The authors show that the editors tend to assign generic keyphrases that are not present in the actual news article's text, with 55% of them being abstractive keyphrases. The keyphrases in the news domain as presented in this work were also on average shorter (1.4 words) than those in the scientific datasets (2.4 words). The dataset is randomly divided into train (92.8%), validation (3.6%) and test (3.6%) splits. In order to enable the models trained on this dataset to generalize well, the authors did not want to have the entire data taken from a single source (NY Times), and therefore added 10K more articles from the JPTimes dataset. The authors collected free-to-read article URLs from NY Times spanning from 2006 to 2017, and obtained their corresponding HTML pages from the Internet Archive. They cleaned the HTML tags and extracted the title and the main content of the articles using heuristics.
The gold keyphrases were obtained from the metadata fields - *news_keywords* and *keywords*. The documents in the dataset are full-length news articles, which also makes it a suitable dataset for developing models for identifying keyphrases from long documents. <br> <p align="center"> <img src="https://huggingface.co/datasets/midas/kptimes/resolve/main/KPTimesExample.png" alt="KPTimes sample" width="90%"/> <br> </p> <br> ## Dataset Structure ## Dataset Statistics Table 1: Statistics on the length of the abstractive keyphrases for Train, Test, and Validation splits of the KPTimes dataset. | | Train | Test | Validation | |:------------------: |:-------: |:-------: |:----------: | | Single word | 15.6% | 29.59% | 15.52% | | Two words | 36.7% | 36.88% | 12.38% | | Three words | 29.5% | 20.86% | 29.29% | | Four words | 12.5% | 8.88% | 0% | | Five words | 3.4% | 2.33% | 3.50% | | Six words | 1.4% | 0.93% | 1.38% | | Seven words | 0.4% | 0.27% | 0.37% | | Eight words | 0.24% | 0.13% | 0.21% | | Nine words | 0.14% | 0.013% | 0.10% | | Ten words | 0.02% | 0.0007% | 0.03% | | Eleven words | 0.01% | 0.01% | 0.003% | | Twelve words | 0.008% | 0.011% | 0.007% | | Thirteen words | 0.01% | 0.02% | 0.02% | | Fourteen words | 0.001% | 0% | 0% | | Fifteen words | 0.001% | 0.004% | 0.003% | | Sixteen words | 0.0004% | 0% | 0% | | Seventeen words | 0.0005% | 0% | 0% | | Eighteen words | 0.0004% | 0% | 0% | | Nineteen words | 0.0001% | 0% | 0% | | Twenty words | 0.0001% | 0% | 0% | | Twenty-three words | 0.0001% | 0% | 0% | Table 2: Statistics on the length of the extractive keyphrases for Train, Test, and Validation splits of the KPTimes dataset.
| | Train | Test | Validation | |:--------------: |:-------: |:------: |:----------: | | Single word | 54.2% | 60.0% | 54.38% | | Two words | 33.9% | 32.4% | 33.73% | | Three words | 8.8% | 5.5% | 8.70% | | Four words | 1.9% | 1.04% | 1.97% | | Five words | 0.5% | 0.25% | 0.53% | | Six words | 0.4% | 0.16% | 0.44% | | Seven words | 0.12% | 0.06% | 0.15% | | Eight words | 0.05% | 0.03% | 0.08% | | Nine words | 0.009% | 0% | 0% | | Ten words | 0.0007% | 0.001% | 0% | | Eleven words | 0.0002% | 0% | 0% | | Twelve words | 0.0002% | 0% | 0% | | Thirteen words | 0.0002% | 0% | 0% | Table 3: General statistics of the KPTimes dataset. | Type of Analysis | Train | Test | Validation | |:------------------------------------------------: |:---------------------: |:---------------------: |:---------------------: | | Annotator Type | Professional Indexers | Professional Indexers | Professional Indexers | | Document Type | News Articles | News Articles | News Articles | | No. of Documents | 259,923 | 20,000 | 10,000 | | Avg. Document length (words) | 783.32 | 643.2 | 784.65 | | Max Document length (words) | 7278 | 5503 | 5627 | | Max no. of abstractive keyphrases in a document | 10 | 10 | 10 | | Min no. of abstractive keyphrases in a document | 0 | 0 | 0 | | Avg. no. of abstractive keyphrases per document | 2.87 | 2.30 | 2.89 | | Max no. of extractive keyphrases in a document | 10 | 10 | 9 | | Min no. of extractive keyphrases in a document | 0 | 0 | 0 | | Avg. no. of extractive keyphrases per document | 2.15 | 2.72 | 2.13 |
- **abstractive_keyphrases**: List of all the absent keyphrases. - **other metadata**: Additional information present in the original dataset. - **id** : unique identifier for the document - **date** : publishing date (YYYY/MM/DD) - **categories** : categories of the article (1 or 2 categories) - **title** : title of the document - **abstract** : content of the article - **keyword** : list of keywords ### Data Splits |Split| #datapoints | |--|--| | Train | 259923 | | Test | 20000 | | Validation | 10000 | ## Usage ### Full Dataset ```python from datasets import load_dataset # get entire dataset dataset = load_dataset("midas/kptimes", "raw") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Document BIO Tags: ", train_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"]) print("Other Metadata: ", train_sample["other_metadata"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Document BIO Tags: ", validation_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"]) print("Other Metadata: ", validation_sample["other_metadata"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"]) print("Other Metadata: ", test_sample["other_metadata"]) print("\n-----------\n") ``` **Output** ```bash Sample from training data split Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata'] Tokenized Document: ['For', 'Donald', 'Trump’s', 'Big', 'Speech,', 'an', 'Added', 'Pressure:', 'No', 'Echoes', 'CLEVELAND', '—', 'Until', 'Monday', 'night,', 'Donald', 'J.', 'Trump’s', 'biggest', 'concern', 'about', 'his', 'convention', 'speech', 'was', 'how', 'much', 'to', 'reveal', 'about', 'himself', 'and', 'his', 'family', 'in', 'an', 'address', 'that', 'is', 'often', 'the', 'most', 'personal', 'one', 'a', 'presidential', 'candidate', 'delivers.', 'But', 'the', 'political', 'firestorm', 'over', 'his', 'wife’s', 'speech', ',', 'which', 'borrowed', 'passages', 'from', 'Michelle', 'Obama’s', 'convention', 'remarks', 'in', '2008,', 'raised', 'the', 'stakes', 'exponentially.', 'Mr.', 'Trump’s', 'speech', 'on', 'Thursday', 'night', 'cannot', 'merely', 'be', 'his', 'best', 'ever.', 'It', 'also', 'has', 'to', 'be', 'bulletproof.', 'By', 'Tuesday', 'morning,', 'word', 'had', 'spread', 'throughout', 'his', 'campaign', 'that', 'any', 'language', 'in', 'Mr.', 'Trump’s', 'address', 'even', 'loosely', 'inspired', 'by', 'speeches,', 'essays,', 'books', 'or', 'Twitter', 'posts', 'had', 'to', 'be', 'either', 'rewritten', 'or', 'attributed.', 'Mr.', 'Trump’s', 'chief', 'speechwriter,', 'Stephen', 'Miller,', 'reassured', 'colleagues', 'that', 'the', 'acceptance', 'speech', 'was', 'wholly', 'original,', 'according', 'to', 'two', 'staff', 'members', 'who', 'spoke', 'with', 'him', 'and', 'described', 'those', 'conversations', 'on', 'the', 'condition', 'of', 'anonymity.', 'Mr.', 'Miller', 'also', 'told', 
'campaign', 'aides', 'that', 'he', 'had', 'looked', 'closely', 'at', 'passages', 'that', 'Mr.', 'Trump', 'had', 'contributed', '—', 'handwritten', 'on', 'unlined', 'white', 'pages', '—', 'and', 'was', 'confident', 'they', 'contained', 'no', 'problems.', '(Mr.', 'Miller', 'declined', 'an', 'interview', 'request.)', 'Even', 'so,', 'one', 'of', 'the', 'staff', 'members', 'downloaded', 'plagiarism-detection', 'software', 'and', 'ran', 'a', 'draft', 'of', 'the', 'speech', 'through', 'the', 'program.', 'No', 'red', 'flags', 'came', 'up.', 'The', 'intense', 'scrutiny', 'of', 'Mr.', 'Trump’s', 'words', 'added', 'new', 'pressure', 'to', 'a', 'speechwriting', 'process', 'that', 'has', 'been', 'one', 'of', 'the', 'most', 'unpredictable', 'and', 'free-form', 'in', 'modern', 'presidential', 'campaigns.', 'A', 'month', 'ago,', 'Mr.', 'Trump', 'began', 'giving', 'dictation', 'on', 'themes', 'for', 'the', 'speech,', 'and', 'he', 'tossed', 'ideas', 'and', 'phrases', 'to', 'Mr.', 'Miller', 'or', 'other', 'advisers', 'on', 'a', 'daily', 'basis.', 'On', 'printed', 'copies', 'of', 'each', 'draft,', 'he', 'circled', 'passages', 'he', 'liked,', 'crossed', 'out', 'or', 'put', 'question', 'marks', 'beside', 'lines', 'that', 'he', 'did', 'not', 'favor', 'and', 'frequently', 'suggested', 'new', 'words', 'or', 'phrases.', 'Image', 'Stephen', 'Miller,', 'left,', 'Mr.', 'Trump’s', 'chief', 'speechwriter,', 'and', 'Paul', 'Manafort,', 'the', 'campaign', 'chairman,', 'before', 'an', 'event', 'for', 'the', 'candidate', 'at', 'the', 'Trump', 'SoHo', 'hotel', 'in', 'New', 'York', 'last', 'month.', 'Credit', 'Damon', 'Winter/The', 'New', 'York', 'Times', '“I’ve', 'been', 'amending', 'the', 'drafts', 'big-league,”', 'Mr.', 'Trump', 'said', 'in', 'an', 'interview', 'in', 'his', 'Manhattan', 'office', 'before', 'the', 'convention.', '“I', 'get', 'ideas', 'from', 'a', 'lot', 'of', 'different', 'places,', 'a', 'lot', 'of', 'smart', 'people,', 'but', 'mostly', 'I', 'like', 'language', 'that', 'sounds', 
'like', 'me.”', 'Yet', 'in', 'the', 'aftermath', 'of', 'Melania', 'Trump’s', 'speech,', 'campaign', 'advisers', 'have', 'fretted', 'that', 'they', 'do', 'not', 'know', 'for', 'sure', 'where', 'Mr.', 'Trump', 'gets', 'his', 'ideas', 'and', 'language', '—', 'whether', 'they', 'are', 'his', 'own,', 'in', 'other', 'words,', 'or', 'are', 'picked', 'up', 'from', 'Twitter,', 'television,', 'or,', 'say,', 'a', 'best', 'seller', 'by', 'Bill', 'O’Reilly', 'of', 'Fox', 'News,', 'a', 'commentator', 'whom', 'Mr.', 'Trump', 'likes.', 'Borrowing', 'or', 'adapting', 'may', 'not', 'always', 'be', 'tantamount', 'to', 'plagiarism,', 'but', 'several', 'Trump', 'advisers,', 'who', 'also', 'insisted', 'on', 'anonymity,', 'said', 'that', 'after', 'the', 'furor', 'over', 'Ms.', 'Trump’s', 'remarks,', 'the', 'campaign', 'cannot', 'allow', 'a', 'similar', 'blowup.', 'Ed', 'Rollins,', 'a', 'Republican', 'strategist', 'who', 'is', 'advising', 'a', '“super', 'PAC”', 'supporting', 'Mr.', 'Trump,', 'said', 'that', 'the', 'candidate', 'could', 'not', 'afford', 'any', 'mistakes.', '“His', 'speech', 'is', 'the', 'whole', 'game,”', 'Mr.', 'Rollins', 'said.', '“Viewers', 'have', 'to', 'watch', 'it', 'and', 'say,', '‘There', 'is', 'the', 'next', 'president', 'of', 'the', 'United', 'States.’”', 'In', 'the', 'interview,', 'Mr.', 'Trump', 'said', 'his', 'speech', 'would', 'center', 'on', 'his', 'vision', 'of', 'a', 'strong', 'and', 'secure', 'America', 'that', '“once', 'existed', 'and', 'no', 'longer', 'does,', 'but', 'can', 'again', 'under', 'a', 'Trump', 'administration.”', 'Latest', 'Election', 'Polls', '2016', 'Get', 'the', 'latest', 'national', 'and', 'state', 'polls', 'on', 'the', 'presidential', 'election', 'between', 'Hillary', 'Clinton', 'and', 'Donald', 'J.', 'Trump.', 'His', 'greatest', 'challenge,', 'he', 'said,', 'was', '“putting', 'myself', 'in', 'the', 'speech”', '—', 'discussing', 'his', 'upbringing', 'and', 'early', 'experiences', 'and', 'relating', 'them', 'to', 'the', 'hopes', 'and', 
'aspirations', 'of', 'other', 'Americans.', '“I', 'was', 'never', 'comfortable', 'getting', 'personal', 'about', 'my', 'family', 'because', 'I', 'thought', 'it', 'was', 'special', 'territory,”', 'Mr.', 'Trump', 'said,', 'glancing', 'at', 'a', 'picture', 'of', 'his', 'father', 'on', 'his', 'desk.', '“It', 'can', 'feel', 'exploitative', 'to', 'use', 'family', 'stories', 'to', 'win', 'votes.', 'And', 'I', 'had', 'a', 'very', 'happy', 'and', 'comfortable', 'life', 'growing', 'up.', 'I', 'had', 'a', 'great', 'relationship', 'with', 'my', 'father.', 'But', 'my', 'focus', 'needs', 'to', 'be', 'on', 'all', 'the', 'Americans', 'who', 'are', 'struggling.”', 'He', 'said', 'he', 'was', 'unsure', 'if', 'he', 'would', 'discuss', 'his', 'older', 'brother', 'Fred,', 'who', 'died', 'as', 'an', 'alcoholic', 'in', '1981', 'at', '43', '—', 'and', 'whom', 'he', 'has', 'described', 'as', 'an', 'example', 'of', 'how', 'destructive', 'choices', 'can', 'damage', 'lives', 'that', 'seem', 'golden.', '“Without', 'my', 'brother', 'Fred', 'I', 'might', 'not', 'be', 'here,”', 'Mr.', 'Trump', 'said.', '“He', 'was', 'really', 'smart,', 'great-looking.', 'I', 'don’t', 'drink', 'or', 'smoke', 'because', 'of', 'what', 'happened', 'to', 'him.', 'I', 'focused', 'on', 'building', 'my', 'business', 'and', 'making', 'good', 'choices.', 'I', 'may', 'talk', 'about', 'that,', 'but', 'I', 'don’t', 'know', 'if', 'I', 'should.”', 'Acceptance', 'speeches', 'seldom', 'seem', 'complete', 'without', 'anecdotes', 'about', 'personal', 'trials', 'and', 'triumphs:', 'Mitt', 'Romney,', 'trying', 'to', 'persuade', 'voters', 'to', 'see', 'him', 'as', 'more', 'than', 'a', 'rich', 'businessman,', 'devoted', 'about', 'a', 'fourth', 'of', 'his', '2012', 'address', 'to', 'his', 'parents’', 'unconditional', 'love,', 'his', 'Mormon', 'faith', 'and', 'reminiscences', 'about', 'watching', 'the', 'moon', 'landing.', 'In', '2008', ',', 'Barack', 'Obama', 'described', 'how', 'his', 'grandfather', 'benefited', 'from', 'the', 'G.I.', 
'Bill', 'and', 'how', 'his', 'mother', 'and', 'grandmother', 'taught', 'him', 'the', 'value', 'of', 'hard', 'work.', 'And', 'Bill', 'Clinton’s', '1992', 'speech', 'vividly', 'recalled', 'the', 'life', 'lessons', 'he', 'learned', 'from', 'his', 'mother', 'about', 'fighting', 'and', 'working', 'hard,', 'from', 'his', 'grandfather', 'about', 'racial', 'equality', '—', 'and', 'from', 'his', 'wife,', 'Hillary,', 'who,', 'Mr.', 'Clinton', 'said,', 'taught', 'him', 'that', 'every', 'child', 'could', 'learn.', 'Mr.', 'Clinton', 'finished', 'his', 'speech', 'with', 'a', 'now-famous', 'line', 'tying', 'his', 'Arkansas', 'hometown', 'to', 'the', 'American', 'dream.', '“I', 'end', 'tonight', 'where', 'it', 'all', 'began', 'for', 'me,”', 'he', 'said.', '“I', 'still', 'believe', 'in', 'a', 'place', 'called', 'Hope.”', 'James', 'Carville,', 'a', 'senior', 'strategist', 'for', 'Mr.', 'Clinton’s', '1992', 'campaign,', 'said', 'that', 'if', 'Mr.', 'Trump', 'hoped', 'to', 'change', 'the', 'minds', 'of', 'those', 'who', 'see', 'him', 'as', 'divisive', 'or', 'bigoted,', 'he', 'would', 'need', 'to', 'open', 'himself', 'up', 'to', 'voters', 'in', 'meaningfully', 'personal', 'ways', 'in', 'his', 'speech.', '“If', 'he’s', 'really', 'different', 'than', 'the', 'way', 'he', 'seems', 'in', 'television', 'interviews', 'or', 'at', 'his', 'rallies,', 'Thursday’s', 'speech', 'will', 'be', 'his', 'single', 'greatest', 'opportunity', 'to', 'show', 'voters', 'who', 'he', 'really', 'is,”', 'Mr.', 'Carville', 'said.', 'Paul', 'Manafort,', 'the', 'Trump', 'campaign', 'chairman,', 'said', 'that', 'Thursday’s', 'speech', 'would', 'be', '“very', 'much', 'a', 'reflection', 'of', 'Mr.', 'Trump’s', 'own', 'words,', 'as', 'opposed', 'to', 'remarks', 'that', 'others', 'create', 'and', 'the', 'campaign', 'puts', 'in', 'his', 'mouth.”', '“He’s', 'not', 'an', 'editor', '—', 'he', 'is', 'actually', 'the', 'creator', 'of', 'the', 'speech,”', 'Mr.', 'Manafort', 'said.', '“Mr.', 'Trump', 'has', 'given', 'Steve', 
'Miller', 'and', 'I', 'very', 'specific', 'directions', 'about', 'how', 'he', 'views', 'the', 'speech,', 'what', 'he', 'wants', 'to', 'communicate,', 'and', 'ways', 'to', 'tie', 'together', 'things', 'that', 'he', 'has', 'been', 'talking', 'about', 'in', 'the', 'campaign.', 'The', 'speech', 'will', 'end', 'up', 'being', 'tone-perfect', 'because', 'the', 'speech’s', 'words', 'will', 'be', 'his', 'words.”', 'Mr.', 'Trump', 'prefers', 'speaking', 'off', 'the', 'cuff', 'with', 'handwritten', 'notes,', 'a', 'style', 'that', 'has', 'proved', 'successful', 'at', 'his', 'rallies,', 'where', 'he', 'has', 'shown', 'a', 'talent', 'for', 'connecting', 'with', 'and', 'electrifying', 'crowds.', 'But', 'his', 'adjustment', 'to', 'formal', 'speeches', 'remains', 'a', 'work', 'in', 'progress:', 'He', 'does', 'not', 'always', 'sound', 'like', 'himself,', 'and', 'reading', 'from', 'a', 'text', 'can', 'detract', 'from', 'the', 'sense', 'of', 'authenticity', 'that', 'his', 'supporters', 'prize.', 'One', 'question', 'is', 'whether,', 'or', 'how', 'much,', 'he', 'will', 'ad-lib.', 'He', 'has', 'sometimes', 'seemed', 'unable', 'to', 'resist', 'deviating', 'from', 'prepared', 'remarks,', 'often', 'to', 'ill', 'effect', '—', 'ranting', 'about', 'a', 'mosquito', ',', 'or', 'joking', 'that', 'a', 'passing', 'airplane', 'was', 'from', 'Mexico', 'and', 'was', '“', 'getting', 'ready', 'to', 'attack', '.”', '“Ad-libbing', 'is', 'instinct,', 'all', 'instinct,”', 'Mr.', 'Trump', 'said.', '“I', 'thought', 'maybe', 'about', 'doing', 'a', 'freewheeling', 'speech', 'for', 'the', 'convention,', 'but', 'that', 'really', 'wouldn’t', 'work.', 'But', 'even', 'with', 'a', 'teleprompter,', 'the', 'speech', 'will', 'be', 'me', '—', 'my', 'ideas,', 'my', 'beliefs,', 'my', 'words.”'] Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] Extractive/present Keyphrases: ['speeches', 'plagiarism'] Abstractive/absent Keyphrases: ['2016 presidential election', 'donald trump', 'republican national convention,rnc', 'melania trump'] Other Metadata: {'id': 'ny0282969', 'categories': ['us', 'politics'], 'date': '2016/07/21', 'title': 'For Donald Trump’s Big Speech, an Added Pressure: No Echoes', 'abstract': 'CLEVELAND — Until Monday night, Donald J. Trump’s biggest concern about his convention speech was how much to reveal about himself and his family in an address that is often the most personal one a presidential candidate delivers. But the political firestorm over his wife’s speech , which borrowed passages from Michelle Obama’s convention remarks in 2008, raised the stakes exponentially. Mr. Trump’s speech on Thursday night cannot merely be his best ever. It also has to be bulletproof. By Tuesday morning, word had spread throughout his campaign that any language in Mr. Trump’s address even loosely inspired by speeches, essays, books or Twitter posts had to be either rewritten or attributed. Mr. Trump’s chief speechwriter, Stephen Miller, reassured colleagues that the acceptance speech was wholly original, according to two staff members who spoke with him and described those conversations on the condition of anonymity. Mr. Miller also told campaign aides that he had looked closely at passages that Mr. Trump had contributed — handwritten on unlined white pages — and was confident they contained no problems. (Mr. Miller declined an interview request.) Even so, one of the staff members downloaded plagiarism-detection software and ran a draft of the speech through the program. No red flags came up. The intense scrutiny of Mr. Trump’s words added new pressure to a speechwriting process that has been one of the most unpredictable and free-form in modern presidential campaigns. A month ago, Mr. 
Trump began giving dictation on themes for the speech, and he tossed ideas and phrases to Mr. Miller or other advisers on a daily basis. On printed copies of each draft, he circled passages he liked, crossed out or put question marks beside lines that he did not favor and frequently suggested new words or phrases. Image Stephen Miller, left, Mr. Trump’s chief speechwriter, and Paul Manafort, the campaign chairman, before an event for the candidate at the Trump SoHo hotel in New York last month. Credit Damon Winter/The New York Times “I’ve been amending the drafts big-league,” Mr. Trump said in an interview in his Manhattan office before the convention. “I get ideas from a lot of different places, a lot of smart people, but mostly I like language that sounds like me.” Yet in the aftermath of Melania Trump’s speech, campaign advisers have fretted that they do not know for sure where Mr. Trump gets his ideas and language — whether they are his own, in other words, or are picked up from Twitter, television, or, say, a best seller by Bill O’Reilly of Fox News, a commentator whom Mr. Trump likes. Borrowing or adapting may not always be tantamount to plagiarism, but several Trump advisers, who also insisted on anonymity, said that after the furor over Ms. Trump’s remarks, the campaign cannot allow a similar blowup. Ed Rollins, a Republican strategist who is advising a “super PAC” supporting Mr. Trump, said that the candidate could not afford any mistakes. “His speech is the whole game,” Mr. Rollins said. “Viewers have to watch it and say, ‘There is the next president of the United States.’” In the interview, Mr. Trump said his speech would center on his vision of a strong and secure America that “once existed and no longer does, but can again under a Trump administration.” Latest Election Polls 2016 Get the latest national and state polls on the presidential election between Hillary Clinton and Donald J. Trump. 
His greatest challenge, he said, was “putting myself in the speech” — discussing his upbringing and early experiences and relating them to the hopes and aspirations of other Americans. “I was never comfortable getting personal about my family because I thought it was special territory,” Mr. Trump said, glancing at a picture of his father on his desk. “It can feel exploitative to use family stories to win votes. And I had a very happy and comfortable life growing up. I had a great relationship with my father. But my focus needs to be on all the Americans who are struggling.” He said he was unsure if he would discuss his older brother Fred, who died as an alcoholic in 1981 at 43 — and whom he has described as an example of how destructive choices can damage lives that seem golden. “Without my brother Fred I might not be here,” Mr. Trump said. “He was really smart, great-looking. I don’t drink or smoke because of what happened to him. I focused on building my business and making good choices. I may talk about that, but I don’t know if I should.” Acceptance speeches seldom seem complete without anecdotes about personal trials and triumphs: Mitt Romney, trying to persuade voters to see him as more than a rich businessman, devoted about a fourth of his 2012 address to his parents’ unconditional love, his Mormon faith and reminiscences about watching the moon landing. In 2008 , Barack Obama described how his grandfather benefited from the G.I. Bill and how his mother and grandmother taught him the value of hard work. And Bill Clinton’s 1992 speech vividly recalled the life lessons he learned from his mother about fighting and working hard, from his grandfather about racial equality — and from his wife, Hillary, who, Mr. Clinton said, taught him that every child could learn. Mr. Clinton finished his speech with a now-famous line tying his Arkansas hometown to the American dream. “I end tonight where it all began for me,” he said. 
“I still believe in a place called Hope.” James Carville, a senior strategist for Mr. Clinton’s 1992 campaign, said that if Mr. Trump hoped to change the minds of those who see him as divisive or bigoted, he would need to open himself up to voters in meaningfully personal ways in his speech. “If he’s really different than the way he seems in television interviews or at his rallies, Thursday’s speech will be his single greatest opportunity to show voters who he really is,” Mr. Carville said. Paul Manafort, the Trump campaign chairman, said that Thursday’s speech would be “very much a reflection of Mr. Trump’s own words, as opposed to remarks that others create and the campaign puts in his mouth.” “He’s not an editor — he is actually the creator of the speech,” Mr. Manafort said. “Mr. Trump has given Steve Miller and I very specific directions about how he views the speech, what he wants to communicate, and ways to tie together things that he has been talking about in the campaign. The speech will end up being tone-perfect because the speech’s words will be his words.” Mr. Trump prefers speaking off the cuff with handwritten notes, a style that has proved successful at his rallies, where he has shown a talent for connecting with and electrifying crowds. But his adjustment to formal speeches remains a work in progress: He does not always sound like himself, and reading from a text can detract from the sense of authenticity that his supporters prize. One question is whether, or how much, he will ad-lib. He has sometimes seemed unable to resist deviating from prepared remarks, often to ill effect — ranting about a mosquito , or joking that a passing airplane was from Mexico and was “ getting ready to attack .” “Ad-libbing is instinct, all instinct,” Mr. Trump said. “I thought maybe about doing a freewheeling speech for the convention, but that really wouldn’t work. 
But even with a teleprompter, the speech will be me — my ideas, my beliefs, my words.”', 'keyword': '2016 Presidential Election;Donald Trump;Republican National Convention,RNC;Speeches;Plagiarism;Melania Trump'} ----------- Sample from validation data split Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata'] Tokenized Document: ['Jack', 'Sock', 'Picks', 'Up', 'Where', 'He', 'Left', 'Off', 'at', 'Last', 'Year’s', 'U.S.', 'Open', 'When', 'we', 'last', 'saw', 'Jack', 'Sock', 'at', 'the', 'United', 'States', 'Open', ',', 'a', 'year', 'ago', 'September,', 'he', 'was', 'holding', 'a', 'trophy', 'over', 'his', 'head', 'and', '—', 'not', 'yet', '19', 'and', 'a', 'newly', 'declared', 'professional', '—', 'being', 'hailed', 'a', 'Grand', 'Slam', 'champion.', 'Granted,', 'as', 'major', 'titles', 'go,', 'mixed', 'doubles', '(with', 'Melanie', 'Oudin)', 'was', 'akin', 'to', 'a', 'serving', 'of', 'cheese', 'and', 'crackers,', 'with', 'the', 'steak,', 'or', 'singles', 'title,', 'still', 'lodged', 'in', 'the', 'freezer.', 'But', 'as', 'Sock', 'had', 'the', 'previous', 'year', 'also', 'won', 'the', 'junior', 'boys', 'title', 'in', 'Flushing', 'Meadows', 'and,', 'with', 'legend', 'holding', 'that', 'he', 'had', 'never', 'lost', 'a', 'high', 'school', 'match,', 'it', 'was', 'natural', '—', 'at', 'least', 'hopeful', '—', 'to', 'think', 'he', 'might', 'have', 'a', 'healthy', 'share', 'of', 'winning', 'genes', 'to', 'go', 'with', 'his', 'booming', 'serve.', 'And', 'his', 'name,', 'for', 'goodness', 'sakes,', 'is', 'Jack', 'Sock;', 'of', 'Lincoln,', 'Neb.,', 'a', 'proud', 'Cornhusker.', 'Does', 'it', 'get', 'any', 'more', 'wholesome', 'and', 'hearty', 'for', 'a', 'country', 'in', 'a', 'continuous', 'search', 'for', 'its', 'next', 'men’s', 'star', 'in', 'this', 'athletically', 'enhanced', 'smash-mouth', 'era?', 'So', 'after', 'Sock', 'introduced', 'himself', 'to', 'Florian', 'Mayer,', 'a', 'German', 'seeded', 
'22nd,', 'with', 'a', 'sizzling', 'ace', 'down', 'the', 'T', 'and', 'held', 'serve', 'to', 'begin', 'a', 'first-round', 'match', 'Monday', 'on', 'the', 'grandstand', 'court,', 'fans', 'responded', 'with', 'a', 'chant', 'of', '“Let’s', 'Go', 'Sock!”', 'Forgetting', 'for', 'the', 'moment', 'that', 'New', 'York', 'is', 'a', 'Yankees', 'town,', 'it', 'was', 'better', 'than', 'one', 'alternative', '—', 'Sock', 'it', 'to', 'him', '—', 'and', 'completely', 'understandable', 'as', 'Sock', 'was', 'in', 'the', 'process', 'of', 'feeding', 'America’s', 'slam', 'its', 'first', 'helping', 'of', 'nationalistic', 'fervor', 'by', 'overpowering', 'Mayer,', 'who', 'retired', 'while', 'trailing,', '6-3,', '6-2,', '3-2.', 'One', 'or', 'two', 'more', 'performances', 'like', 'this', 'and', 'we', 'can', 'expect', 'a', 'slew', 'of', 'word', 'play', 'headlines,', 'beginning', 'with', 'Sock', 'and', 'Awe.', 'It', 'doesn’t', 'take', 'much', 'to', 'fire', 'up', 'the', 'Next', 'Great', 'American', 'news', 'media', 'machine,', 'not', 'that', 'Sock', 'is', 'lacking', 'in', 'confidence', 'or', 'ambition.', '“I', 'feel', 'like', 'my', 'game', 'is', 'right', 'on', 'the', 'verge', 'of', 'going', 'to', 'the', 'next', 'level,”', 'he', 'said', 'after', 'winning', 'his', 'fourth', 'tour', 'match', 'of', '2012', 'against', 'six', 'losses.', 'To', 'explain', 'what', 'he', 'meant', 'of', 'taking', 'his', 'game', 'to', 'the', '“next', 'level,”', 'put', 'it', 'this', 'way:', 'from', 'his', 'current', 'ranking,', '243,', 'there', 'are', 'many', 'stops', 'to', 'make', 'on', 'the', 'ride', 'to', 'the', 'dizzying', 'heights', 'where', 'Roger', 'Federer', 'and', 'elite', 'company', 'reside', '—', 'beginning', 'with', 'leaping', 'into', 'position', 'near', 'another', 'young', 'and', 'hopeful', 'Yank,', 'Ryan', 'Harrison,', 'currently', 'No.', '61.', 'On', 'the', 'scale', 'of', 'youthful', 'and', 'potential', 'men’s', 'tour', 'heirs,', 'the', '21-year-old', 'Milos', 'Raonic', 'of', 'Canada', 'is', 'the', 'closest', 
'to', 'a', 'major', 'breakthrough,', 'though', 'it', 'is', 'also', 'difficult', 'to', 'define', 'what', 'even', 'that', 'means', 'when', 'three', 'players', '—', 'Federer,', 'Rafael', 'Nadal', 'and', 'Novak', 'Djokovic', '—', 'have', 'won', '29', 'of', 'the', 'last', '30', 'slam', 'titles', 'and', 'show', 'little', 'inclination', 'of', 'easing', 'their', 'chokehold.', 'Compared', 'with', 'what', 'the', 'more', 'promising', 'newbies', 'face', 'these', 'days,', 'the', 'emergent', 'superstars', 'of', 'yore', 'practically', 'took', 'their', 'Grand', 'Slam', 'treats', 'by', 'merely', 'growing', 'tall', 'enough', 'to', 'reach', 'into', 'the', 'cookie', 'jar.', 'Boris', 'Becker', 'won', 'Wimbledon', 'as', 'a', '17-year-old', 'mop-haired', 'redhead.', 'John', 'McEnroe', 'and', 'Pete', 'Sampras', 'broke', 'through', 'in', 'New', 'York', 'at', '20', 'and', '19.', 'Into', 'the', '21st', 'century,', 'Nadal', 'began', 'his', 'domination', 'of', 'the', 'French', 'Open', 'at', '19,', 'Djokovic', 'won', 'the', 'Australian', 'Open', 'at', '21', 'and', 'Federer', 'sank', 'to', 'his', 'knees', 'at', 'Wimbledon', 'weeks', 'before', 'turning', '22.', 'These', 'days,', 'it', 'is', 'unfathomable', 'to', 'think', 'of', 'a', 'skinny', 'and', 'moon-balling', 'Michael', 'Chang', 'winning', 'the', 'French', 'Open', 'at', '17,', 'as', 'he', 'did', 'in', '1989,', 'or', 'a', 'teenager', 'winning', 'any', 'of', 'the', 'slams.', '“I', 'don’t', 'think', 'that’s', 'going', 'to', 'be', 'the', 'case', 'any', 'time', 'soon', 'because', 'this', 'game', 'is', 'so', 'physical', 'now', 'and', 'people', 'need', 'to', 'grow', 'into', 'their', 'body,”', 'said', 'John', 'Isner,', 'who', 'at', '27', 'has', 'reason', 'to', 'believe', 'that', 'his', 'best', 'results,', 'whatever', 'they', 'may', 'be,', 'are', 'still', 'ahead', 'of', 'him.', 'At', '31,', 'Federer,', 'who', 'absurdly', 'has', 'not', 'missed', 'a', 'Grand', 'Slam', 'tournament', 'in', '13', 'years,', 'may', 'be', 'the', 'best-conditioned', 'of', 
'all.', 'Andy', 'Murray,', 'at', '25,', 'is', 'thought', 'to', 'be', 'on', 'the', 'verge', 'of', 'his', 'prime.', 'It', 'is', 'mind-boggling', 'to', 'think', 'that', 'Bjorn', 'Borg,', 'McEnroe,', 'Becker', 'and', 'others', 'were', 'playing', 'on', 'fumes,', 'their', 'best', 'matches', 'behind', 'them,', 'by', 'their', 'mid-20s.', 'A', 'no-kidding', 'adult’s', 'tour', 'that', 'provides', 'longevity', 'and', 'personal', 'context', 'is', 'so', 'much', 'richer', 'than', 'the', 'alternative.', 'But', 'given', 'such', 'dramatic', 'career', 'clock', 'changes,', 'patience', 'may', 'be', 'a', 'most', 'valuable', 'virtue', 'for', 'players', 'like', 'Raonic,', 'Harrison', 'and', 'Bernard', 'Tomic', 'of', 'Australia.', '“Those', 'guys,', 'it', 'might', 'take', 'them', 'a', 'little', 'while', 'to', 'see', 'their', 'very,', 'very', 'best', 'results,', 'but', 'they’re', 'certainly', 'not', 'doing', 'so', 'bad', 'right', 'now,”', 'said', 'Isner,', 'who', 'didn’t', 'hesitate', 'to', 'include', 'Sock,', 'calling', 'him', '“a', 'very', 'good', 'player.”', 'Sock', 'is', 'a', 'strapping', '’Husker,', '6', 'feet', '1', 'inch,', '180', 'pounds,', 'but', 'he', 'was', 'set', 'back', 'physically', 'in', 'March', 'by', 'surgery', 'to', 'repair', 'a', 'torn', 'abdominal', 'muscle.', 'In', 'a', 'brilliant', 'stroke,', 'he', 'has', 'been', 'working', 'in', 'Las', 'Vegas', 'with', 'the', 'trainer', 'Gil', 'Reyes,', 'who', 'whipped', 'the', 'once-profligate', 'Andre', 'Agassi', 'into', 'shape.', 'He', 'has', 'hired', 'the', 'former', 'Swedish', 'player,', 'Joakim', 'Nystrom,', 'to', 'help', 'him', 'play', 'a', 'more', 'patient', 'game.', 'On', 'today’s', 'altered', 'career', 'time', 'clock,', 'there', 'is', 'no', 'choice', 'but', 'to', 'wait', 'one’s', 'turn', 'and', 'see', 'what', 'happens.', 'In', 'a', 'microcosm', 'of', 'that', 'strategy,', 'Sock', 'fell', 'behind,', '0-40,', 'while', 'serving', 'at', '4-2', 'in', 'the', 'first', 'set,', 'rallied', 'to', 'deuce,', 'kept', 'his', 'cool', 'as', 
'Mayer', 'challenged', 'two', 'line', 'calls', 'and', 'won', 'both,', 'and', 'wound', 'up', 'winning', 'the', 'long', 'game', 'with', 'the', 'help', 'of', 'his', 'own', 'challenge', 'of', 'an', 'out', 'call.', 'He', 'was', 'never', 'threatened', 'after', 'that,', 'cranking', 'his', 'first', 'serve', 'as', 'high', 'as', '134', 'miles', 'per', 'hour,', 'winning', '17', 'of', '25', 'second-service', 'points', 'and', 'shrugging', 'off', 'the', 'question', 'of', 'when', 'the', 'Next', 'Great', 'American', 'will', 'arrive', 'as', 'easily', 'as', 'he', 'did', 'Mayer.', '“Until', 'the', 'results', 'are', 'there,', 'until', 'the', 'rankings', 'and', 'everything', 'is', 'there,', 'not', 'a', 'different', 'answer', 'to', 'give,”', 'he', 'said.', 'Give', 'him', 'time,', 'in', 'other', 'words.', 'By', 'today’s', 'standards,', 'he’s', 'got', 'a', 'few', 'years', 'before', 'we', 'have', 'to', 'stop', 'asking.'] Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] Extractive/present Keyphrases: [] Abstractive/absent Keyphrases: ['tennis', 'united 
states open (tennis)', 'sock jack'] Other Metadata: {'id': 'ny0125215', 'categories': ['sports', 'tennis'], 'date': '2012/08/28', 'title': 'Jack Sock Picks Up Where He Left Off at Last Year’s U.S. Open', 'abstract': 'When we last saw Jack Sock at the United States Open , a year ago September, he was holding a trophy over his head and — not yet 19 and a newly declared professional — being hailed a Grand Slam champion. Granted, as major titles go, mixed doubles (with Melanie Oudin) was akin to a serving of cheese and crackers, with the steak, or singles title, still lodged in the freezer. But as Sock had the previous year also won the junior boys title in Flushing Meadows and, with legend holding that he had never lost a high school match, it was natural — at least hopeful — to think he might have a healthy share of winning genes to go with his booming serve. And his name, for goodness sakes, is Jack Sock; of Lincoln, Neb., a proud Cornhusker. Does it get any more wholesome and hearty for a country in a continuous search for its next men’s star in this athletically enhanced smash-mouth era? So after Sock introduced himself to Florian Mayer, a German seeded 22nd, with a sizzling ace down the T and held serve to begin a first-round match Monday on the grandstand court, fans responded with a chant of “Let’s Go Sock!” Forgetting for the moment that New York is a Yankees town, it was better than one alternative — Sock it to him — and completely understandable as Sock was in the process of feeding America’s slam its first helping of nationalistic fervor by overpowering Mayer, who retired while trailing, 6-3, 6-2, 3-2. One or two more performances like this and we can expect a slew of word play headlines, beginning with Sock and Awe. It doesn’t take much to fire up the Next Great American news media machine, not that Sock is lacking in confidence or ambition. 
“I feel like my game is right on the verge of going to the next level,” he said after winning his fourth tour match of 2012 against six losses. To explain what he meant of taking his game to the “next level,” put it this way: from his current ranking, 243, there are many stops to make on the ride to the dizzying heights where Roger Federer and elite company reside — beginning with leaping into position near another young and hopeful Yank, Ryan Harrison, currently No. 61. On the scale of youthful and potential men’s tour heirs, the 21-year-old Milos Raonic of Canada is the closest to a major breakthrough, though it is also difficult to define what even that means when three players — Federer, Rafael Nadal and Novak Djokovic — have won 29 of the last 30 slam titles and show little inclination of easing their chokehold. Compared with what the more promising newbies face these days, the emergent superstars of yore practically took their Grand Slam treats by merely growing tall enough to reach into the cookie jar. Boris Becker won Wimbledon as a 17-year-old mop-haired redhead. John McEnroe and Pete Sampras broke through in New York at 20 and 19. Into the 21st century, Nadal began his domination of the French Open at 19, Djokovic won the Australian Open at 21 and Federer sank to his knees at Wimbledon weeks before turning 22. These days, it is unfathomable to think of a skinny and moon-balling Michael Chang winning the French Open at 17, as he did in 1989, or a teenager winning any of the slams. “I don’t think that’s going to be the case any time soon because this game is so physical now and people need to grow into their body,” said John Isner, who at 27 has reason to believe that his best results, whatever they may be, are still ahead of him. At 31, Federer, who absurdly has not missed a Grand Slam tournament in 13 years, may be the best-conditioned of all. Andy Murray, at 25, is thought to be on the verge of his prime. 
It is mind-boggling to think that Bjorn Borg, McEnroe, Becker and others were playing on fumes, their best matches behind them, by their mid-20s. A no-kidding adult’s tour that provides longevity and personal context is so much richer than the alternative. But given such dramatic career clock changes, patience may be a most valuable virtue for players like Raonic, Harrison and Bernard Tomic of Australia. “Those guys, it might take them a little while to see their very, very best results, but they’re certainly not doing so bad right now,” said Isner, who didn’t hesitate to include Sock, calling him “a very good player.” Sock is a strapping ’Husker, 6 feet 1 inch, 180 pounds, but he was set back physically in March by surgery to repair a torn abdominal muscle. In a brilliant stroke, he has been working in Las Vegas with the trainer Gil Reyes, who whipped the once-profligate Andre Agassi into shape. He has hired the former Swedish player, Joakim Nystrom, to help him play a more patient game. On today’s altered career time clock, there is no choice but to wait one’s turn and see what happens. In a microcosm of that strategy, Sock fell behind, 0-40, while serving at 4-2 in the first set, rallied to deuce, kept his cool as Mayer challenged two line calls and won both, and wound up winning the long game with the help of his own challenge of an out call. He was never threatened after that, cranking his first serve as high as 134 miles per hour, winning 17 of 25 second-service points and shrugging off the question of when the Next Great American will arrive as easily as he did Mayer. “Until the results are there, until the rankings and everything is there, not a different answer to give,” he said. Give him time, in other words. 
By today’s standards, he’s got a few years before we have to stop asking.', 'keyword': 'Tennis;United States Open (Tennis);Sock Jack'} ----------- Sample from test data split Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata'] Tokenized Document: ['World', 'records', 'no', 'joke', 'to', 'frustrated', 'Pakistanis', 'ISLAMABAD', '-', 'One', 'young', 'contender', 'created', 'the', 'world’s', 'largest', 'sequin', 'mosaic', 'using', '325,000', 'of', 'the', 'sparkly', 'discs.', 'Two', 'other', 'youths', 'achieved', '123', 'consecutive', 'badminton', 'passes', 'in', 'one', 'minute.', 'And', '1,450', 'participants', 'broke', 'the', 'record', 'for', 'the', 'most', 'people', 'arm', 'wrestling.', 'Such', 'are', 'the', 'skills', 'that', 'Guinness', 'World', 'Records', 'are', 'made', 'of', 'in', 'Pakistan,', 'where', 'thousands', 'of', 'young', 'people', 'are', 'groomed', 'to', 'establish', 'their', 'unique', 'feats', 'for', 'posterity.', 'Last', 'week,', 'the', 'contestants', 'came', 'together', 'for', 'the', 'annual', 'Punjab', 'Youth', 'Festival', 'to', 'show', 'their', 'stuff', '—', 'many', 'in', 'athletics,', 'but', 'others', 'in', 'downright', 'quirky', 'displays,', 'including', 'one', 'young', 'boy', 'who', 'achieved', 'fame', 'by', 'kicking', '50', 'coconuts', 'from', 'on', 'top', 'of', 'the', 'heads', 'of', 'a', 'row', 'of', 'people.', 'It', 'seems', 'Pakistan', 'has', 'become', 'a', 'world', 'record-creating', 'machine,', 'with', 'the', 'coordinated', 'effort', 'reaping', 'an', 'impressive', '23', 'world', 'records,', 'event', 'organizers', 'boasted.', 'The', 'push', 'for', 'inclusion', 'of', 'Pakistanis', 'in', 'the', 'venerable', 'Guinness', 'World', 'Records', 'entries', '(which', 'began', 'in', 'book', 'form', 'in', '1955)', 'stems', 'in', 'part', 'from', 'festival', 'organizers’', 'desire', 'to', 'boost', 'the', 'image', 'of', 'a', 'country', 'often', 'associated', 'with', 'militancy,', 
'religious', 'strife', 'and', 'economic', 'decline.', 'There', 'is', 'a', 'patriotic', 'element,', 'as', 'well:', 'Last', 'October,', 'for', 'instance,', '42,813', 'Pakistanis', 'got', 'together', 'in', 'a', 'Lahore', 'hockey', 'stadium', 'to', 'belt', 'out', 'the', 'national', 'anthem', 'and', 'create', 'yet', 'another', 'world', 'record', 'for', 'the', 'most', 'people', 'singing', 'their', 'country’s', 'anthem.', 'Days', 'later,', 'another', '24,200', 'people', 'held', 'green', 'and', 'white', 'boxes', '—', 'the', 'colors', 'of', 'the', 'national', 'flag', 'of', 'Pakistan', '—', 'to', 'set', 'the', 'world', 'record', 'for', 'creating', 'the', 'largest', 'human', 'flag.', 'Although', 'some', 'of', 'the', 'records', 'might', 'seem', 'amusing', 'to', 'others', '—', 'coconut', 'kicking', 'champ', 'Mohammad', 'Rashid', 'of', 'Karachi', 'last', 'week', 'claimed', 'his', 'fourth', 'world', 'record', 'by', 'breaking', '34', 'pine', 'boards', 'in', '32', 'seconds', 'with', 'his', 'head', '—', 'the', 'competitions', 'were', 'no', 'laughing', 'matter', 'to', 'participants.', 'Usman', 'Anwar,', 'director', 'of', 'the', 'Punjab', 'Youth', 'Festival,', 'explained', 'that', 'the', 'kids', 'have', 'been', 'training', 'for', 'eight', 'months.', '“We', 'started', 'at', 'the', 'neighborhood', 'and', 'village', 'level', 'so', 'that', 'children', 'could', 'come', 'out', 'and', 'participate,”', 'said', 'Anwar.', '“Our', 'main', 'objective', 'was', 'to', 'inculcate', 'interest', 'for', 'sports', 'in', 'the', 'public.”', 'Young', 'people', 'from', 'over', '55,000', 'neighborhood', 'and', 'village', 'councils', 'vied', 'for', 'a', 'chance', 'to', 'compete', 'in', 'the', 'games.', '“We', 'were', 'able', 'to', 'select', 'the', 'best', 'of', 'the', 'best', 'to', 'train', 'for', 'the', 'world', 'records,”', 'said', 'Anwar.', 'Because', 'of', 'terrorism,', 'political', 'upheaval', 'and', 'widespread', 'unemployment,', 'many', 'young', 'people', 'appear', 'to', 'have', 'little', 'hope', 'for', 
'the', 'future,', 'says', 'Hafeez', 'Rehman,', 'a', 'professor', 'in', 'the', 'anthropology', 'department', 'at', 'Quaid-i-Azam', 'University', 'in', 'the', 'capital,', 'Islamabad.', 'Sports', 'competitions,', 'Rehman', 'said,', 'create', 'an', 'opportunity', 'for', 'youth', 'to', 'excel', 'personally', 'and', 'also', 'to', 'improve', 'Pakistan’s', 'image.', '“We', 'have', 'energetic', 'youth.', 'Pakistan', 'has', 'more', 'than', '55', 'million', 'young', 'people.', 'It', 'becomes', 'an', 'asset', 'for', 'the', 'country,”', 'he', 'added.', 'The', 'festival', 'itself', 'has', 'become', 'part', 'of', 'the', 'record-setting', 'mania.', 'It', 'was', 'recognized', 'for', 'having', 'more', 'participants', '—', '3.3', 'million,', 'most', 'of', 'whom', 'registered', 'online,', 'according', 'to', 'Anwar', '—', 'constituting', 'a', 'world', 'record', 'for', 'sporting', 'events.'] Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] Extractive/present Keyphrases: ['pakistan', 'guinness'] Abstractive/absent Keyphrases: ['india'] Other Metadata: {'id': 'jp0000001', 'categories': ['asia-pacific', 'offbeat-asia-pacific'], 'date': '2013/03/17', 'title': 'World records no joke to frustrated Pakistanis ', 'abstract': 'ISLAMABAD - One young contender created the world’s largest sequin mosaic using 325,000 of the sparkly discs. Two other youths achieved 123 consecutive badminton passes in one minute. And 1,450 participants broke the record for the most people arm wrestling. 
Such are the skills that Guinness World Records are made of in Pakistan, where thousands of young people are groomed to establish their unique feats for posterity. Last week, the contestants came together for the annual Punjab Youth Festival to show their stuff — many in athletics, but others in downright quirky displays, including one young boy who achieved fame by kicking 50 coconuts from on top of the heads of a row of people. It seems Pakistan has become a world record-creating machine, with the coordinated effort reaping an impressive 23 world records, event organizers boasted. The push for inclusion of Pakistanis in the venerable Guinness World Records entries (which began in book form in 1955) stems in part from festival organizers’ desire to boost the image of a country often associated with militancy, religious strife and economic decline. There is a patriotic element, as well: Last October, for instance, 42,813 Pakistanis got together in a Lahore hockey stadium to belt out the national anthem and create yet another world record for the most people singing their country’s anthem. Days later, another 24,200 people held green and white boxes — the colors of the national flag of Pakistan — to set the world record for creating the largest human flag. Although some of the records might seem amusing to others — coconut kicking champ Mohammad Rashid of Karachi last week claimed his fourth world record by breaking 34 pine boards in 32 seconds with his head — the competitions were no laughing matter to participants. Usman Anwar, director of the Punjab Youth Festival, explained that the kids have been training for eight months. “We started at the neighborhood and village level so that children could come out and participate,” said Anwar. “Our main objective was to inculcate interest for sports in the public.” Young people from over 55,000 neighborhood and village councils vied for a chance to compete in the games. 
“We were able to select the best of the best to train for the world records,” said Anwar. Because of terrorism, political upheaval and widespread unemployment, many young people appear to have little hope for the future, says Hafeez Rehman, a professor in the anthropology department at Quaid-i-Azam University in the capital, Islamabad. Sports competitions, Rehman said, create an opportunity for youth to excel personally and also to improve Pakistan’s image. “We have energetic youth. Pakistan has more than 55 million young people. It becomes an asset for the country,” he added. The festival itself has become part of the record-setting mania. It was recognized for having more participants — 3.3 million, most of whom registered online, according to Anwar — constituting a world record for sporting events.', 'keyword': 'india;pakistan;guinness'} ----------- ``` ### Keyphrase Extraction ```python from datasets import load_dataset # get the dataset only for keyphrase extraction dataset = load_dataset("midas/kptimes", "extraction") print("Samples for Keyphrase Extraction") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Document BIO Tags: ", train_sample["doc_bio_tags"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Document BIO Tags: ", validation_sample["doc_bio_tags"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Document BIO Tags: ", 
test_sample["doc_bio_tags"]) print("\n-----------\n") ``` ### Keyphrase Generation ```python # get the dataset only for keyphrase generation dataset = load_dataset("midas/kptimes", "generation") print("Samples for Keyphrase Generation") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"]) print("\n-----------\n") ``` ## Citation Information ``` @inproceedings{gallina2019kptimes, title={KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents}, author={Gallina, Ygor and Boudin, Florian and Daille, B{\'e}atrice}, booktitle={Proceedings of the 12th International Conference on Natural Language Generation}, pages={130--135}, year={2019} } ``` ## Contributions Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), 
[@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset
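The `doc_bio_tags` sequences shown in the samples above use `B`/`I`/`O` to mark where present keyphrases occur in `document`. As a stdlib-only sketch (a hypothetical helper, not part of the dataset loader), the tags can be decoded back into keyphrases like this:

```python
def decode_bio(tokens, tags):
    """Collect B/I runs from a BIO-tagged token sequence into keyphrases."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                    # a new keyphrase starts on this token
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:      # continue the currently open keyphrase
            current.append(token)
        else:                             # "O" (or a stray "I") closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# Mirroring the test-split sample above, where single tokens are tagged "B"
tokens = ["Such", "are", "the", "skills", "that", "Guinness", "World",
          "Records", "are", "made", "of", "in", "Pakistan"]
tags = ["O", "O", "O", "O", "O", "B", "O", "O", "O", "O", "O", "O", "B"]
print(decode_bio(tokens, tags))  # ['Guinness', 'Pakistan']
```

Multi-token keyphrases come out joined with spaces, e.g. tags `["B", "I"]` over `["New", "York"]` yield `"New York"`.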
nateraw/imagenette
2021-09-26T08:00:07.000Z
[ "region:us" ]
nateraw
Imagenette is a subset of 10 easily classified classes from the Imagenet dataset. It was originally prepared by Jeremy Howard of FastAI. The objective behind putting together a small version of the Imagenet dataset was mainly that running new ideas/algorithms/experiments on the whole Imagenet takes a lot of time. This version of the dataset allows researchers/practitioners to quickly try out ideas and share them with others. The dataset comes in three variants: * Full size * 320 px * 160 px Note: The v2 config corresponds to the new 70/30 train/valid split (released on Dec 6, 2019).
@misc{imagenette, author = "Jeremy Howard", title = "imagenette", url = "https://github.com/fastai/imagenette/" }
null
2
4
Entry not found
phongdtd/youtube_casual_audio
2022-11-01T13:23:24.000Z
[ "task_categories:automatic-speech-recognition", "source_datasets:extended|youtube", "region:us" ]
phongdtd
\
null
null
3
4
--- multilinguality: vi: - 190K<n<200K source_datasets: - extended|youtube task_categories: - automatic-speech-recognition task_ids: [] pretty_name: Youtube Casual Audio annotations_creators: - crowdsourced language_creators: - datlq language: - vi license: - cc0-1.0 --- # Dataset Card for Youtube Casual Audio ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary [Needs More Information] ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Vietnamese ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file (`file_path`), the decoded audio (`audio`) and its transcription (`script`).
` { 'file_path': 'audio/_1OsFqkFI38_34.304_39.424.wav', 'script': 'Ik vind dat een dubieuze procedure.', 'audio': {'path': 'audio/_1OsFqkFI38_34.304_39.424.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000} ` ### Data Fields file_path: The path to the audio file audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. script: The sentence the user was prompted to speak ### Data Splits The speech material has been subdivided into train, validation and test portions, all of which contain data that has been reviewed and deemed of high quality. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators?
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information] ### Contributions Thanks to [@datlq](https://github.com/datlq98) for adding this dataset.
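The card's warning about preferring `dataset[0]["audio"]` over `dataset["audio"][0]` comes down to lazy, per-item decoding. A toy stand-in (deliberately not the real `datasets` internals) that counts decodes makes the difference visible:

```python
class LazyAudioColumn:
    """Toy stand-in for an audio column that is decoded on access."""

    def __init__(self, paths):
        self.paths = paths
        self.decoded = 0  # how many files we have "decoded" so far

    def __getitem__(self, i):
        self.decoded += 1  # pretend to read and resample one file
        return {"path": self.paths[i], "sampling_rate": 16000}


paths = [f"audio/clip_{i}.wav" for i in range(1000)]
col = LazyAudioColumn(paths)

# row-first access: decode exactly the one sample you asked for
sample = col[0]
print(col.decoded)  # 1

# column-first access: materialising the whole column decodes everything
all_audio = [col[i] for i in range(len(paths))]
print(col.decoded)  # 1001
```

Row-first access pays for a single decode; materialising the column pays for all of them, which is why the card recommends querying the sample index first.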
princeton-nlp/datasets-for-simcse
2021-09-03T12:44:29.000Z
[ "region:us" ]
princeton-nlp
null
null
null
1
4
Entry not found
rbawden/DiaBLa
2022-10-25T14:21:10.000Z
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:fr", "license:cc-by-sa-4.0", "region:us" ]
rbawden
null
null
null
0
4
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en - fr license: - cc-by-sa-4.0 multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - original task_categories: - translation task_ids: [] pretty_name: DiaBLa language_bcp47: - en-UK - fr-FR --- # Dataset Card for DiaBLa: Bilingual dialogue parallel evaluation set ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [almanach.inria.fr/software_and_resources/custom/DiaBLa-en.html](http://almanach.inria.fr/software_and_resources/custom/DiaBLa-en.html) - **Repository:** [github.com/rbawden/DiaBLa-dataset](https://github.com/rbawden/DiaBLa-dataset) - **Paper:** [Bawden et al. (2021). DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation. Language Resources and Evaluation(55). Pages 635–660. Springer Verlag. 
10.1007/s10579-020-09514-4.](https://hal.inria.fr/hal-03021633) - **Point of contact:** rachel.bawden[at]inria.fr ### Dataset Summary DiaBLa is an English-French dataset for the evaluation of Machine Translation (MT) for informal, written bilingual dialogue. It contains 144 spontaneous dialogues (5,700+ sentences) between native English and French speakers, mediated by one of two neural MT systems in a range of role-play settings. See below for some basic statistics. The dialogues are accompanied by fine-grained sentence-level judgments of MT quality, produced by the dialogue participants themselves, as well as by manually normalised versions and reference translations produced a posteriori. See here for information about evaluation. The motivation for the corpus is two-fold, to provide: - a unique resource for evaluating MT models for dialogue (i.e. in context) - a corpus for the analysis of MT-mediated communication ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English (mainly UK) and French ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 37 MB - **Number of parallel utterances:** 5748 Each example is highly annotated and is associated with dialogue context.
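Given the per-utterance fields documented in this card (`orig`, `mt`, and a `dialogue_history` list whose items carry the same fields), a minimal sketch of collecting source/MT sentence pairs from a single example (a hypothetical helper; only the field names come from this card):

```python
def utterance_pairs(example):
    """Return (source, machine-translation) pairs for the current utterance
    followed by its dialogue history."""
    pairs = [(example["orig"], example["mt"])]
    for utt in example.get("dialogue_history", []):
        pairs.append((utt["orig"], utt["mt"]))
    return pairs


# A pared-down example mirroring the structure shown in this card
example = {
    "orig": "Are you blaming me for this?",
    "mt": "Tu m'en veux pour ça ?",
    "dialogue_history": [
        {"orig": "We appear to have stopped moving.",
         "mt": "On semble avoir arrêté de bouger."},
    ],
}

for src, hyp in utterance_pairs(example):
    print(f"{src} -> {hyp}")
```

The same loop works on real examples loaded with `datasets`, since each history item repeats the utterance-level fields.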
An example from the test set looks as follows (only the first and last utterances of the dialogue history are shown for readability purposes): ``` { "id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_25", "mt": "Tu m'en veux pour \u00e7a ?", "norm": "", "orig": "Are you blaming me for this?", "ref": "C'est moi que vous critiquez pour \u00e7a\u00a0?", "utterance_meta": { "eval_judgment": "medium", "eval_verbatim": "", "eval_problems": [ "coherence" ], "lang": "english" }, "dialogue_meta": { "start_time": "2018-04-25T16:20:36.087170", "end_time": "", "translation_model": "baseline", "final_evaluation_user1": { "style": "average", "coherence": "average", "grammaticality": "good", "meaning": "average", "word_choice": "average" }, "final_evaluation_user2": { "style": "", "coherence": "", "grammaticality": "", "meaning": "", "word_choice": "" }, "scenario": [ [ "You are both stuck in a lift at work.", "Vous \u00eates tous les deux bloqu\u00e9(e)s dans un ascenseur au travail." ], [ "You are an employee and you are with your boss.", "Vous \u00eates un(e) employ\u00e9(e) et vous \u00eates avez votre patron(ne)" ], [ "You are the boss and are with an employee.", "Vous \u00eates le ou la patron(ne) et vous \u00eates avec un(e) employ\u00e9(e)" ] ], "user1": { "role_num": 1, "role": [ "You are an employee and you are with your boss.", "Vous \u00eates un(e) employ\u00e9(e) et vous \u00eates avez votre patron(ne)" ], "initiated_dialogue": true, "turn_number": 2, "lang": "french" }, "user2": { "role_num": 2, "role": [ "You are the boss and are with an employee.", "Vous \u00eates le ou la patron(ne) et vous \u00eates avec un(e) employ\u00e9(e)" ], "initiated_dialogue": false, "turn_number": 1, "lang": "english" } }, "dialogue_history": [ { "id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_0", "orig": "We appear to have stopped moving.", "norm": "", "mt": "On semble avoir arr\u00eat\u00e9 de bouger.", "ref": "J'ai l'impression qu'on s'est 
arr\u00eat\u00e9s.", "utterance_meta": { "eval_judgment": "medium", "eval_verbatim": "", "eval_problems": [ "style" ], "lang": "english" } }, [...] { "id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_24", "orig": "La sonnerie s'est arr\u00eat\u00e9, je pense que personne ne va nous r\u00e9pondre.", "norm": "", "mt": "The ringing stopped, and I don't think anyone's gonna answer us.", "ref": "It stopped ringing. I don't think anybody's going to reply.", "utterance_meta": { "eval_judgment": "perfect", "eval_verbatim": "", "eval_problems": [], "lang": "french" } } ] } ``` ### Data Fields #### plain_text - `id`: a `string` feature. - `orig`: a `string` feature. - `norm`: a `string` feature. - `mt`: a `string` feature. - `ref`: a `string` feature. - `utterance_meta`: a dictionary feature containing: - `eval_judgment`: a `string` feature. - `eval_verbatim`: a `string` feature. - `eval_problems`: a list feature containing: - up to 5 `string` features. - `lang`: a `string` feature. - `dialogue_meta`: a dictionary feature containing: - `start_time` : a `string` feature. - `end_time`: a `string` feature. - `translation_model`: a `string` feature. - `final_evaluation_user1`: a dictionary feature containing: - `style`: a `string` feature. - `coherence`: a `string` feature. - `grammaticality`: a `string` feature. - `meaning`: a `string` feature. - `word_choice`: a `string` feature. - `final_evaluation_user2`: a dictionary feature containing: - `style`: a `string` feature. - `coherence`: a `string` feature. - `grammaticality`: a `string` feature. - `meaning`: a `string` feature. - `word_choice`: a `string` feature. - `scenario`: a list feature containing - 3 lists each containing 2 `string` features. - `user1`: a dictionary feature containing: - `role_num`: an `int` feature. - `role`: a list feature containing: - 2 `string` features. - `initiated_dialogue`: a `bool` feature. - `turn_number`: an `int` value. - `lang`: a `string` value. 
- `user2`: a dictionary feature containing: - `role_num`: an `int` feature. - `role`: a list feature containing: - 2 `string` features. - `initiated_dialogue`: a `bool` feature. - `turn_number`: an `int` value. - `lang`: a `string` value. - `dialogue_history`: a list feature containing: - dictionary features containing: - `id`: a `string` feature. - `orig`: a `string` feature. - `norm`: a `string` feature. - `mt`: a `string` feature. - `ref`: a `string` feature. - `utterance_meta`: a dictionary feature containing: - `eval_judgment`: a `string` feature. - `eval_verbatim`: a `string` feature. - `eval_problems`: a list feature containing: - up to 5 `string` features. - `lang`: a `string` feature. ### Data Splits DiaBLa is a test set only. | name |test | |----------|------:| |plain_text| 5748| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Original data was collected through a [dedicated online chat platform](https://github.com/rbawden/diabla-chat-interface) and involved native speakers of English and of French. As well as producing the original text, participants also annotated the quality of the machine-translated outputs of their partners' utterances (which they saw instead of their partners' original text) based on their monolingual intuitions and the dialogue context. Each dialogue is assigned one of 12 role-play scenarios and where appropriate each participant is assigned a role to play in the dialogue. #### Who are the source language producers? The source text producers were native French and native English volunteers (mainly British English). See the paper for very basic information concerning their backgrounds (age categories and experience in NLP). 
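The nested `utterance_meta` annotations described under Data Fields lend themselves to simple aggregation over a dialogue. As a minimal sketch (using a trimmed, hypothetical instance that follows the schema above, not a real record), the participant MT-quality judgments of an utterance and its dialogue history can be tallied like this:

```python
from collections import Counter

# Trimmed, hypothetical DiaBLa-style instance (fields follow the schema above;
# the real records carry many more fields and a longer dialogue history).
example = {
    "orig": "Are you blaming me for this?",
    "mt": "Tu m'en veux pour ça ?",
    "utterance_meta": {"eval_judgment": "medium", "eval_problems": ["coherence"], "lang": "english"},
    "dialogue_history": [
        {"orig": "We appear to have stopped moving.",
         "utterance_meta": {"eval_judgment": "medium", "eval_problems": ["style"], "lang": "english"}},
        {"orig": "La sonnerie s'est arrêtée.",
         "utterance_meta": {"eval_judgment": "perfect", "eval_problems": [], "lang": "french"}},
    ],
}

def judgment_counts(instance):
    """Tally eval_judgment labels over the current utterance and its history."""
    metas = [instance["utterance_meta"]]
    metas += [u["utterance_meta"] for u in instance["dialogue_history"]]
    return Counter(m["eval_judgment"] for m in metas)

print(judgment_counts(example))  # Counter({'medium': 2, 'perfect': 1})
```

The same pattern extends to `eval_problems` (e.g. counting how often "coherence" or "style" is flagged across a dialogue).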
### Annotations #### Annotation process On top of the original dialogue text (a mixture of utterances in English and in French), the following "annotations" are provided: - a machine-translated version of the original text (produced in real time and presented during the dialogue), produced by one of two MT systems, both trained using [Marian](https://github.com/marian-nmt/marian). - judgments of MT quality by participants (overall quality, particular problems, verbatim comments) - a manually produced normalised version of the original text (correcting spelling mistakes, grammatical errors, missing punctuation, etc.) - manually produced reference translations #### Who are the annotators? The judgments of MT quality were produced by the dialogue participants themselves in real time. The normalised version of the text and the reference translations were manually produced by the authors of the paper. Translations were always done into the translator's native language and all translations were verified and post-edited by a bilingual English-French speaker. ### Personal and Sensitive Information A priori, the dataset does not contain personal or sensitive information. Participants were instructed not to give any personal information and to assume the roles assigned in the role-play scenario. Usernames were anonymised prior to distribution and any mention of usernames or real names in the dialogues was replaced by a generic name of the same gender as the participant. Only basic user information was collected to get an idea of the distribution of participants and to potentially see how multilingual ability influences quality judgments (rough age categories, experience in NLP or research, native languages, familiarity with the other language (either English or French), other languages spoken and gender). 
Gender was included because it is an important factor in translation (particularly for the direction English-to-French), and this was explained in advance to the participants in the FAQs. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was collected by Rachel Bawden, Eric Bilinski, Thomas Lavergne and Sophie Rosset (see citation below). ### Licensing Information The dataset is available under a CC BY-SA 4.0 licence. ### Citation Information If you use or are inspired by this dataset, please cite: ``` @article{bawden_DiaBLa:-A-Corpus-of_2021, author = {Bawden, Rachel and Bilinski, Eric and Lavergne, Thomas and Rosset, Sophie}, doi = {10.1007/s10579-020-09514-4}, title = {DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation}, year = {2021}, journal = {Language Resources and Evaluation}, publisher = {Springer Verlag}, volume = {55}, pages = {635--660}, url = {https://hal.inria.fr/hal-03021633}, pdf = {https://hal.inria.fr/hal-03021633/file/diabla-lre-personal-formatting.pdf}, } ``` ### Contributions This dataset was added by Rachel Bawden [@rbawden](https://github.com/rbawden).
s3h/arabic-gec
2021-12-06T18:22:00.000Z
[ "region:us" ]
s3h
null
null
null
0
4
Entry not found
thomwolf/codeparrot
2021-07-16T15:21:29.000Z
[ "region:us" ]
thomwolf
null
null
null
0
4
Entry not found
thomwolf/github-python
2021-07-07T11:53:28.000Z
[ "region:us" ]
thomwolf
null
null
null
6
4
Entry not found
valurank/offensive-multi
2022-10-25T09:57:14.000Z
[ "task_categories:text-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:derived", "language:en", "license:other", "region:us" ]
valurank
null
null
null
0
4
--- language: - en license: other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - derived task_categories: - text-classification --- # Dataset Card for offensive-multi ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) ## Dataset Description ### Dataset Summary This dataset contains a collection of text labeled as offensive (class 1) or not (class 0). ## Dataset Creation The dataset was created by aggregating multiple publicly available datasets. ### Source Data The following datasets were used: * https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lowercasing and removing mentions and URLs. Dropped instances labeled as 'hate speech'. * https://sites.google.com/site/offensevalsharedtask/olid - Tweet text cleaned by lowercasing and removing mentions and URLs. Used 'subtask_a' column for labeling.
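The cleaning described above (lowercasing, removing mentions and URLs) can be sketched as follows. The exact patterns used by the dataset authors are not documented, so these regexes are assumptions, not a reproduction of their pipeline:

```python
import re

# Hypothetical patterns for the cleaning steps named in the card.
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+|www\.\S+")

def clean_tweet(text: str) -> str:
    """Lowercase, strip @-mentions and URLs, then collapse whitespace."""
    text = text.lower()
    text = MENTION.sub("", text)
    text = URL.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("Check this out @user1 https://t.co/abc THIS IS BAD"))
# → "check this out this is bad"
```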
w-nicole/childes_data_no_tags
2021-06-19T18:39:07.000Z
[ "region:us" ]
w-nicole
null
null
null
0
4
Entry not found
w11wo/imdb-javanese
2022-10-25T10:01:48.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:jv", "license:odbl", "region:us" ]
w11wo
Large Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the original IMDB Dataset to Javanese using the multi-lingual MarianMT Transformer model from `Helsinki-NLP/opus-mt-en-mul`.
@InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} }
null
0
4
--- annotations_creators: - found language_creators: - machine-generated language: - jv license: - odbl multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification extended: - original --- # Dataset Card for "imdb-javanese" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits Sample Size](#data-instances-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Github](https://github.com/w11wo/nlp-datasets#javanese-imdb) - **Repository:** [Github](https://github.com/w11wo/nlp-datasets#javanese-imdb) - **Paper:** [Aclweb](http://www.aclweb.org/anthology/P11-1015) - **Point of Contact:** [Wilson Wongso](https://github.com/w11wo) - **Size of downloaded dataset files:** 17.0 MB - **Size of the generated dataset:** 47.5 MB - **Total amount of disk used:** 64.5 MB ### Dataset Summary Large Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. 
We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the [original IMDB Dataset](https://huggingface.co/datasets/imdb) to Javanese using the multi-lingual MarianMT Transformer model from [`Helsinki-NLP/opus-mt-en-mul`](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure We show detailed information for up to 5 configurations of the dataset. ### Data Instances An example of `javanese_imdb_train.csv` looks as follows. | label | text | | ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 1 | "Drama romantik sing digawé karo direktur Martin Ritt kuwi ora dingertèni, nanging ana momen-momen sing marahi karisma lintang Jane Fonda lan Robert De Niro (kelompok sing luar biasa). Dhèwèké dadi randha sing ora isa mlaku, iso anu anyar lan anyar-inventor-- kowé isa nganggep isiné. Adapsi novel Pat Barker ""Union Street"" (yak titel sing apik!) arep dinggo-back-back it on bland, lan pendidikan film kuwi gampang, nanging isih nyenengké; a rosy-hued-inventor-fantasi. 
Ora ana sing ngganggu gambar sing sejati ding kok iso dinggo nggawe gambar sing paling nyeneng." | | 0 | "Pengalaman wong lanang sing nduwé perasaan sing ora lumrah kanggo babi. Mulai nganggo tuladha sing luar biasa yaiku komedia. Wong orkestra termel digawé dadi wong gila, sing kasar merga nyanyian nyanyi. Sayangé, kuwi tetep absurd wektu WHOLE tanpa ceramah umum sing mung digawé. Malah, sing ana ing jaman kuwi kudu ditinggalké. Diyalog kryptik sing nggawé Shakespeare marah gampang kanggo kelas telu. Pak teknis kuwi luwih apik timbang kowe mikir nganggo cinematografi sing apik sing jenengé Vilmos Zsmond. Masa depan bintang Saly Kirkland lan Frederic Forrest isa ndelok." | ### Data Fields - `text`: The movie review translated into Javanese. - `label`: The sentiment exhibited in the review, either `1` (positive) or `0` (negative). ### Data Splits Sample Size | train | unsupervised | test | | ----: | -----------: | ----: | | 25000 | 50000 | 25000 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information If you use this dataset in your research, please cite: ``` @inproceedings{wongso2021causal, title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures}, author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin}, booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)}, pages={1--7}, year={2021}, organization={IEEE} } ``` ``` @InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. 
and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} } ```
elkarhizketak
2023-01-25T15:03:45.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:eu", "license:cc-by-sa-4.0", "dialogue-qa", "region:us" ]
null
ElkarHizketak is a low-resource conversational Question Answering (QA) dataset in Basque created by Basque-speaking volunteers. The dataset contains close to 400 dialogues and more than 1,600 questions and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
@inproceedings{otegi-etal-2020-conversational, title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for {B}asque}", author = "Otegi, Arantxa and Agirre, Aitor and Campos, Jon Ander and Soroa, Aitor and Agirre, Eneko", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", year = "2020", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.55", pages = "436--442", ISBN = "979-10-95546-34-4", }
null
1
4
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - eu license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa pretty_name: ElkarHizketak tags: - dialogue-qa dataset_info: features: - name: dialogue_id dtype: string - name: wikipedia_page_title dtype: string - name: background dtype: string - name: section_title dtype: string - name: context dtype: string - name: turn_ids sequence: string - name: questions sequence: string - name: yesnos sequence: class_label: names: '0': y '1': n '2': x - name: answers sequence: - name: texts sequence: string - name: answer_starts sequence: int32 - name: input_texts sequence: string - name: orig_answers struct: - name: texts sequence: string - name: answer_starts sequence: int32 config_name: plain_text splits: - name: train num_bytes: 1024378 num_examples: 301 - name: validation num_bytes: 125667 num_examples: 38 - name: test num_bytes: 127640 num_examples: 38 download_size: 1927474 dataset_size: 1277685 --- # Dataset Card for ElkarHizketak ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive 
Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ElkarHizketak homepage](http://ixa.si.ehu.es/node/12934) - **Paper:** [Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque](https://aclanthology.org/2020.lrec-1.55/) - **Point of Contact:** [Arantxa Otegi](mailto:arantza.otegi@ehu.eus) ### Dataset Summary ElkarHizketak is a low-resource conversational Question Answering (QA) dataset in Basque created by Basque-speaking volunteers. The dataset contains close to 400 dialogues and more than 1,600 questions and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section. ### Supported Tasks and Leaderboards - `extractive-qa`: The dataset can be used to train a model for Conversational Question Answering. ### Languages The text in the dataset is in Basque. ## Dataset Structure ### Data Instances An example from the train split: ``` {'dialogue_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d', 'wikipedia_page_title': 'Howard Becker', 'background': 'Howard Saul Becker (Chicago,Illinois, 1928ko apirilaren 18an) Estatu Batuetako soziologoa bat da. 
Bere ekarpen handienak desbiderakuntzaren soziologian, artearen soziologian eta musikaren soziologian egin ditu. "Outsiders" (1963) bere lanik garrantzitsuetako da eta bertan garatu zuen bere etiketatze-teoria. Nahiz eta elkarrekintza sinbolikoaren edo gizarte-konstruktibismoaren korronteen barruan sartu izan, berak ez du bere burua inongo paradigman kokatzen. Chicagoko Unibertsitatean graduatua, Becker Chicagoko Soziologia Eskolako bigarren belaunaldiaren barruan kokatu ohi da, Erving Goffman eta Anselm Strauss-ekin batera.', 'section_title': 'Hastapenak eta hezkuntza.', 'context': 'Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an. Oso gazte zelarik piano jotzen asi zen eta 15 urte zituenean dagoeneko tabernetan aritzen zen pianoa jotzen. Beranduago Northwestern Unibertsitateko banda batean jo zuen. Beckerren arabera, erdi-profesional gisa aritu ahal izan zen Bigarren Mundu Gerra tokatu eta musikari gehienak soldadugai zeudelako. Musikari bezala egin zuen lan horretan egin zuen lehen aldiz drogaren kulturaren ezagutza, aurrerago ikerketa-gai hartuko zuena. 1946an bere graduazpiko soziologia titulua lortu zuen Chicagoko Unibertsitatean. Ikasten ari zen bitartean, pianoa jotzen jarraitu zuen modu erdi-profesionalean. Hala ere, soziologiako masterra eta doktoretza eskuratu zituen Chicagoko Unibertsitatean. Unibertsitate horretan Chicagoko Soziologia Eskolaren jatorrizko tradizioaren barruan hezia izan zen. Chicagoko Soziologia Eskolak garrantzi berezia ematen zion datu kualitatiboen analisiari eta Chicagoko hiria hartzen zuen ikerketa eremu bezala. Beckerren hasierako lan askok eskola honen tradizioaren eragina dute, bereziko Everett C. Hughes-en eragina, bere tutore eta gidari izan zena. Askotan elkarrekintzaile sinboliko bezala izendatua izan da, nahiz eta Beckerek berak ez duen gogoko izendapen hori. Haren arabera, bere leinu akademikoa Georg Simmel, Robert E. Park eta Everett Hughes dira. 
Doktoretza lortu ostean, 23 urterekin, Beckerrek marihuanaren erabilpena ikertu zuen "Institut for Juvenil Reseac"h-en. Ondoren Illinoisko Unibertsitatean eta Standfor Unibertsitateko ikerketa institutu batean aritu zen bere irakasle karrera hasi aurretik. CANNOTANSWER', 'turn_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d_q#0', 'question': 'Zer da desbiderakuntzaren soziologia?', 'yesno': 2, 'answers': {'text': ['CANNOTANSWER'], 'answer_start': [1601], 'input_text': ['CANNOTANSWER']}, 'orig_answer': {'text': 'CANNOTANSWER', 'answer_start': 1601}} ``` ### Data Fields The different fields are: - `dialogue_id`: string, - `wikipedia_page_title`: title of the Wikipedia page as a string, - `background`: string, - `section_title`: title of the section as a string, - `context`: context of the question as a string, - `turn_id`: string, - `question`: question as a string, - `yesno`: Class label that represents whether the question is a yes/no question. Possible values are "y" (0), "n" (1), "x" (2), - `answers`: a dictionary with three fields: - `text`: list of answer texts, each a string, - `answer_start`: list of positions of the answers in the context, each an int32, - `input_text`: list of strings, - `orig_answer`: a dictionary containing: - `text`: original answer text as a string, - `answer_start`: original position of the answer as an int32, ### Data Splits The data is split into a training, development and test set. The split sizes are as follows: - train: 1,306 questions / 301 dialogues - development: 161 questions / 38 dialogues - test: 167 questions / 38 dialogues ## Dataset Creation ### Curation Rationale This is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems. 
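The `answer_start` offsets described above index directly into `context`. As a minimal sketch (over a tiny hypothetical record, since the real contexts are full Wikipedia sections), an answer span can be recovered and checked for consistency like this:

```python
# Tiny hypothetical record mirroring the `context`/`answers` layout above.
sample = {
    "context": "Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an.",
    "answers": {
        "text": ["1928ko apirilaren 18an"],
        "answer_start": [37],
        "input_text": ["1928ko apirilaren 18an"],
    },
}

def extract_span(record, i=0):
    """Slice the context at the annotated offset and check it matches the answer text."""
    start = record["answers"]["answer_start"][i]
    text = record["answers"]["text"][i]
    span = record["context"][start:start + len(text)]
    assert span == text, "offset does not line up with the answer text"
    return span

print(extract_span(sample))  # 1928ko apirilaren 18an
```

Unanswerable questions use the sentinel answer `CANNOTANSWER`, whose offset points at a sentinel token appended to the context (as in the example instance above).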
### Source Data #### Initial Data Collection and Normalization First, we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than about other categories. In order to retrieve articles we selected the following categories in Basque Wikipedia: Biografia (Biography is the equivalent category in English Wikipedia), Biografiak (People) and Gizabanako biziak (Living people). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and thresholds were set after some pilot studies in which we checked the adequacy of the people involved in the selected articles and the length of the passages, in order to have enough, but not too much, information to hold a conversation. Then, dialogues were collected during online sessions that we arranged with Basque-speaking volunteers. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section. #### Who are the source language producers? The language producers are Basque-speaking volunteers who held a conversation using a text-based chat interface developed for this purpose. 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the [HiTZ Basque Center for Language Technologies](https://www.hitz.eus/) and [Ixa NLP Group](https://www.ixa.eus/) at the University of the Basque Country (UPV/EHU). ### Licensing Information Copyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU. This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). To view a copy of this license, visit [https://creativecommons.org/licenses/by-sa/4.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode). ### Citation Information If you are using this dataset in your work, please cite this publication: ```bibtex @inproceedings{otegi-etal-2020-conversational, title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque}", author = "Otegi, Arantxa and Agirre, Aitor and Campos, Jon Ander and Soroa, Aitor and Agirre, Eneko", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.55", pages = "436--442" } ``` ### Contributions Thanks to [@antxa](https://github.com/antxa) for adding this dataset.
gustavecortal/fr_covid_news
2022-10-20T19:01:24.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:multi-class-classification", "task_ids:language-modeling", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K...
gustavecortal
null
null
null
1
4
--- annotations_creators: - machine-generated language_creators: - found language: - fr language_bcp47: - fr-FR license: - unknown multilinguality: - monolingual pretty_name: COVID-19 French News dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification - sequence-modeling - conditional-text-generation task_ids: - topic-classification - multi-label-classification - multi-class-classification - language-modeling - summarization - other-stuctured-to-text --- # Dataset Card for COVID-19 French News dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. 
The dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. This dataset card is not finished yet. ### Languages The text in the dataset is in French. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `title`: title of the article - `description`: description or a summary of the article - `text`: the actual article text in raw form - `domain`: source domain of the article (e.g. lemonde.fr) - `url`: article URL, the original URL where it was scraped - `labels`: classification labels ### Data Splits The COVID-19 French News dataset has only a training set, i.e. it has to be loaded with the train split specified: `fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split="train")` ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] ### Personal and Sensitive Information As one can imagine, the data mentions contemporary public figures and individuals who appeared in the news. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The data was originally collected by Gustave Cortal (gustavecortal@gmail.com) ### Licensing Information Usage of the dataset is restricted to non-commercial research purposes only. 
### Citation Information ``` @dataset{fr_covid_news, author = {Gustave Cortal}, year = {2022}, title = {COVID-19 - French News Dataset}, url = {https://www.gustavecortal.com} } ``` ### Contributions [@gustavecortal](https://github.com/gustavecortal)
nielsr/rvl-cdip-demo
2022-03-08T09:01:24.000Z
[ "region:us" ]
nielsr
null
null
null
0
4
Entry not found
victor/autonlp-data-tweet-sentiment
2022-10-25T10:03:17.000Z
[ "task_categories:text-classification", "language:en", "region:us" ]
victor
null
null
null
2
4
--- language: - en task_categories: - text-classification --- # AutoNLP Dataset for project: tweet-sentiment ## Table of contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description This dataset has been automatically processed by AutoNLP for project tweet-sentiment. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "I am going to see how long I can do this for.", "target": 8 }, { "text": "@anitabora yeah, right. What if our politicians start using uploading their pics, lots of inside sto[...]", "target": 8 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=13, names=['anger', 'boredom', 'empty', 'enthusiasm', 'fun', 'happiness', 'hate', 'love', 'neutral', 'relief', 'sadness', 'surprise', 'worry'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 31995 | | valid | 8005 |
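The `target` values in the samples above are integer ids into the `ClassLabel` names list; a minimal sketch of decoding them back to label names (the label list is transcribed from the schema above, and the sample row from the card — nothing is fetched from the Hub):

```python
# Decode integer class ids from the "target" field into label names.
# The label list is copied from the ClassLabel definition in the card.
labels = [
    "anger", "boredom", "empty", "enthusiasm", "fun", "happiness",
    "hate", "love", "neutral", "relief", "sadness", "surprise", "worry",
]

# Illustrative sample row, mirroring the JSON example in the card.
samples = [
    {"text": "I am going to see how long I can do this for.", "target": 8},
]

for row in samples:
    row["label_name"] = labels[row["target"]]

print(samples[0]["label_name"])  # target 8 -> "neutral"
```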
nreimers/trec-covid-generated-queries
2022-03-23T12:56:58.000Z
[ "region:us" ]
nreimers
null
null
null
0
4
Entry not found
sentence-transformers/NQ-retrieval
2022-03-24T08:18:36.000Z
[ "region:us" ]
sentence-transformers
null
null
null
0
4
# NQ-retrieval This is a nicely formatted version of the [Natural Questions](https://ai.google.com/research/NaturalQuestions/) dataset, formatted to train and evaluate retrieval systems. Each row contains the following entries: - **question**: Original question sent to the Google search engine - **title**: Title of the Wikipedia article - **candidates**: A list with the passages from the original Wikipedia HTML document - **passage_types**: Types (text, table, list) of the candidate passages - **long_answers**: IDs of the candidate passages that annotators selected as relevant. May be empty if no relevant passage was identified - **document_url**
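As a sketch of how these fields fit together, the row below is invented to mirror the schema described above (the values are illustrative, not taken from the actual dataset):

```python
# Illustrative row following the fields listed above; the passages and
# indices are made up for demonstration purposes.
row = {
    "question": "who wrote the declaration of independence",
    "title": "United States Declaration of Independence",
    "candidates": ["passage A", "passage B", "passage C"],
    "passage_types": ["text", "table", "text"],
    "long_answers": [2],  # indices of annotator-selected relevant passages
}

# Keep only the candidate passages marked as relevant long answers;
# rows with an empty `long_answers` list have no relevant passage.
relevant = [row["candidates"][i] for i in row["long_answers"]]
print(relevant)  # ['passage C']
```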
laion/laion5B-index
2022-12-20T23:38:08.000Z
[ "license:cc-by-4.0", "region:us" ]
laion
null
null
null
14
4
--- license: cc-by-4.0 --- See https://github.com/rom1504/clip-retrieval/blob/main/docs/laion5B_back.md for documentation on usage
laion/laion2B-en-joined
2022-03-31T07:44:37.000Z
[ "license:cc-by-4.0", "region:us" ]
laion
null
null
null
6
4
--- license: cc-by-4.0 ---
h4iku/coconut_java2006
2023-09-28T22:53:23.000Z
[ "code", "region:us" ]
h4iku
null
null
null
0
4
--- tags: - code pretty_name: CoCoNuT-Java(2006) --- # Dataset Card for CoCoNuT-Java(2006) ## Dataset Description - **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0) - **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact) - **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369) ### Dataset Summary Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper. These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized. The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset. ### Languages - Java ## Dataset Structure ### Data Fields The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`. These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`. ### Data Instances There is a mapping between the 4 columns for each instance. For example: 5 first rows of `rem` (i.e., the buggy line/hunk): ``` 1 public synchronized StringBuffer append(char ch) 2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; 3 public String substring(int beginIndex, int endIndex) 4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length); 5 public Object next() { ``` 5 first rows of add (i.e., the fixed line/hunk): ``` 1 public StringBuffer append(Object obj) 2 return append(obj == null ? 
"null" : obj.toString()); 3 public String substring(int begin) 4 return substring(begin, count); 5 public FSEntry next() { ``` These map to the 5 instances: ```diff - public synchronized StringBuffer append(char ch) + public StringBuffer append(Object obj) ``` ```diff - ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; + return append(obj == null ? "null" : obj.toString()); ``` ```diff - public String substring(int beginIndex, int endIndex) + public String substring(int begin) ``` ```diff - if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length); + return substring(begin, count); ``` ```diff - public Object next() { + public FSEntry next() { ``` `context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments). For example, the context of ``` public synchronized StringBuffer append(char ch) ``` is its associated function: ```java public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; } ``` `meta` contains some metadata about the project: ``` 1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java ``` `1056` is the project id. `/local/...` is the absolute path to the buggy file. 
This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project `core/src/classpath/java/java/lang/StringBuffer.java` | Number of projects | Number of Instances | | ------------------ |-------------------- | | 45,180 | 3,241,966 | ## Dataset Creation ### Curation Rationale Data is collected to train automated program repair (APR) models. ### Citation Information ```bib @inproceedings{lutellierCoCoNuTCombiningContextaware2020, title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair}, shorttitle = {{{CoCoNuT}}}, booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}}, author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin}, year = {2020}, month = jul, series = {{{ISSTA}} 2020}, pages = {101--114}, publisher = {{Association for Computing Machinery}}, address = {{New York, NY, USA}}, doi = {10.1145/3395363.3397369}, url = {https://doi.org/10.1145/3395363.3397369}, urldate = {2022-12-06}, isbn = {978-1-4503-8008-9}, keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation} } ```
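The `meta` layout described above can be parsed with a short helper. This is a sketch that assumes every row follows the `<project_id> <absolute_path>` pattern from the card, with a `buggy` path segment separating the `<commit_id>/<file_name>` prefix from the original in-project path:

```python
# Example meta line copied from the card above.
meta = (
    "1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/"
    "1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
    "core/src/classpath/java/java/lang/StringBuffer.java"
)

# Split off the project id, then locate the "buggy" separator segment.
project_id, path = meta.split(" ", 1)
parts = path.split("/")
anchor = parts.index("buggy")

commit_id = parts[anchor - 2]            # commit hash precedes the file name
file_name = parts[anchor - 1]            # buggy file name
original_path = "/".join(parts[anchor + 1:])  # path within the project

print(project_id, commit_id, file_name, original_path)
```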
UrukHan/t5-russian-summarization
2022-04-02T18:07:55.000Z
[ "region:us" ]
UrukHan
null
null
null
2
4
Entry not found
ramnika003/autotrain-data-sentiment_analysis_project
2022-04-05T09:16:59.000Z
[ "task_categories:text-classification", "region:us" ]
ramnika003
null
null
null
0
4
--- task_categories: - text-classification --- # AutoTrain Dataset for project: sentiment_analysis_project ## Dataset Description This dataset has been automatically processed by AutoTrain for project sentiment_analysis_project. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Realizing that I don`t have school today... or tomorrow... or for the next few months. I really nee[...]", "target": 1 }, { "text": "Good morning tweeps. Busy this a.m. but not in a working way", "target": 2 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=3, names=['negative', 'neutral', 'positive'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 16180 | | valid | 4047 |
chainyo/rvl-cdip
2022-04-06T16:49:20.000Z
[ "license:other", "region:us" ]
chainyo
null
null
null
2
4
--- license: other --- The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca). The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/). ## Labels 0: advertisement 1: budget 2: email 3: file folder 4: form 5: handwritten 6: invoice 7: letter 8: memo 9: news article 10: presentation 11: questionnaire 12: resume 13: scientific publication 14: scientific report 15: specification ## Citation This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015` ## License RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/). ## References 1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006 2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/.
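For convenience, the 16 labels above can be turned into id↔label mappings. This is a sketch with the names transcribed from the list in this card (using the standard spelling "advertisement" for label 0):

```python
# Map the 16 RVL-CDIP class ids to names, in the order listed in the card.
RVL_CDIP_LABELS = [
    "advertisement", "budget", "email", "file folder", "form", "handwritten",
    "invoice", "letter", "memo", "news article", "presentation",
    "questionnaire", "resume", "scientific publication", "scientific report",
    "specification",
]

ID2LABEL = dict(enumerate(RVL_CDIP_LABELS))
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

print(ID2LABEL[9])       # "news article"
print(LABEL2ID["memo"])  # 8
```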
Jeneral/fer-2013
2022-04-06T18:24:30.000Z
[ "license:apache-2.0", "region:us" ]
Jeneral
null
@TECHREPORT{fer2013, author = {Prince Awuah Baffour}, title = {Facial Emotion Detection}, institution = {}, year = {2022} }
null
1
4
--- license: apache-2.0 ---
dandelin/imagenet
2022-04-12T05:58:04.000Z
[ "region:us" ]
dandelin
null
null
null
0
4
Entry not found
mwong/fever-evidence-related
2022-10-25T10:06:51.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|fever", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "region:...
mwong
null
null
null
1
4
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual paperswithcode_id: fever pretty_name: fever size_categories: - 100K<n<1M source_datasets: - extended|fever task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the FEVER dataset (https://fever.ai), pre-processed and ready for training and evaluation. The training objective is a text classification task: given a claim and evidence, predict whether the evidence is related to the claim.
mwong/climate-evidence-related
2022-10-25T10:06:54.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_fever", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", ...
mwong
null
null
null
2
4
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual paperswithcode_id: climate-fever pretty_name: climate-fever size_categories: - 100K<n<1M source_datasets: - extended|climate_fever task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate FEVER dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation. The training objective is a text classification task: given a claim and evidence, predict whether the evidence is related to the claim.
mnazari/urmi-assyrian-voice
2023-09-22T05:31:05.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:Geoffrey Khan", "annotations_creators:Matthew Nazari", "language:aii", "license:cc0-1.0", "region:us" ]
mnazari
null
null
null
0
4
--- license: cc0-1.0 pretty_name: Assyrian annotations_creators: - Geoffrey Khan - Matthew Nazari language: aii size_categories: - n<1K task_categories: - automatic-speech-recognition --- # 🏴‍☠️ **_This dataset is deprecated! Please see NENA Speech_** 🏴‍☠️ # Dataset Card for urmi_assyrian_voice ## Dataset Description The Urmi Assyrian Voice dataset is parsed from the research and fieldwork of [Geoffrey Khan](https://cambridge.academia.edu/GeoffreyKhan), which is made public through the [North-Eastern Neo-Aramaic Database Project](https://nena.ames.cam.ac.uk/dialects/225/audio). Annotation corrections as well as parsing were performed by Matthew Nazari. ## Dataset Summary This dataset contains labelled audio examples of the Urmi dialect of North-Eastern Neo-Aramaic. The dataset consists of only one female speaker in her late seventies. Note that you will need to normalize the utterances for machine learning tasks (clean punctuation, remove accents, etc.).
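A minimal sketch of the normalization step the card recommends, using only the Python standard library (the sample string is made up for illustration; real Neo-Aramaic transcriptions may need language-specific handling beyond this):

```python
import string
import unicodedata

def normalize_utterance(text: str) -> str:
    """Lowercase, strip ASCII punctuation, and drop combining accent marks."""
    # Decompose characters so accents become separate combining marks (NFD),
    # then drop the combining marks to remove the accents.
    decomposed = unicodedata.normalize("NFD", text)
    no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Remove ASCII punctuation and normalize case and whitespace.
    no_punct = no_accents.translate(str.maketrans("", "", string.punctuation))
    return " ".join(no_punct.lower().split())

print(normalize_utterance("Báte, xèna!"))  # "bate xena"
```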
Yaxin/SemEval2015Task12Raw
2022-08-14T16:01:41.000Z
[ "region:us" ]
Yaxin
A collection of SemEval-2015 Task 12 data, specifically designed to aid research in Aspect Based Sentiment Analysis.
@inproceedings{pontiki2015semeval, title={Semeval-2015 task 12: Aspect based sentiment analysis}, author={Pontiki, Maria and Galanis, Dimitrios and Papageorgiou, Harris and Manandhar, Suresh and Androutsopoulos, Ion}, booktitle={Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015)}, pages={486--495}, year={2015} }
null
2
4
Entry not found
Goud/Goud-sum
2022-07-04T16:02:36.000Z
[ "task_categories:summarization", "task_ids:news-articles-headline-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "size_categories:100K<n<1M", "source_datasets:original", "region:us" ]
Goud
null
null
null
2
4
--- annotations_creators: - no-annotation language_creators: - machine-generated language: [] license: [] multilinguality: [] pretty_name: Goud-sum size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization task_ids: - news-articles-headline-generation --- # Dataset Card for Goud summarization dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[Needs More Information] - **Repository:**[Needs More Information] - **Paper:**[Goud.ma: a News Article Dataset for Summarization in Moroccan Darija](https://openreview.net/forum?id=BMVq5MELb9) - **Leaderboard:**[Needs More Information] - **Point of Contact:**[Needs More Information] ### Dataset Summary Goud-sum contains 158k articles and their headlines extracted from [Goud.ma](https://www.goud.ma/) news website. The articles are written in the Arabic script. 
All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija). ### Supported Tasks and Leaderboards Text Summarization ### Languages * Moroccan Arabic (Darija) * Modern Standard Arabic ## Dataset Structure ### Data Instances The dataset consists of article-headline pairs in string format. ### Data Fields * article: a string containing the body of the news article * headline: a string containing the article's headline * categories: a list of string of article categories ### Data Splits Goud-sum dataset has 3 splits: _train_, _validation_, and _test_. Below are the number of instances in each split. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 139,288 | | Validation | 9,497 | | Test | 9,497 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The text was written by journalists at [Goud](https://www.goud.ma/). ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? 
[N/A] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{issam2022goudma, title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija}, author={Abderrahmane Issam and Khalil Mrini}, booktitle={3rd Workshop on African Natural Language Processing}, year={2022}, url={https://openreview.net/forum?id=BMVq5MELb9} } ``` ### Contributions Thanks to [@issam9](https://github.com/issam9) and [@KhalilMrini](https://github.com/KhalilMrini) for adding this dataset.
patrickvonplaten/librispeech_asr_self_contained
2022-10-24T17:48:37.000Z
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:e...
patrickvonplaten
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
@inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} }
null
0
4
--- pretty_name: LibriSpeech annotations_creators: - expert-generated language_creators: - crowdsourced - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual paperswithcode_id: librispeech-1 size_categories: - 100K<n<1M source_datasets: - original task_categories: - automatic-speech-recognition - audio-classification task_ids: - audio-speaker-identification --- # Dataset Card for librispeech_asr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12) - **Repository:** [Needs More Information] - **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) - **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-other) - **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com) ### Dataset Summary LibriSpeech is a corpus of approximately 1000 hours of 16kHz read 
English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. ### Supported Tasks and Leaderboards - `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean and ranks models based on their WER. ### Languages The audio is in English. There are two configurations: `clean` and `other`. The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on a different dataset, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other". ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided. 
``` {'chapter_id': 141231, 'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'id': '1272-141231-0000', 'speaker_id': 1272, 'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'} ``` ### Data Fields - file: A path to the downloaded audio file in .flac format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - text: the transcription of the audio file. - id: unique id of the data sample. - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples. - chapter_id: id of the audiobook chapter which includes the transcription. ### Data Splits The size of the corpus makes it impractical, or at least inconvenient for some users, to distribute it as a single large archive. Thus the training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively. A simple automatic procedure was used to select the audio in the first two sets to be, on average, of higher recording quality and with accents closer to US English. 
An acoustic model was trained on WSJ’s si-84 data subset and was used to recognize the audio in the corpus, using a bigram LM estimated on the text of the respective books. We computed the Word Error Rate (WER) of this automatic transcript relative to our reference transcripts obtained from the book texts. The speakers in the corpus were ranked according to the WER of the WSJ model’s transcripts, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other". For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360 respectively accounting for 100h and 360h of the training data. For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech. | | Train.500 | Train.360 | Train.100 | Valid | Test | | ----- | ------ | ----- | ---- | ---- | ---- | | clean | - | 104014 | 28539 | 2703 | 2620| | other | 148688 | - | - | 2864 | 2939 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 
### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
bigscience-data/roots_es_uncorpus
2022-12-12T10:59:42.000Z
[ "language:es", "license:cc-by-4.0", "region:us" ]
bigscience-data
null
null
null
0
4
--- language: es license: cc-by-4.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_es_uncorpus # uncorpus - Dataset uid: `uncorpus` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 2.8023 % of total - 10.7390 % of ar - 5.7970 % of fr - 9.7477 % of es - 2.0417 % of en - 1.2540 % of zh ### BigScience processing steps #### Filters applied to: ar - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
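The filter names listed above suggest their behaviour; a minimal sketch of what each plausibly does (the exact ROOTS implementations live in the BigScience preprocessing repositories — the logic below is an assumption based on the names alone):

```python
# Assumed semantics of the three listed filters: exact-duplicate removal,
# empty-document removal, and a minimum-size threshold in UTF-8 bytes
# (1024 for es/fr/en/zh here, 300 for ar).

def dedup_document(docs):
    """Drop exact-duplicate documents, keeping the first occurrence."""
    seen = set()
    out = []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            out.append(doc)
    return out

def filter_remove_empty_docs(docs):
    """Drop documents that are empty or whitespace-only."""
    return [doc for doc in docs if doc.strip()]

def filter_small_docs_bytes(docs, min_bytes):
    """Drop documents smaller than min_bytes when UTF-8 encoded."""
    return [doc for doc in docs if len(doc.encode("utf-8")) >= min_bytes]

docs = ["", "corto", "x" * 1500, "x" * 1500, "   "]
docs = dedup_document(docs)
docs = filter_remove_empty_docs(docs)
docs = filter_small_docs_bytes(docs, 1024)
assert docs == ["x" * 1500]
```

Measuring size in encoded bytes rather than characters matters for non-Latin scripts, where one character can occupy several bytes.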
AndresPitta/sg-reports_labeled
2022-10-25T10:08:57.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en-US", "license:unknown", "region:us" ]
AndresPitta
null
null
null
0
4
--- annotations_creators: - expert-generated language_creators: - machine-generated language: - en-US license: - unknown multilinguality: - monolingual pretty_name: Gender language in the reports of the secretary general 2020-2021 size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact: Andrés Pitta: andres.pitta@un.org** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### 
Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
strombergnlp/danfever
2022-10-25T21:42:40.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0", "...
strombergnlp
\
@inproceedings{norregaard-derczynski-2021-danfever, title = "{D}an{FEVER}: claim verification dataset for {D}anish", author = "N{\o}rregaard, Jeppe and Derczynski, Leon", booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)", month = may # " 31--2 " # jun, year = "2021", address = "Reykjavik, Iceland (Online)", publisher = {Link{\"o}ping University Electronic Press, Sweden}, url = "https://aclanthology.org/2021.nodalida-main.47", pages = "422--428", abstract = "We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.", }
null
2
4
--- annotations_creators: - expert-generated language_creators: - found language: - da license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - fact-checking - natural-language-inference paperswithcode_id: danfever pretty_name: DanFEVER tags: - knowledge-verification --- # Dataset Card for DanFEVER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [https://github.com/StrombergNLP/danfever](https://github.com/StrombergNLP/danfever) - **Repository:** [https://stromberg.ai/publication/danfever/](https://stromberg.ai/publication/danfever/) - **Paper:** [https://aclanthology.org/2021.nodalida-main.47/](https://aclanthology.org/2021.nodalida-main.47/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk) - **Size of downloaded dataset files:** 2.82 MiB - **Size of the generated dataset:** 2.80 MiB - **Total amount of disk used:** 5.62 MiB ### Dataset Summary We present a 
dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language. ### Supported Tasks and Leaderboards This dataset supports the FEVER task, but in Danish. * PwC leaderboard: [Fact Verification on DanFEVER](https://paperswithcode.com/sota/fact-verification-on-danfever) ### Languages This dataset is in Danish; the BCP-47 code is `da_DK`. ## Dataset Structure ### Data Instances ``` { 'id': '0', 'claim': 'Den 31. oktober 1920 opdagede Walter Baade kometen (944) Hidalgo i det ydre solsystem.', 'label': 0, 'evidence_extract': '(944) Hidalgo (oprindeligt midlertidigt navn: 1920 HZ) er en mørk småplanet med en diameter på ca. 50 km, der befinder sig i det ydre solsystem. Objektet blev opdaget den 31. oktober 1920 af Walter Baade. En asteroide (småplanet, planetoide) er et fast himmellegeme, hvis bane går rundt om Solen (eller en anden stjerne). Pr. 5. maj 2017 kendes mere end 729.626 asteroider og de fleste befinder sig i asteroidebæltet mellem Mars og Jupiter.', 'verifiable': 1, 'evidence': 'wiki_26366, wiki_12289', 'original_id': '1' } ``` ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale A dump of the Danish Wikipedia of 13 February 2020 was stored, as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites. ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? The source language comes from Wikipedia contributors and editors and from dictionary contributors and editors.
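The `evidence` field in the data instance shown above is a comma-separated string of wiki page identifiers; a small helper (hypothetical — not part of any dataset loader) can turn it into a list:

```python
# Split a DanFEVER-style evidence string such as 'wiki_26366, wiki_12289'
# into individual wiki IDs. The helper name is ours, for illustration only.

def parse_evidence(evidence: str) -> list[str]:
    """Return the individual wiki identifiers from an evidence string."""
    return [part.strip() for part in evidence.split(",") if part.strip()]

sample = {
    "id": "0",
    "label": 0,
    "verifiable": 1,
    "evidence": "wiki_26366, wiki_12289",
}

assert parse_evidence(sample["evidence"]) == ["wiki_26366", "wiki_12289"]
```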
### Annotations #### Annotation process Detailed in [this paper](http://www.derczynski.com/papers/danfever.pdf). #### Who are the annotators? The annotators are native Danish speakers and master's students of IT; two female, two male, ages 25-35. ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences. ### Discussion of Biases The data is drawn from relatively formal topics, so models trained on it may perform poorly outside these areas. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin. ### Citation Information Refer to this work as: > Nørregaard and Derczynski (2021). "DanFEVER: claim verification dataset for Danish", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa). Bibliographic reference: ``` @inproceedings{norregaard-derczynski-2021-danfever, title = "{D}an{FEVER}: claim verification dataset for {D}anish", author = "N{\o}rregaard, Jeppe and Derczynski, Leon", booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)", year = "2021", publisher = {Link{\"o}ping University Electronic Press, Sweden}, url = "https://aclanthology.org/2021.nodalida-main.47", pages = "422--428" } ```
gusevski/factrueval2016
2022-04-29T20:34:48.000Z
[ "arxiv:2005.00614", "region:us" ]
gusevski
null
null
null
0
4
# Dataset Card for FactRuEval-2016 ## Dataset Description - **Point of Contact:** [Guskov Sergey](https://gusevski.com) ### Dataset Summary Evaluation of [Named Entity Recognition](https://www.dialog-21.ru/media/3430/starostinaetal.pdf) and Fact Extraction Systems for Russian. ### Supported Tasks and Leaderboards For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`). - `token-classification`: The dataset can be used to train a model for [NER], which consists of [Token Classification]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name). ### Languages Russian (`ru`). ## Dataset Structure ### Data Instances Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples. ``` { 'data': [{'id':'', 'tokens':[], 'ner_tags':[]},...], ... } ``` Provide any additional information that is not covered in the other sections about the data here. In particular, describe any relationships between data points and whether these relationships are made explicit. ### Data Fields List and describe the fields present in the dataset.
Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points. - `id`: order id - `tokens`: list of tokens - `ner_tags`: list of NER tags ### Data Splits Describe and name the splits in the dataset if there are more than one. Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example: | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Input Sentences | | | | | Average Sentence Length | | | | ## Dataset Creation ### Curation Rationale What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together? ### Source Data This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...) #### Initial Data Collection and Normalization Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process. If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name). If the data was modified or normalized after being collected (e.g.
if the data is word-tokenized), describe the process and the tools used. #### Who are the source language producers? State whether the data was produced by humans or machine-generated. Describe the people or systems who originally created the data. If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. Describe other people represented or mentioned in the data. Where possible, link to references for the information. ### Annotations If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs. #### Annotation process If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes. #### Who are the annotators? If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine-generated. Describe the people or systems who originally created the annotations and their selection criteria if applicable. If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown.
See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. ### Personal and Sensitive Information State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data). State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). If efforts were made to anonymize the data, describe the anonymization process. ## Considerations for Using the Data ### Social Impact of Dataset Please discuss some of the ways you believe the use of this dataset will impact society. The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here. ### Discussion of Biases Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact. For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic. If analyses have been run quantifying these biases, please add brief summaries and links to the studies here. ### Other Known Limitations If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here. ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information MIT
NazaGara/wikiner-es
2022-08-14T15:01:57.000Z
[ "region:us" ]
NazaGara
Dataset used to train a NER model
@inproceedings{, title = "", author = "Garagiola, Nazareno", year = "2022", url = "" }
null
0
4
--- annotations_creators: - automatic language_creators: - found languages: - es-AR licenses: - cc0-1.0 multilinguality: - monolingual paperswithcode_id: pretty_name: wikiner size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition --- # Dataset Card for wikiner ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Learning multilingual named entity recognition from Wikipedia](https://doi.org/10.1016/j.artint.2012.03.006) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [NazaGara](mailto:ngaragiola430@mi.unc.edu.ar) ### Dataset Summary Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
### Supported Tasks and Leaderboards Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on a language other than English. After 1995, NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used. - `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data. This dataset was used to train a Spanish NER model using [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased). ### Languages The only supported language is Spanish (`es`). ## Dataset Structure ### Data Fields The dictionary to map the id to the label names is: { 0: 'O', 1: 'B-PER', 2: 'I-PER', 3: 'B-ORG', 4: 'I-ORG', 5: 'B-LOC', 6: 'I-LOC', 7: 'B-MISC', 8: 'I-MISC' } ### Data Splits The only split is the train split.
Number of examples = 128355 ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? Created by Nothman et al. at 2013. ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
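As a usage sketch for the id→label dictionary given in the Data Fields section above (the sample tokens below are invented for illustration):

```python
# Decode integer `ner_tags` ids into BIO label names using the mapping
# from the card. The example sentence is hypothetical.

ID2LABEL = {
    0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC", 7: "B-MISC", 8: "I-MISC",
}

def decode_tags(ner_tags):
    """Map a list of integer tag ids to their BIO label names."""
    return [ID2LABEL[t] for t in ner_tags]

tokens = ["Wolff", "juega", "en", "Real", "Madrid"]
ner_tags = [1, 0, 0, 3, 4]

assert decode_tags(ner_tags) == ["B-PER", "O", "O", "B-ORG", "I-ORG"]
```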
wikimedia/wit_base
2022-11-04T15:09:33.000Z
[ "task_categories:image-to-text", "task_categories:text-retrieval", "task_ids:image-captioning", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "source_datasets:extended|wikipedia", "langua...
wikimedia
null
null
null
13
4
--- annotations_creators: - machine-generated language_creators: - found language: - af - an - ar - arz - ast - az - azb - ba - bar - be - bg - bn - br - bs - ca - ce - ceb - ckb - cs - cv - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gl - hi - hr - hsb - ht - hu - hy - ia - id - io - is - it - iw - ja - jv - ka - kk - kn - ko - la - lah - lb - lmo - lt - lv - mg - mk - ml - mn - mr - ms - my - nan - nds - ne - nl - nn - 'no' - nv - oc - pa - pl - pt - qu - ro - ru - sco - si - sk - sl - sq - sr - sv - sw - ta - te - tg - th - tr - tt - uk - ur - uz - vec - vi - vo - war - xmf - yue - zh license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - 1M<n<10M source_datasets: - original - extended|wikipedia task_categories: - image-to-text - text-retrieval task_ids: - image-captioning paperswithcode_id: wit pretty_name: Wikipedia-based Image Text language_bcp47: - af - an - ar - arz - ast - az - azb - ba - bar - be - be-tarask - bg - bn - br - bs - ca - ce - ceb - ckb - cs - cv - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gl - hi - hr - hsb - ht - hu - hy - ia - id - io - is - it - iw - ja - jv - ka - kk - kn - ko - la - lah - lb - lmo - lt - lv - mg - mk - ml - mn - mr - ms - my - nan - nds - ne - nl - nn - 'no' - nv - oc - pa - pl - pt - qu - ro - ru - sco - si - sk - sl - sq - sr - sr-Latn - sv - sw - ta - te - tg - th - tr - tt - uk - ur - uz - vec - vi - vo - war - xmf - yue - zh - zh-TW tags: - text-image-retrieval --- # Dataset Card for WIT ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation 
Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit) - **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913) - **Leaderboard:** [WIT leaderboard](https://paperswithcode.com/sota/text-image-retrieval-on-wit) and [WIT Kaggle competition](https://www.kaggle.com/competitions/wikipedia-image-caption/leaderboard) - **Point of Contact:** [Miriam Redi](mailto:miriam@wikimedia.org) ### Dataset Summary Wikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset. From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/): > The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research. > > The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files. 
> > Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images. > [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset. **Note**: Compared to [Google's version](https://huggingface.co/datasets/google/wit), which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes. ### Supported Tasks and Leaderboards - `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image. - `text-retrieval`: The goal in this task is to build a model that retrieves the text (`caption_title_and_reference_description`) closest to an image. The leaderboard for this task can be found [here](https://paperswithcode.com/sota/text-image-retrieval-on-wit). This task also has a competition on [Kaggle](https://www.kaggle.com/c/wikipedia-image-caption). 
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption. ### Languages The dataset contains examples from all Wikipedia languages. ## Dataset Structure ### Data Instances Each instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia. ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x225 at 0x7F88F3876358>, 'image_url': 'https://upload.wikimedia.org/wikipedia/commons/8/8b/Scolopendra_gigantea.jpg', 'embedding': [1.4784087, 2.8710432, 0.0, 0.51603067, ..., 10.266883, 0.51142216, 0.0, 2.3464653], 'metadata_url': 'http://commons.wikimedia.org/wiki/File:Scolopendra_gigantea.jpg', 'original_height': 3000, 'original_width': 4000, 'mime_type': 'image/jpeg', 'caption_attribution_description': 'English: Puerto Rican Giant Centipede, Scolopendra gigantea; Vieques, Puerto Rico Slovenčina: Stonožka obrovská, Scolopendra gigantea; Vieques, Portoriko', 'wit_features': { 'language': ['ro', 'vi', 'sk', ..., 'nl', 'th', 'lv'], 'page_url': ['https://ro.wikipedia.org/wiki/Scolopendra_gigantea', 'https://vi.wikipedia.org/wiki/Scolopendra_gigantea', 'https://sk.wikipedia.org/wiki/Scolopendra_gigantea', ..., 'https://nl.wikipedia.org/wiki/Scolopendra_gigantea', 'https://th.wikipedia.org/wiki/%E0%B8%95%E0%B8%B0%E0%B8%82%E0%B8%B2%E0%B8%9A%E0%B8%A2%E0%B8%B1%E0%B8%81%E0%B8%A9%E0%B9%8C%E0%B8%82%E0%B8%B2%E0%B9%80%E0%B8%AB%E0%B8%A5%E0%B8%B7%E0%B8%AD%E0%B8%87%E0%B9%80%E0%B8%9B%E0%B8%A3%E0%B8%B9', 'https://lv.wikipedia.org/wiki/Skolopendru_dzimta'], 'attribution_passes_lang_id': [True, True, True, ..., True, True, True], 'caption_alt_text_description': [None, None, None, ..., 'Scolopendra gigantea', None, 'Milzu skolopendra (Scolopendra gigantea)'], 'caption_reference_description': [None, None, None, ..., None, None, 'Milzu skolopendra (Scolopendra gigantea)'], 
'caption_title_and_reference_description': [None, 'Scolopendra gigantea [SEP] ', None, ..., 'Scolopendra gigantea [SEP] ', None, 'Skolopendru dzimta [SEP] Milzu skolopendra (Scolopendra gigantea)'], 'context_page_description': ['Scolopendra gigantea este un miriapod din clasa Chilopoda, fiind cel mai mare reprezentant al genului Scolopendra. Adultul poate atinge o lungime de 26 cm, uneori depășind 30 cm. Această specie habitează în regiunile de nord și de vest a Americii de Sud, pe insulele Trinidad, insulele Virgine, Jamaica Hispaniola ș.a. Localnicii denumesc scolopendra chilopodul gigant galben și chilopodul gigant amazonian.', 'Scolopendra gigantea là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26 cm và có thể vượt quá 30 cm. Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', 'Scolopendra gigantea, starší slovenský nazov: štípavica veľká, je živočích z rodu Scolopendra, s veľkosťou do 30 cm.', ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', 'ตะขาบยักษ์ขาเหลืองเปรู หรือ ตะขาบยักษ์อเมซอน เป็นตะขาบชนิดที่มีขนาดใหญ่ที่สุดในสกุล Scolopendra โดยปกติเมื่อโตเต็มที่จะยาว 26 เซนติเมตร แต่บางครั้งก็สามารถโตได้ถึง 30 เซนติเมตร ตะขาบชนิดนี้อาศัยอยู่ทางแถบเหนือและตะวันตกของทวีปอเมริกาใต้ และตามเกาะแก่งของประเทศตรินิแดดและจาไมกา เป็นสัตว์กินเนื้อ โดยกินจิ้งจก, กบ, นก, หนู และแม้แต่ค้างคาวเป็นอาหาร และขึ้นชื่อในเรื่องความดุร้าย', 'Skolpendru dzimta pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. 
Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'], 'context_section_description': [None, 'Scolopendra gigantea (còn được gọi là Rết chân vàng khổng lồ Peru và Rết khổng lồ Amazon) là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26\xa0cm (10\xa0in) và có thể vượt quá 30\xa0cm (12\xa0in). Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', None, ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', None, 'Skolpendru dzimta (Scolopendridae) pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'], 'hierarchical_section_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'], 'is_main_image': [True, True, True, ..., True, True, True], 'page_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'], 'section_title': [None, None, None, ..., None, None, None] } } ``` **Note**: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using [this script](wit_base/blob/main/scripts/wit.py). Additionally, 120 examples from the original files have incorrectly formatted one or more of the following fields: `original_height`, `original_width`, `mime_type` and `caption_attribution_description`. The fixed versions of these examples that were used in the generation script can be found [here](wit_base/blob/main/scripts/corrected_examples.py). 
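The `embedding` vector shown in the instance above can be compared across images with cosine similarity, e.g. as a building block for the text-image retrieval task. A minimal sketch with NumPy — the toy 4-dimensional vectors below are made up for illustration and stand in for the real 2048-dimensional ResNet-50 signatures:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two image embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the 2048-dimensional `embedding` field values.
emb_a = [1.5, 2.9, 0.0, 0.5]
emb_b = [1.5, 2.9, 0.0, 0.5]   # identical embedding -> similarity ~1.0
emb_c = [0.0, 0.1, 3.0, 0.0]   # very different embedding -> similarity near 0

print(cosine_similarity(emb_a, emb_b))
print(cosine_similarity(emb_a, emb_c))
```

Nearest-neighbour search over such similarities is one simple way to find visually similar images without decoding any image bytes.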
### Data Fields - `image`: A `PIL.Image.Image` object containing the image resized to a width of 300-px while preserving its aspect ratio. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `image_url`: URL to the Wikipedia image - `embedding`: Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a [ResNet-50](https://arxiv.org/abs/1512.03385) neural network trained with [ImageNet](https://www.image-net.org/) data. These embeddings contain rich information about the image content and layout, in a compact form. - `metadata_url`: URL to the Wikimedia page containing the image and the metadata - `original_height`: Original image height before resizing - `original_width`: Original image width before resizing - `mime_type`: MIME type associated with the image - `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias. - `wit_features`: Sequence of captions for the image with language, page URL, information about the page, caption text, etc. - `language`: Language code of the Wikipedia page - `page_url`: URL to the Wikipedia page - `attribution_passes_lang_id`: Compares the `language` field with the attribution language (written in the prefix of the attribution description). - `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers - `caption_reference_description`: This is the caption that is visible on the Wikipedia page directly below the image. 
- `caption_title_and_reference_description`: Concatenation of `page_title` and `caption_reference_description`. - `context_page_description`: Corresponds to the short description of the page. It provides a concise explanation of the scope of the page. - `context_section_description`: Text within the image's section - `hierarchical_section_title`: Hierarchical section's title - `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers. - `page_changed_recently`: [More Information Needed] - `page_title`: Wikipedia page's title - `section_title`: Section's title <p align='center'> <img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt="Half Dome" /> </br> <b>Figure: WIT annotation example. </b> </p> Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913) ### Data Splits All data is held in `train` split, with a total of 6477255 examples. ## Dataset Creation ### Curation Rationale From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/): > The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. > Getting easy access to the image files is crucial for participants to successfully develop competitive models. > With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form. 
### Source Data #### Initial Data Collection and Normalization From the [paper, section 3.1](https://arxiv.org/abs/2103.01913): > We started with all Wikipedia content pages (i.e., ignoring other pages that have discussions, comments and such). These number about ~124M pages across 279 languages. #### Who are the source language producers? Text was extracted from Wikipedia. ### Annotations #### Annotation process WIT was constructed using an automatic process. However it was human-validated. From the [paper, section 3.7](https://arxiv.org/abs/2103.01913): > To further verify the quality of the WIT dataset we performed a study using (crowd-sourced) human annotators. As seen in Fig. 3, we asked raters to answer 3 questions. Given an image and the page title, raters first evaluate the quality of the attribution description and reference description in the first two questions (order randomized). The third question understands the contextual quality of these text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes the image, "Maybe" if it is sufficiently explanatory and "No" if it is irrelevant or the image is inappropriate. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/#FN1): > For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the [RetinaFace](https://arxiv.org/abs/1905.00641) detector. 
In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are [candidate for deletion](https://commons.wikimedia.org/wiki/Commons:Deletion_requests) on Commons from the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the [paper, section 3.4](https://arxiv.org/abs/2103.01913): > Lastly we found that certain image-text pairs occurred very frequently. These were often generic images that did not have much to do with the main article page. Common examples included flags, logos, maps, insignia and such. To prevent biasing the data, we heavily under-sampled all such images ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Miriam Redi, Fabian Kaelin and Tiziano Piccardi. ### Licensing Information [CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information ```bibtex @article{srinivasan2021wit, title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning}, author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc}, journal={arXiv preprint arXiv:2103.01913}, year={2021} } ``` ### Contributions Thanks to [@nateraw](https://github.com/nateraw), [yjernite](https://github.com/yjernite) and [mariosasko](https://github.com/mariosasko) for adding this dataset.
pauli31/czech-subjectivity-dataset
2022-07-01T15:31:40.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:cs-CZ", "license:cc-by-nc-sa-4.0", "arxiv:2204.13915", "region:us" ]
pauli31
null
null
null
1
4
--- annotations_creators: [] language_creators: [] language: - cs-CZ license: - cc-by-nc-sa-4.0 multilinguality: - monolingual pretty_name: Czech Subjectivity Dataset size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for Czech Subjectivity Dataset ### Dataset Summary Czech subjectivity dataset (Subj-CS) of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper for a description: https://arxiv.org/abs/2204.13915 ### Github https://github.com/pauli31/czech-subjectivity-dataset ### Supported Tasks and Leaderboards Subjectivity Analysis ### Languages Czech ### Data Instances train/dev/test ### Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.](https://creativecommons.org/licenses/by-nc-sa/4.0/) ### Citation Information If you use our dataset or software for academic research, please cite our [paper](https://arxiv.org/abs/2204.13915) ``` @article{pib2022czech, title={Czech Dataset for Cross-lingual Subjectivity Classification}, author={Pavel Přibáň and Josef Steinberger}, year={2022}, eprint={2204.13915}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contact pribanp@kiv.zcu.cz ### Contributions Thanks to [@pauli31](https://github.com/pauli31) for adding this dataset.
Ukhushn/home-depot
2022-10-25T10:20:53.000Z
[ "task_categories:sentence-similarity", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:afl-3.0", "region:us" ]
Ukhushn
null
null
null
0
4
--- language: - en language_bcp47: - en-US license: - afl-3.0 annotations_creators: - no-annotation language_creators: - found multilinguality: - monolingual pretty_name: Ukhushn/home-depot size_categories: - 10K<n<100K source_datasets: [] task_categories: - sentence-similarity task_ids: [] --- # Dataset Card for Ukhushn/home-depot
filwsyl/video_tags
2022-10-25T10:13:17.000Z
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-nist", "language:enx", "license:mit", "region:us" ]
filwsyl
The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000 images per class. There are 60,000 training images and 10,000 test images.
@article{lecun2010mnist, title={MNIST handwritten digit database}, author={LeCun, Yann and Cortes, Corinna and Burges, CJ}, journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist}, volume={2}, year={2010} }
null
0
4
--- annotations_creators: - expert-generated language_creators: - found language: - enx license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-nist task_categories: - image-classification task_ids: - multi-class-image-classification paperswithcode_id: mnist pretty_name: MNIST --- # Dataset Card for MNIST ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://yann.lecun.com/exdb/mnist/ - **Repository:** - **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class. 
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets). ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist). ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its label: ``` { 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>, 'label': 5 } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `label`: an integer between 0 and 9 representing the digit. ### Data Splits The data is split into training and test sets. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images. ## Dataset Creation ### Curation Rationale The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal efforts on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. 
In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students. The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set. ### Source Data #### Initial Data Collection and Normalization The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field. #### Who are the source language producers? Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable. ### Annotations #### Annotation process The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them. #### Who are the annotators? Same as the source data creators. 
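The centering step described above can be sketched with NumPy. This is an illustrative reconstruction of the idea (compute the pixel center of mass, then translate the glyph so that point sits at the center of a 28x28 field), not the curators' original code:

```python
import numpy as np

def center_by_mass(glyph: np.ndarray, size: int = 28) -> np.ndarray:
    """Place a grayscale glyph in a size x size field so that the
    center of mass of its pixel intensities sits at the field center."""
    field = np.zeros((size, size), dtype=float)
    h, w = glyph.shape
    field[:h, :w] = glyph
    ys, xs = np.indices(field.shape)
    total = field.sum()
    cy = (ys * field).sum() / total   # row coordinate of the center of mass
    cx = (xs * field).sum() / total   # column coordinate of the center of mass
    # Integer shift that moves the center of mass to the field center.
    shift = (int(round((size - 1) / 2 - cy)), int(round((size - 1) / 2 - cx)))
    # np.roll wraps around; a roughly centered 20x20 glyph in a 28x28 field
    # leaves enough margin that no real pixel actually wraps.
    return np.roll(field, shift, axis=(0, 1))

# A 20x20 "glyph" whose only mass is one corner pixel ends up centered.
glyph = np.zeros((20, 20))
glyph[0, 0] = 255.0
centered = center_by_mass(glyph)
print(np.argwhere(centered > 0))  # the single pixel now sits at (14, 14)
```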
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Chris Burges, Corinna Cortes and Yann LeCun ### Licensing Information MIT Licence ### Citation Information ``` @article{lecun2010mnist, title={MNIST handwritten digit database}, author={LeCun, Yann and Cortes, Corinna and Burges, CJ}, journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist}, volume={2}, year={2010} } ``` ### Contributions Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset.
domenicrosati/QA2D
2022-10-25T10:13:31.000Z
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:machine-generated", "annotations_creators:crowdsourced", "annotations_creators:found", "language_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categorie...
domenicrosati
null
null
null
1
4
--- annotations_creators: - machine-generated - crowdsourced - found language_creators: - machine-generated - crowdsourced language: [] license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original - extended|squad - extended|race - extended|newsqa - extended|qamr - extended|movieQA task_categories: - text2text-generation task_ids: - text-simplification pretty_name: QA2D --- # Dataset Card for QA2D ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://worksheets.codalab.org/worksheets/0xd4ebc52cebb84130a07cbfe81597aaf0/ - **Repository:** https://github.com/kelvinguu/qanli - **Paper:** https://arxiv.org/abs/1809.02922 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. 
Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets. This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages en ## Dataset Structure ### Data Instances See below. ### Data Fields - `dataset`: lowercased name of dataset (movieqa, newsqa, qamr, race, squad) - `example_uid`: unique id of example within dataset (there are examples with the same uids from different datasets, so the combination of dataset + example_uid should be used for unique indexing) - `question`: tokenized (space-separated) question from the source QA dataset - `answer`: tokenized (space-separated) answer span from the source QA dataset - `turker_answer`: tokenized (space-separated) answer sentence collected from MTurk - `rule-based`: tokenized (space-separated) answer sentence, generated by the rule-based model ### Data Splits | Dataset Split | Number of Instances in Split | | ------------- |----------------------------- | | Train | 60,710 | | Dev | 10,344 | ## Dataset Creation ### Curation Rationale This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 
95% of question answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets. ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information @article{DBLP:journals/corr/abs-1809-02922, author = {Dorottya Demszky and Kelvin Guu and Percy Liang}, title = {Transforming Question Answering Datasets Into Natural Language Inference Datasets}, journal = {CoRR}, volume = {abs/1809.02922}, year = {2018}, url = {http://arxiv.org/abs/1809.02922}, eprinttype = {arXiv}, eprint = {1809.02922}, timestamp = {Fri, 05 Oct 2018 11:34:52 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1809-02922.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
Sultannn/id_recipe
2022-09-18T09:24:13.000Z
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:mit", "region...
Sultannn
null
null
null
0
4
--- annotations_creators: - no-annotation language_creators: - found language: - id license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation - text-generation task_ids: - language-modeling paperswithcode_id: null pretty_name: Indonesian Recipe --- # Dataset Card for id_recipe ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Indonesian-recipe](https://github.com/sultanbst123/Hugging-Face-indo) - **Repository:** [Indonesian-recipe](https://github.com/sultanbst123/Hugging-Face-indo) - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** [Sultan](sultansyach7@gmail.com) ### Dataset Summary Indonesian foods are well-known for their rich taste. There are many spices used even for daily foods. This dataset may give insight on how to prepare Indonesian food. id_recipe is an Indonesian Food Recipe dataset. The dataset contains >10000 Indonesian Recipe. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ### Data Splits Here are the number of examples | name |n.examples| |-----------------|--------: | | train | 14858 | | val | 783 | ### Source Data [here](https://www.kaggle.com/datasets/canggih/indonesian-food-recipes) ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information MIT License ### Citation Information [N/A] ### Contributions Thanks to [@sultan](https://github.com/sultanbst123) for adding this dataset
HuggingFaceM4/ActivitiyNet_Captions
2022-10-23T05:50:46.000Z
[ "task_ids:closed-domain-qa", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10k<n<100K", "source_datasets:original", "language:en", "license:other", "arxiv:1705.00754", "region:us" ]
HuggingFaceM4
null
null
null
1
4
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en license: - other multilinguality: - monolingual pretty_name: ActivityNet Captions size_categories: - 10K<n<100K source_datasets: - original task_categories: - video-captioning task_ids: - closed-domain-qa --- # Dataset Card for ActivityNet Captions ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://cs.stanford.edu/people/ranjaykrishna/densevid/ - **Paper:** https://arxiv.org/abs/1705.00754 ### Dataset Summary The ActivityNet Captions dataset connects videos to a series of temporally annotated sentence descriptions. Each sentence covers a unique segment of the video, describing multiple events that occur. These events may occur over very long or short periods of time and are not limited in any capacity, allowing them to co-occur. On average, each of the 20k videos contains 3.65 temporally localized sentences, resulting in a total of 100k sentences. We find that the number of sentences per video follows a relatively normal distribution. Furthermore, as the video duration increases, the number of sentences also increases. 
Each sentence has an average length of 13.48 words, which is also normally distributed. You can find more details of the dataset under the ActivityNet Captions Dataset section, and under supplementary materials in the paper. ### Languages The captions in the dataset are in English. ## Dataset Structure ### Data Fields - `video_id`: `str` unique identifier for the video - `video_path`: `str` Path to the video file - `duration`: `float32` Duration of the video - `captions_starts`: `List_float32` List of timestamps denoting the time at which each caption starts - `captions_ends`: `List_float32` List of timestamps denoting the time at which each caption ends - `en_captions`: `list_str` List of English captions describing parts of the video ### Data Splits | |train |validation| test | Overall | |-------------|------:|---------:|------:|------:| |# of videos|10,009 |4,917 |4,885 |19,811 | ### Annotations Quoting [ActivityNet Captions' paper](https://arxiv.org/abs/1705.00754): \ "Each annotation task was divided into two steps: (1) Writing a paragraph describing all major events happening in the videos in a paragraph, with each sentence of the paragraph describing one event, and (2) Labeling the start and end time in the video in which each sentence in the paragraph event occurred." ### Who annotated the dataset? Amazon Mechanical Turk annotators ### Personal and Sensitive Information Nothing specifically mentioned in the paper. 
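Because `captions_starts` and `captions_ends` are parallel lists over the same events as `en_captions`, selecting the captions active at a given timestamp is a simple scan; events may overlap, so several captions can match at once. A sketch with made-up values (not taken from the dataset):

```python
from typing import List

def captions_at(t: float, starts: List[float], ends: List[float],
                captions: List[str]) -> List[str]:
    """Return every caption whose [start, end] interval covers time t (seconds)."""
    return [c for s, e, c in zip(starts, ends, captions) if s <= t <= e]

# Hypothetical caption annotations for one video.
starts = [0.0, 12.5, 40.0]
ends = [15.0, 55.0, 70.0]
captions = ["A man walks onto a court.", "He serves the ball.", "The crowd cheers."]

print(captions_at(13.0, starts, ends, captions))
# -> ['A man walks onto a court.', 'He serves the ball.']
```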
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @inproceedings{krishna2017dense, title={Dense-Captioning Events in Videos}, author={Krishna, Ranjay and Hata, Kenji and Ren, Frederic and Fei-Fei, Li and Niebles, Juan Carlos}, booktitle={International Conference on Computer Vision (ICCV)}, year={2017} } ``` ### Contributions Thanks to [@leot13](https://github.com/leot13) for adding this dataset.
bigscience-data/roots_en_the_pile_europarl
2022-12-12T11:02:37.000Z
[ "language:en", "license:mit", "region:us" ]
bigscience-data
null
null
null
0
4
--- language: en license: mit extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_en_the_pile_europarl # the_pile_europarl - Dataset uid: `the_pile_europarl` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 0.1278 % of total - 0.4112 % of fr - 1.5555 % of pt - 0.7511 % of es - 0.1503 % of en ### BigScience processing steps #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: pt - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
bigscience-data/roots_en_ted_talks_iwslt
2022-12-12T11:02:42.000Z
[ "language:en", "license:cc-by-nc-nd-4.0", "region:us" ]
bigscience-data
null
null
null
0
4
--- language: en license: cc-by-nc-nd-4.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_en_ted_talks_iwslt # WIT Ted Talks - Dataset uid: `ted_talks_iwslt` ### Description The Web Inventory Talk is a collection of the original TED talks and their translated versions. The translations are available in 109+ languages, though the distribution is not uniform. ### Homepage https://github.com/huggingface/datasets/blob/master/datasets/ted_talks_iwslt/README.md ### Licensing - open license - cc-by-nc-nd-4.0: Creative Commons Attribution Non Commercial No Derivatives 4.0 International TED makes its collection of video recordings and transcripts of talks available under the Creative Commons BY-NC-ND license. WIT3 acknowledges the authorship of TED talks (BY condition) and does not redistribute transcripts for commercial purposes (NC). As regards the integrity of the work (ND), WIT3 only changes the format of the container, while preserving the original contents. WIT3 aims to support research on human language processing as well as the diffusion of TED Talks! 
### Speaker Locations - Southern Europe - Italy ### Sizes - 0.0305 % of total - 0.0736 % of ar - 0.2002 % of pt - 0.0128 % of zh - 0.2236 % of vi - 0.0330 % of fr - 0.0545 % of es - 0.0122 % of en - 0.3704 % of id - 0.0373 % of indic-hi - 0.0330 % of indic-ta - 0.1393 % of indic-mr - 0.0305 % of ca - 0.1179 % of indic-ur - 0.0147 % of indic-bn - 0.0240 % of indic-ml - 0.0244 % of indic-te - 0.0503 % of indic-gu - 0.0211 % of indic-kn - 0.0274 % of eu - 0.0023 % of indic-as - 0.0001 % of indic-pa ### BigScience processing steps #### Filters applied to: ar - dedup_document - dedup_template_soft - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: pt - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: zh - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: vi - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: id - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-hi - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-ta - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-mr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: ca - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: indic-ur - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-bn - dedup_document - filter_remove_empty_docs - 
filter_small_docs_bytes_300 #### Filters applied to: indic-ml - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-te - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-gu - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-kn - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: eu - dedup_document - filter_remove_empty_docs #### Filters applied to: indic-as - dedup_document - filter_remove_empty_docs #### Filters applied to: indic-pa - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300
bigscience-data/roots_en_royal_society_corpus
2022-12-12T11:02:48.000Z
[ "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
bigscience-data
null
null
null
0
4
--- language: en license: cc-by-nc-sa-4.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_en_royal_society_corpus # Royal Society Corpus - Dataset uid: `royal_society_corpus` ### Description The Royal Society Corpus (RSC) 6.0 Open is based on the first centuries of the Philosophical Transactions of the Royal Society of London from its beginning in 1665 to 1920. It includes all publications of the journal written in English and containing running text. The Philosophical Transactions was the first periodical of scientific writing in England. Founded in 1665 by Henry Oldenburg, the first secretary of the Royal Society, it initially contained excerpts of letters of his scientific correspondence, reviews and summaries of recently-published books, and accounts of observations and experiments. ### Homepage https://fedora.clarin-d.uni-saarland.de/rsc_v6/index.html ### Licensing - public domain - cc0-1.0: Creative Commons Zero v1.0 Universal ### Speaker Locations - Northern Europe - United Kingdom ### Sizes - 0.0334 % of total - 0.1808 % of en ### BigScience processing steps #### Filters applied to: en - dedup_document - dedup_template_soft - filter_remove_empty_docs - filter_small_docs_bytes_1024
bigscience-data/roots_en_wikibooks
2022-12-12T11:03:03.000Z
[ "language:en", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
null
0
4
--- language: en license: cc-by-sa-3.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_en_wikibooks # wikibooks_filtered - Dataset uid: `wikibooks_filtered` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 0.0897 % of total - 0.2591 % of en - 0.0965 % of fr - 0.1691 % of es - 0.2834 % of indic-hi - 0.2172 % of pt - 0.0149 % of zh - 0.0279 % of ar - 0.1374 % of vi - 0.5025 % of id - 0.3694 % of indic-ur - 0.5744 % of eu - 0.0769 % of ca - 0.0519 % of indic-ta - 0.1470 % of indic-mr - 0.0751 % of indic-te - 0.0156 % of indic-bn - 0.0476 % of indic-ml - 0.0087 % of indic-pa ### BigScience processing steps #### Filters applied to: en - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_en - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: fr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_fr - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: es - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_es - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: indic-hi - dedup_document - filter_remove_empty_docs - split_sentences_indic-hi - dedup_template_soft - filter_small_docs_bytes_300 #### Filters applied to: pt - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_pt - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: zh - 
filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_zhs - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: ar - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ar - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: vi - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_vi - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: id - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_id - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-ur - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - dedup_template_soft - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: eu - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_eu - dedup_template_soft - replace_newline_with_space #### Filters applied to: ca - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ca - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: indic-ta - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ta - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-mr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-mr - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: 
indic-te - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-te - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-bn - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-bn - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-ml - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ml - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-pa - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-pa - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300
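The filter names in these ROOTS cards are largely self-describing. As an unofficial illustration (an assumption on my part — the real implementations live in BigScience's data preparation code and may differ in detail), `dedup_document` and `filter_small_docs_bytes_1024` plausibly behave like:

```python
# Unofficial sketch of two of the filters named above.
# The actual BigScience implementations may differ.

def dedup_document(docs):
    """Keep only the first occurrence of each exact document text."""
    seen = set()
    out = []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            out.append(doc)
    return out

def filter_small_docs_bytes(docs, min_bytes=1024):
    """Drop documents whose UTF-8 encoding is shorter than `min_bytes` bytes."""
    return [doc for doc in docs if len(doc.encode("utf-8")) >= min_bytes]

corpus = ["short", "short", "x" * 2000]
corpus = dedup_document(corpus)           # drops the duplicate "short"
corpus = filter_small_docs_bytes(corpus)  # drops the remaining "short"
```

The per-language byte thresholds above (300 vs. 1024) reflect that the same amount of content takes different numbers of bytes in different scripts.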
statworx/haiku
2022-07-02T13:25:45.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "region:us" ]
statworx
null
null
null
1
4
--- annotations_creators: [] language_creators: [] language: - en license: [] multilinguality: - monolingual pretty_name: Haiku size_categories: - 10K<n<100K source_datasets: [] task_categories: - text-generation task_ids: - language-modeling --- # Dataset Card for Haiku Data
HuggingFaceM4/yttemporal180m
2022-05-24T12:25:22.000Z
[ "license:other", "region:us" ]
HuggingFaceM4
YT-Temporal-180M, a large and diverse dataset of 6 million videos (spanning 180M extracted frames) that covers diverse topics.
@inproceedings{zellersluhessel2021merlot, title={MERLOT: Multimodal Neural Script Knowledge Models}, author={Zellers, Rowan and Lu, Ximing and Hessel, Jack and Yu, Youngjae and Park, Jae Sung and Cao, Jize and Farhadi, Ali and Choi, Yejin}, booktitle={Advances in Neural Information Processing Systems 34}, year={2021} }
null
2
4
--- license: other ---
Rexhaif/ru-med-ner
2022-05-25T20:58:27.000Z
[ "arxiv:2201.06499", "region:us" ]
Rexhaif
null
null
null
1
4
# Dataset Card for ru-med-ner ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Additional Information](#additional-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/pavel-blinov/RuMedBench - **Repository:** https://github.com/pavel-blinov/RuMedBench - **Paper:** https://arxiv.org/abs/2201.06499 - **Leaderboard:** https://github.com/pavel-blinov/RuMedBench - **Point of Contact:** Blinov.P.D@sberbank.ru ### Dataset Summary NER dataset for the Russian language, extracted from medical records. See https://github.com/pavel-blinov/RuMedBench for details. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages - ru-RU ## Dataset Structure ### Data Instances ```javascript {"idx": "2472239.tsv_0", "tokens": ["В", "первый", "же", "день", "применения", "выпила", "5", "таблеток", ",", "проснулась", "ночью", "и", "сон", "как", "отбило", "."], "ner_tags": ["O", "O", "O", "O", "O", "O", "O", "B-Drugform", "O", "B-ADR", "O", "O", "B-ADR", "I-ADR", "I-ADR", "O"]} ``` ### Data Fields - idx: example id - tokens: list of words from the example - ner_tags: NER tags ### Citation Information ``` @misc{blinov2022rumedbench, title={RuMedBench: A Russian Medical Language Understanding Benchmark}, author={Pavel Blinov and Arina Reshetnikova and Aleksandr Nesterov and Galina Zubkova and Vladimir Kokh}, year={2022}, eprint={2201.06499}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
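A common way to consume `tokens`/`ner_tags` pairs like the instance above is to collapse the BIO tags into entity spans. A minimal, illustrative sketch (not part of the official RuMedBench tooling):

```python
def bio_to_spans(tokens, ner_tags):
    """Collapse BIO tags into (entity_type, start, end, text) spans.

    `end` is exclusive, matching Python slicing.
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(ner_tags + ["O"]):  # sentinel flushes the last span
        if tag == "O" or tag.startswith("B-"):
            if label is not None:
                spans.append((label, start, i, " ".join(tokens[start:i])))
                start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return spans

tokens = ["выпила", "5", "таблеток", ",", "проснулась"]
tags = ["O", "O", "B-Drugform", "O", "B-ADR"]
print(bio_to_spans(tokens, tags))
# [('Drugform', 2, 3, 'таблеток'), ('ADR', 4, 5, 'проснулась')]
```

Multi-token entities such as `B-ADR I-ADR I-ADR` come out as a single span; a stray `I-` tag with no preceding `B-` is simply ignored, which is acceptable for a sketch.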
arize-ai/movie_reviews_with_context_drift
2022-07-01T17:26:12.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
arize-ai
null
null
null
0
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual pretty_name: sentiment-classification-reviews-with-drift size_categories: - 10K<n<100K source_datasets: - extended|imdb task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place. 
### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages The text is mainly written in English. ## Dataset Structure ### Data Instances #### default An example of `training` looks as follows: ```json { 'prediction_ts': 1650092416.0, 'age': 44, 'gender': 'female', 'context': 'movies', 'text': "An interesting premise, and Billy Drago is always good as a dangerous nut-bag (side note: I'd love to see Drago, Stephen McHattie and Lance Hendrikson in a flick together; talk about raging cheekbones!). The soundtrack wasn't terrible, either.<br /><br />But the acting--even that of such professionals as Drago and Debbie Rochon--was terrible, the directing worse (perhaps contributory to the former), the dialog chimp-like, and the camera work, barely tolerable. Still, it was the SETS that got a big 10 on my oy-vey scale. I don't know where this was filmed, but were I to hazard a guess, it would be either an open-air museum, or one of those re-enactment villages, where everything is just a bit too well-kept to do more than suggest the real Old West. Okay, so it was shot on a college kid's budget. That said, I could have forgiven one or two of the aforementioned faults. But taken all together, and being generous, I could not see giving it more than three stars.", 'label': 0 } ``` ### Data Fields #### default The data fields are the same among all splits. - `prediction_ts`: a `float` feature. - `age`: an `int` feature. - `gender`: a `string` feature. - `context`: a `string` feature. - `text`: a `string` feature. - `label`: a `ClassLabel` feature, with possible values including negative(0) and positive(1). 
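Since `prediction_ts` is a Unix epoch timestamp in seconds, it can be turned into a readable datetime (interpreting it as UTC is an assumption; the card does not state a timezone):

```python
from datetime import datetime, timezone

# `prediction_ts` from the example instance above, interpreted as
# seconds since the Unix epoch (UTC interpretation is an assumption).
prediction_ts = 1650092416.0
dt = datetime.fromtimestamp(prediction_ts, tz=timezone.utc)
print(dt.isoformat())  # 2022-04-16T07:00:16+00:00
```

This is handy when simulating the drift scenario, since the production split is meant to be replayed in timestamp order.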
### Data Splits | name |training|validation|production | |----------|-------:|---------:|----------:| | default | 9916 | 2479 | 40079 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
laion/laion1B-nolang-aesthetic-tags
2022-05-22T02:09:56.000Z
[ "region:us" ]
laion
null
null
null
1
4
Entry not found
pinecone/yt-transcriptions
2022-05-26T14:47:06.000Z
[ "region:us" ]
pinecone
null
null
null
1
4
Entry not found
mbazaNLP/kinyarwanda-tts-dataset
2023-06-27T08:09:28.000Z
[ "language_creators:Digital Umuganda", "size_categories:3K<n<4K", "size_categories:~6hours", "language:rw", "license:cc-by-4.0", "region:us" ]
mbazaNLP
null
null
null
1
4
--- language: - rw language_creators: - "Digital Umuganda" license: - cc-by-4.0 size_categories: - 3K<n<4K - ~6hours --- # Kinyarwanda TTS dataset The dataset consists of 3992 clips of a Kinyarwanda TTS corpus recorded in a studio by a voice actress; it was collected as part of the Mbaza project. ## Data structure ``` Audio: 3992 single-voice studio recordings by a voice actress Text: CSV with audio name and corresponding written text ``` ## Language The dataset is in the Kinyarwanda language. ## Dataset Creation - The collected text had to include Kinyarwanda syllables, each formed by a consonant or a group of consonants (e.g. Nyw) combined with a vowel. - Texts were reviewed by a linguist to ensure they fit Kinyarwanda standards. - The voice was recorded in a studio, albeit in a semi-professional setting (i.e. some of the audio contains reverb).
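The card does not fix the CSV's column names, so the ones below (`audio`, `text`) are assumptions; a minimal sketch of pairing clips with their transcripts:

```python
import csv
import io

# Assumed layout of the metadata CSV ("audio name and corresponding
# written text"); the actual column names are not given on the card.
sample = io.StringIO(
    "audio,text\n"
    "clip_0001.wav,Muraho neza\n"
    "clip_0002.wav,Murakoze cyane\n"
)

transcripts = {row["audio"]: row["text"] for row in csv.DictReader(sample)}
print(transcripts["clip_0001.wav"])  # Muraho neza
```

For a real run, replace the in-memory `io.StringIO` sample with `open("metadata.csv", encoding="utf-8")` and adjust the column names to whatever the shipped CSV actually uses.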