| column | dtype | min | max |
|---|---|---|---|
| id | string (length) | 2 | 115 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| author | string (length) | 2 | 42 |
| description | string (length) | 0 | 68.7k |
| citation | string (length) | 0 | 10.7k |
| cardData | null | | |
| likes | int64 | 0 | 3.55k |
| downloads | int64 | 0 | 10.1M |
| card | string (length) | 0 | 1.01M |
med_hop
2022-11-03T16:16:32.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "multi-hop", "arxiv:1710.06481"...
null
MedHop is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins.
@misc{welbl2018constructing, title={Constructing Datasets for Multi-hop Reading Comprehension Across Documents}, author={Johannes Welbl and Pontus Stenetorp and Sebastian Riedel}, year={2018}, eprint={1710.06481}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
2
3
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: medhop pretty_name: MedHop tags: - multi-hop dataset_info: - config_name: original features: - name: id dtype: string - name: query dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string splits: - name: train num_bytes: 93937322 num_examples: 1620 - name: validation num_bytes: 16461640 num_examples: 342 download_size: 339843061 dataset_size: 110398962 - config_name: masked features: - name: id dtype: string - name: question dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string splits: - name: train num_bytes: 95813584 num_examples: 1620 - name: validation num_bytes: 16800570 num_examples: 342 download_size: 339843061 dataset_size: 112614154 --- # Dataset Card for MedHop ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [QAngaroo](http://qangaroo.cs.ucl.ac.uk/) - **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]() - **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481) - **Leaderboard:** [leaderboard](http://qangaroo.cs.ucl.ac.uk/leaderboard.html) - **Point of Contact:** [Johannes Welbl](j.welbl@cs.ucl.ac.uk) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
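As a minimal loading sketch (my addition, not from the original card): this assumes the Hugging Face `datasets` library and the Hub id `med_hop` shown above, with the two config names declared in the YAML (`original` and `masked`; note the `masked` config names its question field `question` rather than `query`).

```python
from datasets import load_dataset

# Load the "original" MedHop config; swap in "masked" for the masked version.
medhop = load_dataset("med_hop", "original")

example = medhop["train"][0]
print(example["query"])           # the drug-interaction query
print(example["candidates"][:5])  # candidate answers to choose from
print(len(example["supports"]))   # number of supporting PubMed abstracts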
multi_para_crawl
2022-11-03T16:31:38.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:bg", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:es", "languag...
null
Parallel corpora from Web Crawls collected in the ParaCrawl project and further processed to make a multi-parallel corpus by pivoting via English. Here we only provide the additional language pairs that came out of pivoting. The bitexts for English are available from the ParaCrawl release. 40 languages, 669 bitexts; total number of files: 40; total number of tokens: 10.14G; total number of sentence fragments: 505.48M. Please acknowledge the ParaCrawl project at http://paracrawl.eu. This version is derived from the original release at their website, adjusted for redistribution via the OPUS corpus collection. Please acknowledge OPUS as well for this service.
@InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} }
null
0
3
--- annotations_creators: - found language_creators: - found language: - bg - ca - cs - da - de - el - es - et - eu - fi - fr - ga - gl - ha - hr - hu - ig - is - it - km - lt - lv - mt - my - nb - ne - nl - nn - pl - ps - pt - ro - ru - si - sk - sl - so - sv - sw - tl license: - cc0-1.0 multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: MultiParaCrawl dataset_info: - config_name: cs-is features: - name: id dtype: string - name: translation dtype: translation: languages: - cs - is splits: - name: train num_bytes: 148967967 num_examples: 691006 download_size: 61609317 dataset_size: 148967967 - config_name: ga-sk features: - name: id dtype: string - name: translation dtype: translation: languages: - ga - sk splits: - name: train num_bytes: 92802332 num_examples: 390327 download_size: 39574554 dataset_size: 92802332 - config_name: lv-mt features: - name: id dtype: string - name: translation dtype: translation: languages: - lv - mt splits: - name: train num_bytes: 116533998 num_examples: 464160 download_size: 49770574 dataset_size: 116533998 - config_name: nb-ru features: - name: id dtype: string - name: translation dtype: translation: languages: - nb - ru splits: - name: train num_bytes: 116899303 num_examples: 399050 download_size: 40932849 dataset_size: 116899303 - config_name: de-tl features: - name: id dtype: string - name: translation dtype: translation: languages: - de - tl splits: - name: train num_bytes: 30880849 num_examples: 98156 download_size: 12116471 dataset_size: 30880849 --- # Dataset Card for MultiParaCrawl ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/MultiParaCrawl.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs. You can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/MultiParaCrawl.php E.g. 
`dataset = load_dataset("multi_para_crawl", lang1="en", lang2="nl")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
mutual_friends
2022-11-18T21:31:53.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1...
null
Our goal is to build systems that collaborate with people by exchanging information through natural language and reasoning over a structured knowledge base. In the MutualFriends task, two agents, A and B, each have a private knowledge base, which contains a list of friends with multiple attributes (e.g., name, school, major, etc.). The agents must chat with each other to find their unique mutual friend.
@inproceedings{he-etal-2017-learning, title = "Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings", author = "He, He and Balakrishnan, Anusha and Eric, Mihail and Liang, Percy", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1162", doi = "10.18653/v1/P17-1162", pages = "1766--1776", abstract = "We study a \textit{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.", }
null
2
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: mutualfriends pretty_name: MutualFriends dataset_info: features: - name: uuid dtype: string - name: scenario_uuid dtype: string - name: scenario_alphas sequence: float32 - name: scenario_attributes sequence: - name: unique dtype: bool_ - name: value_type dtype: string - name: name dtype: string - name: scenario_kbs sequence: sequence: sequence: sequence: string - name: agents struct: - name: '1' dtype: string - name: '0' dtype: string - name: outcome_reward dtype: int32 - name: events struct: - name: actions sequence: string - name: start_times sequence: float32 - name: data_messages sequence: string - name: data_selects sequence: - name: attributes sequence: string - name: values sequence: string - name: agents sequence: int32 - name: times sequence: float32 config_name: plain_text splits: - name: train num_bytes: 26979472 num_examples: 8967 - name: test num_bytes: 3327158 num_examples: 1107 - name: validation num_bytes: 3267881 num_examples: 1083 download_size: 41274578 dataset_size: 33574511 --- # Dataset Card for MutualFriends ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [COCOA](https://stanfordnlp.github.io/cocoa/) - **Repository:** [Github repository](https://github.com/stanfordnlp/cocoa) - **Paper:** [Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings (ACL 2017)](https://arxiv.org/abs/1704.07130) - **Codalab**: [Codalab](https://worksheets.codalab.org/worksheets/0xc757f29f5c794e5eb7bfa8ca9c945573/) ### Dataset Summary Our goal is to build systems that collaborate with people by exchanging information through natural language and reasoning over structured knowledge base. In the MutualFriend task, two agents, A and B, each have a private knowledge base, which contains a list of friends with multiple attributes (e.g., name, school, major, etc.). The agents must chat with each other to find their unique mutual friend. ### Supported Tasks and Leaderboards We consider two agents, each with a private knowledge base of items, who must communicate their knowledge to achieve a common goal. Specifically, we designed the MutualFriends task (see the figure below). Each agent has a list of friends with attributes like school, major etc. 
They must chat with each other to find the unique mutual friend. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances An example looks like this. ``` { 'uuid': 'C_423324a5fff045d78bef75a6f295a3f4' 'scenario_uuid': 'S_hvmRM4YNJd55ecT5', 'scenario_alphas': [0.30000001192092896, 1.0, 1.0], 'scenario_attributes': { 'name': ['School', 'Company', 'Location Preference'], 'unique': [False, False, False], 'value_type': ['school', 'company', 'loc_pref'] }, 'scenario_kbs': [ [ [['School', 'Company', 'Location Preference'], ['Longwood College', 'Alton Steel', 'indoor']], [['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Leonard Green & Partners', 'indoor']], [['School', 'Company', 'Location Preference'], ['New Mexico Highlands University', 'Crazy Eddie', 'indoor']], [['School', 'Company', 'Location Preference'], ['Rhodes College', "Tully's Coffee", 'indoor']], [['School', 'Company', 'Location Preference'], ['Sacred Heart University', 'AMR Corporation', 'indoor']], [['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['New Mexico Highlands University', 'The Hartford Financial Services Group', 'indoor']], [['School', 'Company', 'Location Preference'], ['Sacred Heart University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['Babson College', 'The Hartford Financial Services Group', 'indoor']] ], [ [['School', 'Company', 'Location Preference'], ['National Technological University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['Fairmont State College', 'Leonard Green & Partners', 'outdoor']], [['School', 'Company', 'Location Preference'], ['Johnson C. Smith University', 'Data Resources Inc.', 'outdoor']], [['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['Fairmont State College', 'Molycorp', 'outdoor']], [['School', 'Company', 'Location Preference'], ['University of South Carolina - Aiken', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['University of South Carolina - Aiken', 'STX', 'outdoor']], [['School', 'Company', 'Location Preference'], ['National Technological University', 'STX', 'outdoor']], [['School', 'Company', 'Location Preference'], ['Johnson C. Smith University', 'Rockstar Games', 'indoor']] ] ], 'agents': { '0': 'human', '1': 'human' }, 'outcome_reward': 1, 'events': { 'actions': ['message', 'message', 'message', 'message', 'select', 'select'], 'agents': [1, 1, 0, 0, 1, 0], 'data_messages': ['Hello', 'Do you know anyone who works at Molycorp?', 'Hi. All of my friends like the indoors.', 'Ihave two friends that work at Molycorp. They went to Salisbury and Sacred Heart.', '', ''], 'data_selects': { 'attributes': [ [], [], [], [], ['School', 'Company', 'Location Preference'], ['School', 'Company', 'Location Preference'] ], 'values': [ [], [], [], [], ['Salisbury State University', 'Molycorp', 'indoor'], ['Salisbury State University', 'Molycorp', 'indoor'] ] }, 'start_times': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0], 'times': [1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0] }, } ``` ### Data Fields - `uuid`: example id. - `scenario_uuid`: scenario id. - `scenario_alphas`: scenario alphas. - `scenario_attributes`: all the attributes considered in the scenario. 
The dictionaries are linearized: to reconstruct the dictionary for the i-th attribute, extract the i-th elements of `unique`, `value_type` and `name` (a short reconstruction sketch follows this card). - `unique`: bool. - `value_type`: code/type of the attribute. - `name`: name of the attribute. - `scenario_kbs`: descriptions of the persons present in the two users' databases. List of two (one for each user in the dialogue). `scenario_kbs[i]` is a list of persons. Each person is represented as two lists (one for attribute names and the other for attribute values). The j-th element of attribute names corresponds to the j-th element of attribute values (linearized dictionary). - `agents`: the two users engaged in the dialogue. - `outcome_reward`: reward of the present dialogue. - `events`: dictionary describing the dialogue. The j-th element of each sub-element of the dictionary describes the turn along the axis of the sub-element. - `actions`: type of turn (either `message` or `select`). - `agents`: which agent (0 or 1) produced the turn. - `data_messages`: the string exchanged if `action==message`. Otherwise, empty string. - `data_selects`: selection of the user if `action==select`. Otherwise, empty selection/dictionary. - `start_times`: always -1 in these data. - `times`: sending time. ### Data Splits There are 8967 dialogues for training, 1083 for validation and 1107 for testing. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{he-etal-2017-learning, title = "Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings", author = "He, He and Balakrishnan, Anusha and Eric, Mihail and Liang, Percy", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1162", doi = "10.18653/v1/P17-1162", pages = "1766--1776", abstract = "We study a \textit{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. 
Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.", } ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
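The reconstruction sketch promised under "Data Fields" above (the helper name `unlinearize_attributes` is mine, not from the card; it only assumes the parallel-list shape shown in the example instance):

```python
# Rebuild per-attribute dicts from the parallel lists of `scenario_attributes`.
def unlinearize_attributes(attrs):
    return [
        {"name": name, "unique": unique, "value_type": value_type}
        for name, unique, value_type in zip(
            attrs["name"], attrs["unique"], attrs["value_type"]
        )
    ]

# The attribute lists from the example instance shown above.
example_attrs = {
    "name": ["School", "Company", "Location Preference"],
    "unique": [False, False, False],
    "value_type": ["school", "company", "loc_pref"],
}
print(unlinearize_attributes(example_attrs))
# [{'name': 'School', 'unique': False, 'value_type': 'school'}, ...]
```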
narrativeqa_manual
2022-11-18T21:32:14.000Z
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1712.07040", "region:us" ]
null
The Narrative QA Manual dataset is a reading comprehension dataset, in which the reader must answer questions about stories by reading entire books or movie scripts. The QA tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because the script in the original repository downloads the stories from the original URLs every time, and the links are sometimes broken or invalid, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. This folder containing the stories can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`.
@article{kovcisky2018narrativeqa, title={The narrativeqa reading comprehension challenge}, author={Ko{\v{c}}isk{\'y}, Tom{\'a}{\v{s}} and Schwarz, Jonathan and Blunsom, Phil and Dyer, Chris and Hermann, Karl Moritz and Melis, G{\'a}bor and Grefenstette, Edward}, journal={Transactions of the Association for Computational Linguistics}, volume={6}, pages={317--328}, year={2018}, publisher={MIT Press} }
null
0
3
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: - abstractive-qa paperswithcode_id: narrativeqa pretty_name: NarrativeQA dataset_info: features: - name: document struct: - name: id dtype: string - name: kind dtype: string - name: url dtype: string - name: file_size dtype: int32 - name: word_count dtype: int32 - name: start dtype: string - name: end dtype: string - name: summary struct: - name: text dtype: string - name: tokens sequence: string - name: url dtype: string - name: title dtype: string - name: text dtype: string - name: question struct: - name: text dtype: string - name: tokens sequence: string - name: answers list: - name: text dtype: string - name: tokens sequence: string splits: - name: train num_bytes: 9115940054 num_examples: 32747 - name: test num_bytes: 2911702563 num_examples: 10557 - name: validation num_bytes: 968994186 num_examples: 3461 download_size: 22638273 dataset_size: 12996636803 --- # Dataset Card for Narrative QA Manual ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa) - **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa) - **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf) - **Leaderboard:** - **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com) [Jonathan Schwarz](mailto:schwarzjn@google.com) [Phil Blunsom](pblunsom@google.com) [Chris Dyer](cdyer@google.com) [Karl Moritz Hermann](mailto:kmh@google.com) [Gábor Melis](mailto:melisgl@google.com) [Edward Grefenstette](mailto:etg@google.com) ### Dataset Summary NarrativeQA Manual is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because of a script in the original repository which downloads the stories from original URLs everytime, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. 
This folder containing the stories can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`. ### Supported Tasks and Leaderboards The dataset is used to test reading comprehension. There are 2 tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question. ### Languages English ## Dataset Structure ### Data Instances A typical data point consists of a question and answer pair along with a summary/story which can be used to answer the question. Additional information such as the URL, word count, and Wikipedia page is also provided. A typical example looks like this: ``` { "document": { "id": "23jncj2n3534563110", "kind": "movie", "url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html", "file_size": 80473, "word_count": 41000, "start": "MOVIE screenplay by", "end": ". THE END", "summary": { "text": "Joe Bloggs begins his journey exploring...", "tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring",...], "url": "http://en.wikipedia.org/wiki/Name_of_Movie", "title": "Name of Movie (film)" }, "text": "MOVIE screenplay by John Doe\nSCENE 1..." }, "question": { "text": "Where does Joe Bloggs live?", "tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"], }, "answers": [ {"text": "At home", "tokens": ["At", "home"]}, {"text": "His house", "tokens": ["His", "house"]} ] } ``` ### Data Fields - `document.id` - Unique ID for the story. - `document.kind` - "movie" or "gutenberg" depending on the source of the story. - `document.url` - The URL where the story was downloaded from. - `document.file_size` - File size (in bytes) of the story. - `document.word_count` - Number of tokens in the story. - `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified. - `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified. - `document.summary.text` - Text of the Wikipedia summary of the story. - `document.summary.tokens` - Tokenized version of `document.summary.text`. - `document.summary.url` - Wikipedia URL of the summary. - `document.summary.title` - Wikipedia title of the summary. - `question` - `{"text":"...", "tokens":[...]}` for the question about the story. - `answers` - List of `{"text":"...", "tokens":[...]}` for valid answers for the question. ### Data Splits The data is split into training, validation, and test sets based on story (i.e. the same story cannot appear in more than one split): | Train | Valid | Test | | ------ | ----- | ----- | | 32747 | 3461 | 10557 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)). #### Who are the source language producers? The language producers are authors of the stories and scripts as well as Amazon Mechanical Turk workers for the questions. ### Annotations #### Annotation process Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to lead annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matching was verified with help from human annotators. 
The annotators were asked to determine if both the story and the summary refer to a movie or a book (as some books are made into movies), or if they are the same part in a series produced in the same year. Annotators on Amazon Mechanical Turk were instructed to write 10 question–answer pairs each based solely on a given summary. Annotators were instructed to imagine that they are writing questions to test students who have read the full stories but not the summaries. We required questions to be specific enough, given the length and complexity of the narratives, and to provide a diverse set of questions about characters, events, why this happened, and so on. Annotators were encouraged to use their own words and we prevented them from copying. We asked for answers that are grammatical, complete sentences, and explicitly allowed short answers (one word, or a few-word phrase, or a short sentence) as we think that answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors. #### Who are the annotators? Amazon Mechanical Turk workers. ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is released under an [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE). ### Citation Information ``` @article{narrativeqa, author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and Chris Dyer and Karl Moritz Hermann and G\'abor Melis and Edward Grefenstette}, title = {The {NarrativeQA} Reading Comprehension Challenge}, journal = {Transactions of the Association for Computational Linguistics}, url = {https://TBD}, volume = {TBD}, year = {2018}, pages = {TBD}, } ``` ### Contributions Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
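A sketch of the manual-download flow the summary above describes (the `"tmp"` path is the folder that the authors' `download_stories.sh` creates; adjust it if your copy lives elsewhere):

```python
from datasets import load_dataset

# Point data_dir at the folder produced by the authors' download_stories.sh
# (the script creates "tmp" in the repository root).
narrativeqa = load_dataset("narrativeqa_manual", data_dir="tmp")

sample = narrativeqa["train"][0]
print(sample["question"]["text"])
print([answer["text"] for answer in sample["answers"]])
```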
nkjp-ner
2023-01-25T14:41:28.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:gpl-3.0", "region:us" ]
null
The NKJP-NER is based on a human-annotated part of the National Corpus of Polish (NKJP). We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.
@book{przepiorkowski2012narodowy, title={Narodowy korpus jezyka polskiego}, author={Przepi{\'o}rkowski, Adam}, year={2012}, publisher={Naukowe PWN} }
null
1
3
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - gpl-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: NKJP NER dataset_info: features: - name: sentence dtype: string - name: target dtype: class_label: names: '0': geogName '1': noEntity '2': orgName '3': persName '4': placeName '5': time splits: - name: train num_bytes: 1612125 num_examples: 15794 - name: test num_bytes: 221092 num_examples: 2058 - name: validation num_bytes: 196652 num_examples: 1941 download_size: 821629 dataset_size: 2029869 --- # Dataset Card for NKJP NER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nkjp.pl/index.php?page=0&lang=1 - **Repository:** - **Paper:** @book{przepiorkowski2012narodowy, title={Narodowy korpus j{\k{e}}zyka polskiego}, author={Przepi{\'o}rkowski, Adam}, year={2012}, publisher={Naukowe PWN} } - **Leaderboard:** - **Point of Contact:** adamp@ipipan.waw.pl ### Dataset Summary A linguistic corpus is a collection of texts where one can find the typical use of a single word or a phrase, as well as their meaning and grammatical function. Nowadays, without access to a language corpus, it has become impossible to do linguistic research, to write dictionaries, grammars and language teaching books, to create search engines sensitive to Polish inflection, machine translation engines and software of advanced language technology. Language corpora have become an essential tool for linguists, but they are also helpful for software engineers, scholars of literature and culture, historians, librarians and other specialists of art and computer sciences. The manually annotated 1-million-word subcorpus of the NKJP is available under the GNU GPL v.3. ### Supported Tasks and Leaderboards Named entity recognition [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances Two TSV files (train, dev) with two columns (sentence, target) and one (test) with just one column (sentence). ### Data Fields - sentence - target ### Data Splits Data is split into train/dev/test sets. ## Dataset Creation ### Curation Rationale This dataset is one of nine evaluation tasks for improving Polish language processing. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information GNU GPL v.3 ### Citation Information @book{przepiorkowski2012narodowy, title={Narodowy korpus j{\k{e}}zyka polskiego}, author={Przepi{\'o}rkowski, Adam}, year={2012}, publisher={Naukowe PWN} } ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
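A decoding sketch (my addition): the YAML above declares `target` as a ClassLabel, so the integer can be mapped back to its entity-type name with the standard `int2str` method of `datasets.ClassLabel`.

```python
from datasets import load_dataset

nkjp = load_dataset("nkjp-ner")

# `target` is a ClassLabel feature; int2str maps the integer back to the type name.
label_feature = nkjp["train"].features["target"]
row = nkjp["train"][0]
print(row["sentence"])
print(label_feature.int2str(row["target"]))  # e.g. "persName"
```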
ofis_publik
2022-11-03T16:15:15.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:br", "language:fr", "license:unknown", "region:us" ]
null
Texts from the Ofis Publik ar Brezhoneg (Breton Language Board) provided by Francis Tyers. 2 languages; total number of files: 278; total number of tokens: 2.12M; total number of sentence fragments: 0.13M
@InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } @inproceedings{tyers-2009-rule, title = "Rule-Based Augmentation of Training Data in {B}reton-{F}rench Statistical Machine Translation", author = "Tyers, Francis M.", booktitle = "Proceedings of the 13th Annual conference of the European Association for Machine Translation", month = may # " 14{--}15", year = "2009", address = "Barcelona, Spain", publisher = "European Association for Machine Translation", url = "https://www.aclweb.org/anthology/2009.eamt-1.29", }
null
0
3
--- annotations_creators: - found language_creators: - found language: - br - fr license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OfisPublik dataset_info: features: - name: id dtype: string - name: translation dtype: translation: languages: - br - fr config_name: br-fr splits: - name: train num_bytes: 12256825 num_examples: 63422 download_size: 3856983 dataset_size: 12256825 --- # Dataset Card for OfisPublik ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/OfisPublik.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
opus_fiskmo
2022-11-03T16:08:01.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown", "region:us" ]
null
fiskmo, a massive parallel corpus for Finnish and Swedish.
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
null
0
3
--- annotations_creators: - found language_creators: - found language: - fi - sv license: - unknown multilinguality: - translation size_categories: - 1M<n<10M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusFiskmo dataset_info: features: - name: translation dtype: translation: languages: - fi - sv config_name: fi-sv splits: - name: train num_bytes: 326528834 num_examples: 2100001 download_size: 144858927 dataset_size: 326528834 --- # Dataset Card for [opus_fiskmo] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[fiskmo](http://opus.nlpl.eu/fiskmo.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary fiskmo, a massive parallel corpus for Finnish and Swedish. ### Supported Tasks and Leaderboards The underlying task is machine translation for language pair Finnish and Swedish. ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
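A minimal loading sketch (my addition, assuming the `datasets` library and the single `fi-sv` config listed in the YAML above):

```python
from datasets import load_dataset

# The card lists a single fi-sv config.
fiskmo = load_dataset("opus_fiskmo", "fi-sv")

pair = fiskmo["train"][0]["translation"]
print(pair["fi"])
print(pair["sv"])
```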
opus_memat
2022-11-03T16:08:11.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:xh", "license:unknown", "region:us" ]
null
Xhosa-English parallel corpora. Funded by EPSRC, the Medical Machine Translation project worked on machine translation between isiXhosa and English, with a focus on the medical domain.
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
null
1
3
--- annotations_creators: - found language_creators: - found language: - en - xh license: - unknown multilinguality: - translation size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusMemat dataset_info: features: - name: translation dtype: translation: languages: - xh - en config_name: xh-en splits: - name: train num_bytes: 25400570 num_examples: 154764 download_size: 8382865 dataset_size: 25400570 --- # Dataset Card for [opus_memat] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[memat](http://opus.nlpl.eu/memat.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Xhosa-English parallel corpora, funded by EPSRC, the Medical Machine Translation project worked on machine translation between ixiXhosa and English, with a focus on the medical domain. ### Supported Tasks and Leaderboards The underlying task is machine translation from Xhosa to English ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
opus_montenegrinsubs
2022-11-03T16:08:11.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:cnr", "language:en", "license:unknown", "region:us" ]
null
Opus MontenegrinSubs dataset for the machine translation task, for the language pair en-me: English and Montenegrin
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
null
0
3
--- annotations_creators: - found language_creators: - found language: - cnr - en license: - unknown multilinguality: - translation size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusMontenegrinsubs dataset_info: features: - name: translation dtype: translation: languages: - en - me config_name: en-me splits: - name: train num_bytes: 4896403 num_examples: 65043 download_size: 1990570 dataset_size: 4896403 --- # Dataset Card for [opus_montenegrinsubs] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[opus MontenegrinSubs ](http://opus.nlpl.eu/MontenegrinSubs.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Opus MontenegrinSubs dataset for machine translation task, for language pair en-me: english and montenegrin ### Supported Tasks and Leaderboards The underlying task is machine translation from en to me ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
opus_tedtalks
2022-11-03T16:15:24.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:hr", "license:unknown", "region:us" ]
null
This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus is compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic, provided under the CC-BY-NC-SA license. 2 languages; total number of files: 2; total number of tokens: 2.81M; total number of sentence fragments: 0.17M
@InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} }
null
0
3
--- annotations_creators: - found language_creators: - found language: - en - hr license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusTedtalks dataset_info: features: - name: id dtype: string - name: translation dtype: translation: languages: - en - hr config_name: en-hr splits: - name: train num_bytes: 15249417 num_examples: 86348 download_size: 5639306 dataset_size: 15249417 --- # Dataset Card for OpusTedtalks ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/TedTalks.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus is compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic provided under the CC-BY-NC-SA license. This corpus is sentence aligned for both language pairs. The documents were collected and aligned using the Hunalign algorithm. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. 
## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [CC-BY-NC-SA license](http://creativecommons.org/licenses/by-nc-sa/3.0/) ### Citation Information @InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
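For quick inspection, a minimal loading sketch in Python. The Hub id is not stated in this card, so `opus_tedtalks` is assumed from the pretty name; the `en-hr` config comes from the metadata above, and recent `datasets` releases may additionally require `trust_remote_code=True` for script-based loaders like this one.

```python
from datasets import load_dataset

# "opus_tedtalks" is an assumed id; "en-hr" is the config named in the metadata.
ds = load_dataset("opus_tedtalks", "en-hr", split="train")

pair = ds[0]["translation"]
print(pair["en"])  # English side of the aligned sentence pair
print(pair["hr"])  # Croatian side
```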
psc
2023-01-25T14:42:57.000Z
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-3.0", "region:us" ]
null
The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives.
@inproceedings{ogro:kop:14:lrec, title={The {P}olish {S}ummaries {C}orpus}, author={Ogrodniczuk, Maciej and Kope{\'c}, Mateusz}, booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014", year = "2014", }
null
1
3
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - summarization task_ids: - news-articles-summarization pretty_name: psc dataset_info: features: - name: extract_text dtype: string - name: summary_text dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 5026582 num_examples: 4302 - name: test num_bytes: 1292103 num_examples: 1078 download_size: 2357808 dataset_size: 6318685 --- # Dataset Card for the Polish Summaries Corpus (PSC) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://zil.ipipan.waw.pl/PolishSummariesCorpus - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - extract_text: the text to summarise - summary_text: a summary of the extracted text - label: 1 indicates that the summary is similar to the extract, 0 that it is not ### Data Splits The data is split into train and test sets. The test set does not have a label column, so -1 is used instead. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-SA 3.0 ### Citation Information @inproceedings{ogro:kop:14:lrec, title={The {P}olish {S}ummaries {C}orpus}, author={Ogrodniczuk, Maciej and Kope{\'c}, Mateusz}, booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014", year = "2014", } ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
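A short sketch of how the `label` field can be used, assuming the Hub id `psc` shown above:

```python
from datasets import load_dataset

ds = load_dataset("psc")

# Keep only positive extract/summary pairs; note the test split carries -1 labels.
positives = ds["train"].filter(lambda ex: ex["label"] == 1)
print(f"{len(positives)} of {len(ds['train'])} training pairs are positive")
print(positives[0]["extract_text"][:100])
print(positives[0]["summary_text"][:100])
```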
ro_sts_parallel
2022-11-18T21:42:26.000Z
[ "task_categories:translation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:extended|other-sts-b", "language:en", "language:ro", "license:cc-by-4.0", "region:us" ]
null
RO-STS-Parallel, a parallel Romanian-English dataset created by translating the Semantic Textual Similarity (STS) benchmark, contains 17256 sentences in Romanian and English. It is a high-quality translation of the English STS benchmark dataset into Romanian.
@inproceedings{dumitrescu2021liro, title={Liro: Benchmark and leaderboard for romanian language tasks}, author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)}, year={2021} }
null
0
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en - ro license: - cc-by-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - extended|other-sts-b task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: RO-STS-Parallel dataset_info: - config_name: ro_sts_parallel features: - name: translation dtype: translation: languages: - ro - en splits: - name: train num_bytes: 1563909 num_examples: 11499 - name: validation num_bytes: 443787 num_examples: 3001 - name: test num_bytes: 347590 num_examples: 2759 download_size: 2251694 dataset_size: 2355286 - config_name: rosts-parallel-en-ro features: - name: translation dtype: translation: languages: - en - ro splits: - name: train num_bytes: 1563909 num_examples: 11499 - name: validation num_bytes: 443787 num_examples: 3001 - name: test num_bytes: 347590 num_examples: 2759 download_size: 2251694 dataset_size: 2355286 --- # Dataset Card for RO-STS-Parallel ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/dumitrescustefan/RO-STS) - **Repository:** [GitHub](https://github.com/dumitrescustefan/RO-STS) - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [email](dumitrescu.stefan@gmail.com) ### Dataset Summary We present RO-STS-Parallel - a parallel Romanian-English dataset obtained by translating the [STS English dataset](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) into Romanian. It contains 17256 sentences in Romanian and English. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The text dataset is in Romanian and English (`ro`, `en`). ## Dataset Structure ### Data Instances An example looks like this: ``` { 'translation': { 'ro': 'Problema e si mai simpla.', 'en': 'The problem is simpler than that.' } } ``` ### Data Fields - translation: - ro: text in Romanian - en: text in English ### Data Splits The train/validation/test splits contain 11,499/3,001/2,759 sentence pairs. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization To construct the dataset, we first obtained automatic translations using Google's translation engine. These were then manually checked, corrected, and cross-validated by human volunteers. #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process #### Who are the annotators?
### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information CC BY 4.0 License ### Citation Information ``` @inproceedings{dumitrescu2021liro, title={Liro: Benchmark and leaderboard for romanian language tasks}, author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)}, year={2021} } ``` ### Contributions Thanks to [@lorinczb](https://github.com/lorinczb) for adding this dataset.
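A minimal access sketch, assuming the Hub id `ro_sts_parallel` and the config names listed in the metadata above:

```python
from datasets import load_dataset

# Two equivalent configs are listed in the metadata; this uses "ro_sts_parallel".
ds = load_dataset("ro_sts_parallel", "ro_sts_parallel")

for split in ("train", "validation", "test"):
    print(split, len(ds[split]))

pair = ds["train"][0]["translation"]
print(pair["ro"], "<->", pair["en"])  # aligned Romanian/English sentence pair
```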
sede
2022-11-18T21:44:41.000Z
[ "task_categories:token-classification", "task_ids:parsing", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2106.05006", "arxiv:2005.02539", "re...
null
SEDE (Stack Exchange Data Explorer) is a new dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their natural language descriptions. It is based on real usage by users of the Stack Exchange Data Explorer platform, which brings complexities and challenges never seen before in any other semantic parsing dataset, including complex nesting, date manipulation, numeric and text manipulation, parameters, and, most importantly, under-specification and hidden assumptions. Paper (NLP4Prog workshop at ACL 2021): https://arxiv.org/abs/2106.05006
@misc{hazoom2021texttosql, title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data}, author={Moshe Hazoom and Vibhor Malik and Ben Bogin}, year={2021}, eprint={2106.05006}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
2
3
--- pretty_name: SEDE (Stack Exchange Data Explorer) annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual paperswithcode_id: sede size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - parsing dataset_info: features: - name: QuerySetId dtype: uint32 - name: Title dtype: string - name: Description dtype: string - name: QueryBody dtype: string - name: CreationDate dtype: string - name: validated dtype: bool config_name: sede splits: - name: train num_bytes: 4410584 num_examples: 10309 - name: validation num_bytes: 380942 num_examples: 857 - name: test num_bytes: 386599 num_examples: 857 download_size: 6318959 dataset_size: 5178125 --- # Dataset Card for SEDE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/hirupert/sede - **Paper:** https://arxiv.org/abs/2106.05006 - **Leaderboard:** https://paperswithcode.com/sota/text-to-sql-on-sede - **Point of Contact:** [email](moshe@hirupert.com) ### Dataset Summary SEDE (Stack Exchange Data Explorer) is a dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their natural language descriptions. It is based on real usage by users of the Stack Exchange Data Explorer platform, which brings complexities and challenges never seen before in any other semantic parsing dataset, including complex nesting, date manipulation, numeric and text manipulation, parameters, and, most importantly, under-specification and hidden assumptions. ### Supported Tasks and Leaderboards - `parsing`: The dataset can be used to train a model for the Text-to-SQL task. A Seq2Seq model (e.g. T5) can be used to solve the task. A model with more inductive bias (e.g. a model with a grammar-based decoder) or an interactive setting for Text-to-SQL (https://arxiv.org/abs/2005.02539) can improve the results further. The model performance is measured by how high its [PCM-F1](https://arxiv.org/abs/2106.05006) score is. A [t5-large](https://huggingface.co/t5-large) achieves a [PCM-F1 of 50.6](https://arxiv.org/abs/2106.05006). ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances A typical data point comprises a question title, (optionally) a description, and its underlying SQL query.
In addition, each sample has a unique ID (from the Stack Exchange Data Explorer), its creation date, and a boolean flag named `validated` indicating whether the sample was validated to be of gold quality by humans; see the paper for full details regarding the `validated` flag. An example instance: ``` { 'QuerySetId':1233, 'Title':'Top 500 Askers on the site', 'Description':'A list of the top 500 askers of questions ordered by average answer score excluding community wiki closed posts.', 'QueryBody':'SELECT * FROM (\nSELECT \n TOP 500\n OwnerUserId as [User Link],\n Count(Posts.Id) AS Questions,\n CAST(AVG(CAST(Score AS float)) as numeric(6,2)) AS [Average Question Score]\nFROM\n Posts\nWHERE \n PostTypeId = 1 and CommunityOwnedDate is null and ClosedDate is null\nGROUP BY\n OwnerUserId\nORDER BY\n Count(Posts.Id) DESC\n)ORDER BY\n [Average Question Score] DESC', 'CreationDate':'2010-05-27 20:08:16', 'validated':true } ``` ### Data Fields - QuerySetId: a unique ID coming from the Stack Exchange Data Explorer. - Title: utterance title. - Description: utterance description (might be empty). - QueryBody: the underlying SQL query. - CreationDate: when this sample was created. - validated: `true` if this sample was validated to be of gold quality by humans. ### Data Splits The data is split into a training, validation and test set. The validation and test sets contain only samples that were validated by humans to be of gold quality.

| Train | Valid | Test |
|-------|-------|------|
| 10309 | 857   | 857  |

## Dataset Creation ### Curation Rationale Most available semantic parsing datasets, comprising pairs of natural utterances and logical forms, were collected solely for the purpose of training and evaluation of natural language understanding systems. As a result, they do not contain any of the richness and variety of naturally-occurring utterances, where humans ask about data they need or are curious about. SEDE contains a variety of real-world challenges which were rarely reflected so far in any other semantic parsing dataset. There is a large gap between the performance on SEDE compared to other common datasets, which leaves room for future research on the generalisation of Text-to-SQL models. ### Source Data #### Initial Data Collection and Normalization To introduce a realistic Text-to-SQL benchmark, we gather SQL queries together with their titles and descriptions from a naturally occurring dataset: the Stack Exchange Data Explorer. Stack Exchange is an online question-and-answer community with over 3 million questions asked. However, in its raw form, many of the rows are duplicated or contain unusable queries or titles. The reason for this large difference between the original data size and the cleaned version is that any time the author of a query executes it, an entry is saved to the log. To alleviate these issues, we write rule-based filters that remove bad query/description pairs with high precision. For example, we filter out examples with numbers in the description if these numbers do not appear in the query (refer to the preprocessing script in the repository for the complete list of filters and the number of examples each of them filters). Whenever a query has multiple versions due to multiple executions, we take the last executed query which passed all filters. After this filtering step, we are left with 12,309 examples. Using these filters cleans most of the noise, but not all of it.
To complete the cleaning process, we manually go over the examples in the validation and test sets, and either filter out wrong examples or perform minimal changes to either the utterances or the queries (for example, fixing a wrong textual value) to ensure that models are evaluated with correct data. The final number of all training, validation and test examples is 12,023. #### Who are the source language producers? The language producers are Stack Exchange Data Explorer (https://data.stackexchange.com/) users. ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information All the data in the dataset is for public use. ## Considerations for Using the Data ### Social Impact of Dataset We hope that the release of this challenging dataset will encourage research on improving generalisation for real-world SQL prediction that will help non-technical business users acquire the data they need from their company's database. ### Discussion of Biases [N/A] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Moshe Hazoom, Vibhor Malik and Ben Bogin, during work done at Rupert. ### Licensing Information Apache-2.0 License ### Citation Information ``` @misc{hazoom2021texttosql, title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data}, author={Moshe Hazoom and Vibhor Malik and Ben Bogin}, year={2021}, eprint={2106.05006}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@Hazoom](https://github.com/Hazoom) for adding this dataset.
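A sketch of filtering on the `validated` flag described above, assuming the Hub id `sede`:

```python
from datasets import load_dataset

ds = load_dataset("sede")

# Training data mixes gold-validated and unvalidated examples;
# the validation/test splits are fully validated.
gold = ds["train"].filter(lambda ex: ex["validated"])
print(f"{len(gold)} of {len(ds['train'])} train examples are gold-validated")

ex = gold[0]
print(ex["Title"])
print(ex["QueryBody"])
```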
senti_ws
2023-01-25T14:44:03.000Z
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-scoring", "task_ids:part-of-speech", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual"...
null
SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis and POS-tagging. The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity bearing words are weighted within the interval [-1, 1].
@INPROCEEDINGS{remquahey2010, title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis}, booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)}, author = {Remus, R. and Quasthoff, U. and Heyer, G.}, year = {2010} }
null
1
3
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - de license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification - text-classification task_ids: - text-scoring - sentiment-scoring - part-of-speech pretty_name: SentiWS dataset_info: - config_name: pos-tagging features: - name: word dtype: string - name: pos-tag dtype: class_label: names: '0': NN '1': VVINF '2': ADJX '3': ADV splits: - name: train num_bytes: 75530 num_examples: 3471 download_size: 97748 dataset_size: 75530 - config_name: sentiment-scoring features: - name: word dtype: string - name: sentiment-score dtype: float32 splits: - name: train num_bytes: 61646 num_examples: 3471 download_size: 97748 dataset_size: 61646 --- # Dataset Card for SentiWS ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://wortschatz.uni-leipzig.de/en/download - **Repository:** [Needs More Information] - **Paper:** http://www.lrec-conf.org/proceedings/lrec2010/pdf/490_Paper.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining, etc. It lists positive and negative polarity bearing words weighted within the interval of [-1; 1] plus their part-of-speech tag and, if applicable, their inflections. The current version of SentiWS contains around 1,650 positive and 1,800 negative words, which sum up to around 16,000 positive and 18,000 negative word forms incl. their inflections, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. ### Supported Tasks and Leaderboards Sentiment-Scoring, Pos-Tagging ### Languages German ## Dataset Structure ### Data Instances For pos-tagging: ``` { "word": "Abbau", "pos-tag": 0 } ``` For sentiment-scoring: ``` { "word": "Abbau", "sentiment-score": -0.058 } ``` ### Data Fields SentiWS is UTF8-encoded text.
For pos-tagging: - word: one word as a string - pos-tag: the part-of-speech tag of the word as an integer For sentiment-scoring: - word: one word as a string - sentiment-score: the sentiment score of the word as a float between -1 and 1 The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity bearing words are weighted within the interval of [-1, 1]. ### Data Splits train: 1,650 negative and 1,818 positive words ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License ### Citation Information @INPROCEEDINGS{remquahey2010, title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis}, booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)}, author = {Remus, R. and Quasthoff, U. and Heyer, G.}, year = {2010} } ### Contributions Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
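A sketch of using the two configs together, assuming the Hub id `senti_ws` and the field names from the metadata above (`pos-tag`, `sentiment-score`):

```python
from datasets import load_dataset

scores = load_dataset("senti_ws", "sentiment-scoring", split="train")
tags = load_dataset("senti_ws", "pos-tagging", split="train")

# Build a polarity lexicon: word -> sentiment score in [-1, 1].
lexicon = {ex["word"]: ex["sentiment-score"] for ex in scores}
print(lexicon.get("Abbau"))  # -0.058 in the instance shown above

# pos-tag is a ClassLabel over ["NN", "VVINF", "ADJX", "ADV"].
print(tags.features["pos-tag"].int2str(tags[0]["pos-tag"]))
```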
sesotho_ner_corpus
2023-01-25T14:44:09.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:st", "license:other", "region:us" ]
null
Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags.
@inproceedings{sesotho_ner_corpus, author = {M. Setaka and Roald Eiselen}, title = {NCHLT Sesotho Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/334}, }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - st license: - other multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Sesotho NER Corpus license_details: Creative Commons Attribution 2.5 South Africa License dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': OUT '1': B-PERS '2': I-PERS '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-MISC '8': I-MISC config_name: sesotho_ner_corpus splits: - name: train num_bytes: 4502576 num_examples: 9472 download_size: 30421109 dataset_size: 4502576 --- # Dataset Card for Sesotho NER Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Sesotho Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/334) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za) ### Dataset Summary The Sesotho Ner Corpus is a Sesotho dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Sesotho language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Sesotho. ## Dataset Structure ### Data Instances A data point consists of sentences separated by empty lines, with tab-separated tokens and tags. ``` {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Morero', 'wa', 'weposaete', 'ya', 'Ditshebeletso'] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Sesotho.
[More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on South African government domain and was crawled from gov.za websites. #### Who are the source language producers? The data was produced by writers of South African government websites - gov.za [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{sesotho_ner_corpus, author = {M. Setaka and Roald Eiselen}, title = {NCHLT Sesotho Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/334}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
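A sketch of decoding the integer `ner_tags` back to the tag names listed above, assuming the Hub id `sesotho_ner_corpus`:

```python
from datasets import load_dataset

ds = load_dataset("sesotho_ner_corpus", split="train")

# ner_tags is a Sequence of ClassLabel; recover the string tag for each token.
tag_names = ds.features["ner_tags"].feature.names
ex = ds[0]
for token, tag_id in zip(ex["tokens"], ex["ner_tags"]):
    print(f"{token}\t{tag_names[tag_id]}")
```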
setswana_ner_corpus
2023-01-25T14:44:12.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:tn", "license:other", "region:us" ]
null
Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags.
@inproceedings{sepedi_ner_corpus, author = {S.S.B.M. Phakedi and Roald Eiselen}, title = {NCHLT Setswana Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/341}, }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - tn license: - other multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Setswana NER Corpus license_details: Creative Commons Attribution 2.5 South Africa License dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': OUT '1': B-PERS '2': I-PERS '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-MISC '8': I-MISC config_name: setswana_ner_corpus splits: - name: train num_bytes: 3874793 num_examples: 7944 download_size: 25905236 dataset_size: 3874793 --- # Dataset Card for Setswana NER Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Setswana Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/319) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za) ### Dataset Summary The Setswana Ner Corpus is a Setswana dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Setswana language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Setswana. ## Dataset Structure ### Data Instances A data point consists of sentences separated by empty lines, with tab-separated tokens and tags. ``` {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Ka', 'dinako', 'dingwe', ',', 'go'] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Setswana.
[More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on South African government domain and was crawled from gov.za websites. [More Information Needed] #### Who are the source language producers? The data was produced by writers of South African government websites - gov.za [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{sepedi_ner_corpus, author = {S.S.B.M. Phakedi and Roald Eiselen}, title = {NCHLT Setswana Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/341}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
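Since the tags follow the CoNLL B-/I- convention described above (with `OUT` in place of the usual `O`), entity spans can be reconstructed with a small helper; a sketch assuming the Hub id `setswana_ner_corpus`:

```python
from datasets import load_dataset

def bio_to_spans(tokens, tags):
    """Collect (entity_type, text) spans; 'OUT' or a non-matching I- tag closes a span."""
    spans, etype, words = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if etype:
                spans.append((etype, " ".join(words)))
            etype, words = tag[2:], [token]
        elif tag.startswith("I-") and etype == tag[2:]:
            words.append(token)
        else:
            if etype:
                spans.append((etype, " ".join(words)))
            etype, words = None, []
    if etype:
        spans.append((etype, " ".join(words)))
    return spans

ds = load_dataset("setswana_ner_corpus", split="train")
names = ds.features["ner_tags"].feature.names
ex = ds[0]
print(bio_to_spans(ex["tokens"], [names[t] for t in ex["ner_tags"]]))
```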
siswati_ner_corpus
2023-01-25T14:44:23.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ss", "license:other", "region:us" ]
null
Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags.
@inproceedings{siswati_ner_corpus, author = {B.B. Malangwane and M.N. Kekana and S.S. Sedibe and B.C. Ndhlovu and Roald Eiselen}, title = {NCHLT Siswati Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/346}, }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - ss license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Siswati NER Corpus license_details: Creative Commons Attribution 2.5 South Africa License dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': OUT '1': B-PERS '2': I-PERS '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-MISC '8': I-MISC config_name: siswati_ner_corpus splits: - name: train num_bytes: 3517151 num_examples: 10798 download_size: 21882224 dataset_size: 3517151 --- # Dataset Card for Siswati NER Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Siswati Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/346) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za) ### Dataset Summary The Siswati Ner Corpus is a Siswati dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Siswati language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Siswati. ## Dataset Structure ### Data Instances A data point consists of sentences separated by empty lines, with tab-separated tokens and tags. ``` {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Tinsita', 'tebantfu', ':', 'tinsita', 'tetakhamiti'] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Siswati.
[More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on South African government domain and was crawled from gov.za websites. #### Who are the source language producers? The data was produced by writers of South African government websites - gov.za [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{siswati_ner_corpus, author = {B.B. Malangwane and M.N. Kekana and S.S. Sedibe and B.C. Ndhlovu and Roald Eiselen}, title = {NCHLT Siswati Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/346}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
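A sketch of counting entity mentions by type, assuming the Hub id `siswati_ner_corpus`; counting only B- tags ensures multi-token entities are counted once:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("siswati_ner_corpus", split="train")
names = ds.features["ner_tags"].feature.names

counts = Counter(
    names[t][2:] for ex in ds for t in ex["ner_tags"] if names[t].startswith("B-")
)
print(counts.most_common())  # e.g. frequencies of PERS, ORG, LOC, MISC
```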
smartdata
2023-01-25T14:44:26.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:de", "license:cc-by-4.0", "region:us" ]
null
DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. It has also been annotated with a set of 15 traffic- and industry-related n-ary relations and events, such as Accidents, Traffic jams, Acquisitions, and Strikes. The corpus consists of newswire texts, Twitter messages, and traffic reports from radio stations, police and railway companies. It allows for training and evaluating both named entity recognition algorithms that aim for fine-grained typing of geo-entities, as well as n-ary relation extraction systems.
@InProceedings{SCHIERSCH18.85, author = {Martin Schiersch and Veselina Mironova and Maximilian Schmitt and Philippe Thomas and Aleksandra Gabryszak and Leonhard Hennig}, title = "{A German Corpus for Fine-Grained Named Entity Recognition and Relation Extraction of Traffic and Industry Events}", booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year = {2018}, month = {May 7-12, 2018}, address = {Miyazaki, Japan}, editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga}, publisher = {European Language Resources Association (ELRA)}, isbn = {979-10-95546-00-9}, language = {english} }
null
1
3
--- annotations_creators: - expert-generated language_creators: - found language: - de license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: SmartData dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-DATE '2': I-DATE '3': B-DISASTER_TYPE '4': I-DISASTER_TYPE '5': B-DISTANCE '6': I-DISTANCE '7': B-DURATION '8': I-DURATION '9': B-LOCATION '10': I-LOCATION '11': B-LOCATION_CITY '12': I-LOCATION_CITY '13': B-LOCATION_ROUTE '14': I-LOCATION_ROUTE '15': B-LOCATION_STOP '16': I-LOCATION_STOP '17': B-LOCATION_STREET '18': I-LOCATION_STREET '19': B-NUMBER '20': I-NUMBER '21': B-ORGANIZATION '22': I-ORGANIZATION '23': B-ORGANIZATION_COMPANY '24': I-ORGANIZATION_COMPANY '25': B-ORG_POSITION '26': I-ORG_POSITION '27': B-PERSON '28': I-PERSON '29': B-TIME '30': I-TIME '31': B-TRIGGER '32': I-TRIGGER config_name: smartdata-v3_20200302 splits: - name: train num_bytes: 2124312 num_examples: 1861 - name: test num_bytes: 266529 num_examples: 230 - name: validation num_bytes: 258681 num_examples: 228 download_size: 18880782 dataset_size: 2649522 --- # Dataset Card for SmartData ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.dfki.de/web/forschung/projekte-publikationen/publikationen-uebersicht/publikation/9427/ - **Repository:** https://github.com/DFKI-NLP/smartdata-corpus - **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. It has also been annotated with a set of 15 traffic- and industry-related n-ary relations and events, such as Accidents, Traffic jams, Acquisitions, and Strikes. The corpus consists of newswire texts, Twitter messages, and traffic reports from radio stations, police and railway companies. It allows for training and evaluating both named entity recognition algorithms that aim for fine-grained typing of geo-entities, as well as n-ary relation extraction systems. 
### Supported Tasks and Leaderboards NER ### Languages German ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - id: an identifier for the article the text came from - tokens: a list of string tokens for the text of the article - ner_tags: a corresponding list of NER tags in the BIO format ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC-BY 4.0 ### Citation Information ``` @InProceedings{SCHIERSCH18.85, author = {Martin Schiersch and Veselina Mironova and Maximilian Schmitt and Philippe Thomas and Aleksandra Gabryszak and Leonhard Hennig}, title = "{A German Corpus for Fine-Grained Named Entity Recognition and Relation Extraction of Traffic and Industry Events}", booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year = {2018}, month = {May 7-12, 2018}, address = {Miyazaki, Japan}, editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga}, publisher = {European Language Resources Association (ELRA)}, isbn = {979-10-95546-00-9}, language = {english} } ``` ### Contributions Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset.
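A sketch of selecting documents by one of the fine-grained geo-entity types listed above, assuming the Hub id `smartdata` and the config name from the metadata:

```python
from datasets import load_dataset

ds = load_dataset("smartdata", "smartdata-v3_20200302", split="train")
names = ds.features["ner_tags"].feature.names

# Keep only documents that mention at least one street entity.
street_b = names.index("B-LOCATION_STREET")
with_streets = ds.filter(lambda ex: street_b in ex["ner_tags"])
print(len(with_streets), "of", len(ds), "training documents mention a street")
```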
srwac
2022-11-03T16:08:14.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:sr",...
null
The Serbian web corpus srWaC was built by crawling the .rs top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Serbian vs. Croatian). Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations.
@misc{11356/1063, title = {Serbian web corpus {srWaC} 1.1}, author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip}, url = {http://hdl.handle.net/11356/1063}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)}, year = {2016} }
null
1
3
--- annotations_creators: - no-annotation language_creators: - found language: - sr license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 100M<n<1B source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: SrWac dataset_info: features: - name: sentence dtype: string config_name: srwac splits: - name: train num_bytes: 17470890484 num_examples: 688805174 download_size: 3767312759 dataset_size: 17470890484 --- # Dataset Card for SrWac ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.ffzg.hr/resources/corpora/srwac/ - **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1063 - **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic14-bs.pdf - **Leaderboard:** - **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr) ### Dataset Summary The Serbian web corpus srWaC was built by crawling the .rs top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Serbian vs. Croatian). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is monolingual, in the Serbian language. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information ``` @misc{11356/1063, title = {Serbian web corpus {srWaC} 1.1}, author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip}, url = {http://hdl.handle.net/11356/1063}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)}, year = {2016} } ``` ### Contributions Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
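Given the corpus size (~689M sentences, ~3.8 GB download), streaming is the practical way to sample it; a sketch assuming the Hub id `srwac` and that this loader supports streaming in your `datasets` version:

```python
from itertools import islice
from datasets import load_dataset

# Iterate without materialising the full ~17 GB corpus on disk.
ds = load_dataset("srwac", split="train", streaming=True)
for ex in islice(ds, 3):
    print(ex["sentence"])
```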
turku_ner_corpus
2023-01-25T14:54:48.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fi", "license:cc-by-nc-sa-4.0", "region:us"...
null
An open, broad-coverage corpus for Finnish named entity recognition, presented in Luoma et al. (2020), “A Broad-coverage Corpus for Finnish Named Entity Recognition”.
@inproceedings{luoma-etal-2020-broad, title = "A Broad-coverage Corpus for {F}innish Named Entity Recognition", author = {Luoma, Jouni and Oinonen, Miika and Pyyk{\"o}nen, Maria and Laippala, Veronika and Pyysalo, Sampo}, booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference", year = "2020", url = "https://www.aclweb.org/anthology/2020.lrec-1.567", pages = "4615--4624", }
null
0
3
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - fi license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Turku NER corpus dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': B-DATE '1': B-EVENT '2': B-LOC '3': B-ORG '4': B-PER '5': B-PRO '6': I-DATE '7': I-EVENT '8': I-LOC '9': I-ORG '10': I-PER '11': I-PRO '12': O splits: - name: train num_bytes: 3257447 num_examples: 12217 - name: validation num_bytes: 364223 num_examples: 1364 - name: test num_bytes: 416644 num_examples: 1555 download_size: 1659911 dataset_size: 4038314 --- # Dataset Card for Turku NER corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://turkunlp.org/fin-ner.html - **Repository:** https://github.com/TurkuNLP/turku-ner-corpus/ - **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.567/ - **Leaderboard:** - **Point of Contact:** {jouni.a.luoma,mhtoin,maria.h.pyykonen,mavela,sampo.pyysalo}@utu.fi ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
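A minimal sketch of reading the annotations with the Hugging Face `datasets` library, assuming the `turku_ner_corpus` identifier above; the integer `ner_tags` are decoded back to the IOB-style names declared in the metadata:

```python
from datasets import load_dataset

ds = load_dataset("turku_ner_corpus", split="train")

# `ner_tags` is a sequence of class labels; recover the string names.
label_names = ds.features["ner_tags"].feature.names  # 'B-DATE', ..., 'O'

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, label_names[tag_id])
```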
twi_text_c3
2022-11-03T16:15:20.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:t...
null
Twi Text C3 is the largest collection of Twi texts, gathered and used to train FastText embeddings in the YorubaTwi Embedding paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
@inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yoruba and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", language = "English", ISBN = "979-10-95546-34-4", }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - tw license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: Twi Text C3 dataset_info: features: - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 71198430 num_examples: 675772 download_size: 69170842 dataset_size: 71198430 --- # Dataset Card for Twi Text C3 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.aclweb.org/anthology/2020.lrec-1.335 - **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding/ - **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335 - **Leaderboard:** - **Point of Contact:** [Kwabena Amponsah-Kaakyire](mailto:s8kwampo@stud.uni-saarland.de) ### Dataset Summary Twi Text C3 was collected from various sources on the web (Bible, JW300, Wikipedia, etc.) to compare pre-trained word embeddings (FastText) with embeddings trained on curated Twi texts. The dataset consists of clean texts (i.e. the Bible) and noisy texts (with incorrect orthography and mixed dialects) from other online sources like Wikipedia and JW300. ### Supported Tasks and Leaderboards For training word embeddings and language models on Twi texts. ### Languages The language supported is Twi. ## Dataset Structure ### Data Instances A data point is one sentence per line. { 'text': 'mfitiaseɛ no onyankopɔn bɔɔ ɔsoro ne asaase' } ### Data Fields - `text`: a `string` feature; one sentence of text per line. ### Data Splits Contains only the training split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Twi. ### Source Data #### Initial Data Collection and Normalization The dataset comes from various sources on the web: the Bible, JW300, and Wikipedia. See Table 1 in the [paper](https://www.aclweb.org/anthology/2020.lrec-1.335/) for a summary of the dataset and its statistics. #### Who are the source language producers? [Jehovah's Witnesses](https://www.jw.org/) (JW300) [Twi Bible](http://www.bible.com/) [Twi Wikipedia](dumps.wikimedia.org/twwiki) ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The dataset is biased to the religion domain (Christianity) because of the inclusion of JW300 and the Bible. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The data sets were curated by Kwabena Amponsah-Kaakyire, Jesujoba Alabi, and David Adelani, students of Saarland University, Saarbrücken, Germany . ### Licensing Information The data is under the [Creative Commons Attribution-NonCommercial 4.0 ](https://creativecommons.org/licenses/by-nc/4.0/legalcode) ### Citation Information ``` @inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", abstract = "The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor{\`u}b{\'a} and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor{\`u}b{\'a} and Twi. We extend the analysis to contextual word embeddings and evaluate multilingual BERT on a named entity recognition task. For this, we annotate with named entities the Global Voices corpus for Yor{\`u}b{\'a}. As output of the work, we provide corpora, embeddings and the test suits for both languages.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
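Since the corpus is intended for training word embeddings, a rough sketch of that workflow might look as follows; it assumes the `twi_text_c3` identifier above and the third-party `fasttext` package, and the output file name is arbitrary:

```python
import fasttext
from datasets import load_dataset

ds = load_dataset("twi_text_c3", split="train")

# Write one sentence per line, the plain-text format fastText trains on.
with open("twi_text_c3.txt", "w", encoding="utf-8") as f:
    for row in ds:
        f.write(row["text"] + "\n")

# Train skip-gram embeddings, loosely in the spirit of the paper's FastText setup.
model = fasttext.train_unsupervised("twi_text_c3.txt", model="skipgram")
print(model.get_nearest_neighbors("onyankopɔn"))  # word from the sample instance above
```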
urdu_sentiment_corpus
2023-01-25T15:02:01.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ur", "license:unknown", "region:us" ]
null
“Urdu Sentiment Corpus” (USC) shares data of Urdu tweets for sentiment analysis and polarity detection. The dataset consists of tweets and comprises over 17,185 tokens overall, with 52% of the records labelled positive and 48% labelled negative.
@inproceedings{khan2020usc, title={Urdu Sentiment Corpus (v1.0): Linguistic Exploration and Visualization of Labeled Datasetfor Urdu Sentiment Analysis.}, author={Khan, Muhammad Yaseen and Nizami, Muhammad Suffian}, booktitle={2020 IEEE 2nd International Conference On Information Science & Communication Technology (ICISCT)}, pages={}, year={2020}, organization={IEEE} }
null
1
3
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - ur license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: urdu-sentiment-corpus pretty_name: Urdu Sentiment Corpus (USC) dataset_info: features: - name: sentence dtype: string - name: sentiment dtype: class_label: names: '0': P '1': N '2': O splits: - name: train num_bytes: 161190 num_examples: 1000 download_size: 51583 dataset_size: 161190 --- # Dataset Card for Urdu Sentiment Corpus (USC) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus) - **Repository:** [Github](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus) - **Paper:** [IEEE](https://ieeexplore.ieee.org/abstract/document/9080043) - **Leaderboard:** - **Point of Contact:** [Muhammad Yaseen Khan](https://github.com/MuhammadYaseenKhan) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `sentence`: the Urdu tweet. - `sentiment`: the sentiment exhibited in the tweet, which can be positive (P), negative (N), or objective (O). ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
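A minimal sketch of inspecting the label distribution with the Hugging Face `datasets` library, assuming the `urdu_sentiment_corpus` identifier above:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("urdu_sentiment_corpus", split="train")

# Map the integer labels back to the P/N/O names declared in the metadata.
names = ds.features["sentiment"].names
print(Counter(names[label] for label in ds["sentiment"]))
```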
wiki_qa_ar
2023-01-25T15:02:18.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
null
Arabic version of WikiQA produced by automatic machine translation; the selection of the best translation to incorporate into the corpus was crowdsourced.
@InProceedings{YangYihMeek:EMNLP2015:WikiQA, author = {{Yi}, Yang and {Wen-tau}, Yih and {Christopher} Meek}, title = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}", journal = {Association for Computational Linguistics}, year = 2015, doi = {10.18653/v1/D15-1237}, pages = {2013–2018}, }
null
2
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: wikiqaar pretty_name: English-Arabic Wikipedia Question-Answering dataset_info: features: - name: question_id dtype: string - name: question dtype: string - name: document_id dtype: string - name: answer_id dtype: string - name: answer dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' config_name: plain_text splits: - name: test num_bytes: 7563127 num_examples: 20632 - name: validation num_bytes: 3740721 num_examples: 10387 - name: train num_bytes: 26009979 num_examples: 70264 download_size: 35226436 dataset_size: 37313827 --- # Dataset Card for WikiQAar ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WikiQaAr](https://github.com/qcri/WikiQAar) - **Repository:** [WikiQaAr](https://github.com/qcri/WikiQAar) - **Paper:** - **Point of Contact:** [Ines Abbes](mailto:abbes.ines@yahoo.com) ### Dataset Summary Arabic version of WikiQA produced by automatic machine translation; the selection of the best translation to incorporate into the corpus was crowdsourced. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Arabic. ## Dataset Structure ### Data Instances Each data point contains a question, a candidate answer, and a label indicating whether the answer is valid or not. ### Data Fields - `question_id`: the question id. - `question`: the question text. - `document_id`: the Wikipedia document id. - `answer_id`: the answer id. - `answer`: a candidate answer to the question. - `label`: 1 if the `answer` is correct, 0 otherwise. ### Data Splits The dataset is split into training, validation and test sets: | | train | validation | test | |------------|-------:|-----------:|-------:| | Data split | 70,264 | 10,387 | 20,632 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization Translation of WikiQA. #### Who are the source language producers? Translation of WikiQA. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{YangYihMeek:EMNLP2015:WikiQA, author = {{Yi}, Yang and {Wen-tau}, Yih and {Christopher} Meek}, title = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}", journal = {Association for Computational Linguistics}, year = 2015, doi = {10.18653/v1/D15-1237}, pages = {2013–2018}, } ``` ### Contributions Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
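A minimal sketch of collecting the candidate answers judged correct for one question, using the Hugging Face `datasets` library and the `wiki_qa_ar` identifier above:

```python
from datasets import load_dataset

ds = load_dataset("wiki_qa_ar", split="validation")

# Keep only the candidates marked correct (label == 1) for the first question id.
qid = ds[0]["question_id"]
correct = ds.filter(lambda row: row["question_id"] == qid and row["label"] == 1)
for row in correct:
    print(row["question"], "->", row["answer"])
```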
wiki_source
2022-11-03T16:07:54.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:sv", "license:unknown", "region:us" ]
null
2 languages; total number of files: 132; total number of tokens: 1.80M; total number of sentence fragments: 78.36k
@InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} }
null
0
3
--- annotations_creators: - found language_creators: - found language: - en - sv license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: WikiSource dataset_info: features: - name: id dtype: string - name: translation dtype: translation: languages: - en - sv config_name: en-sv splits: - name: train num_bytes: 8153542 num_examples: 33283 download_size: 2375052 dataset_size: 8153542 --- # Dataset Card for WikiSource ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/WikiSource.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
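A minimal loading sketch for the `en-sv` configuration with the Hugging Face `datasets` library, assuming the `wiki_source` identifier above:

```python
from datasets import load_dataset

ds = load_dataset("wiki_source", "en-sv", split="train")

# Each record holds an aligned English-Swedish pair under the `translation` field.
pair = ds[0]["translation"]
print(pair["en"])
print(pair["sv"])
```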
wisesight1000
2023-06-14T08:20:50.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|wisesight_sentiment", "language:th", "license:cc0-1.0", "word-tokenization", "region:us" ]
null
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. 250 samples were drawn for each of the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question). Some texts were removed because they looked like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
@software{bact_2019_3457447, author = {Suriyawongkul, Arthit and Chuangsuwanich, Ekapol and Chormai, Pattarawat and Polpanumas, Charin}, title = {PyThaiNLP/wisesight-sentiment: First release}, month = sep, year = 2019, publisher = {Zenodo}, version = {v1.0}, doi = {10.5281/zenodo.3457447}, url = {https://doi.org/10.5281/zenodo.3457447} }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - th license: - cc0-1.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - extended|wisesight_sentiment task_categories: - token-classification task_ids: [] pretty_name: wisesight1000 tags: - word-tokenization dataset_info: features: - name: char sequence: string - name: char_type sequence: class_label: names: '0': b_e '1': c '2': d '3': n '4': o '5': p '6': q '7': s '8': s_e '9': t '10': v '11': w - name: is_beginning sequence: class_label: names: '0': neg '1': pos config_name: wisesight1000 splits: - name: train num_bytes: 1735438 num_examples: 993 download_size: 222691 dataset_size: 1735438 --- # Dataset Card for `wisesight1000` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment - **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/word-tokenization/ - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary `wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. 250 samples were drawn for each of the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question). Some texts were removed because they looked like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
### Supported Tasks and Leaderboards word tokenization ### Languages Thai ## Dataset Structure ### Data Instances ``` {'char': ['E', 'u', 'c', 'e', 'r', 'i', 'n', ' ', 'p', 'r', 'o', ' ', 'a', 'c', 'n', 'e', ' ', 'ค', '่', 'ะ', ' ', 'ใ', 'ช', '้', 'แ', 'ล', '้', 'ว', 'ส', 'ิ', 'ว', 'ข', 'ึ', '้', 'น', 'เ', 'พ', 'ิ', '่', 'ม', 'ท', 'ุ', 'ก', 'ว', 'ั', 'น', ' ', 'ม', 'า', 'ด', 'ู', 'ก', 'ั', 'น', 'น', 'ะ', 'ค', 'ะ', ' ', 'ว', '่', 'า', 'จ', 'ั', 'ด', 'ก', 'า', 'ร', 'ป', 'ั', 'ญ', 'ห', 'า', 'ส', 'ิ', 'ว', 'ใ', 'น', '7', 'ว', 'ั', 'น', 'ไ', 'ด', '้', 'ร', 'ึ', 'ม', 'ั', '่', 'ย', 'ย', 'ย', 'ย', 'ย', 'ย', 'ย', 'ย', ' ', 'ล', '่', 'า', 'ส', 'ุ', 'ด', 'ไ', 'ป', 'ล', '้', 'า', 'ง', 'ห', 'น', '้', '…', '\n'], 'char_type': [0, 8, 8, 8, 8, 8, 8, 5, 8, 8, 8, 5, 8, 8, 8, 8, 5, 1, 9, 10, 5, 11, 1, 9, 11, 1, 9, 1, 1, 10, 1, 1, 10, 9, 1, 11, 1, 10, 9, 1, 1, 10, 1, 1, 4, 1, 5, 1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 5, 1, 9, 10, 1, 4, 1, 1, 10, 1, 1, 4, 1, 3, 10, 1, 10, 1, 11, 1, 2, 1, 4, 1, 11, 1, 9, 1, 10, 1, 4, 9, 1, 1, 1, 1, 1, 1, 1, 1, 5, 1, 9, 10, 1, 10, 1, 11, 1, 1, 9, 10, 1, 3, 1, 9, 4, 4], 'is_beginning': [1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]} {'char': ['แ', 'พ', 'ง', 'เ', 'ว', '่', 'อ', 'ร', '์', ' ', 'เ', 'บ', 'ี', 'ย', 'ร', '์', 'ช', '้', 'า', 'ง', 'ต', '้', 'น', 'ท', 'ุ', 'น', 'ข', 'ว', 'ด', 'ล', 'ะ', 'ไ', 'ม', '่', 'ถ', 'ึ', 'ง', ' ', '5', '0', ' ', 'ข', 'า', 'ย', ' ', '1', '2', '0', ' ', '😰', '😰', '😰', '์', '\n'], 'char_type': [11, 1, 1, 11, 1, 9, 1, 1, 7, 5, 11, 1, 10, 1, 1, 7, 1, 9, 10, 1, 1, 9, 1, 1, 10, 1, 1, 1, 1, 1, 10, 11, 1, 9, 1, 10, 1, 5, 2, 2, 5, 1, 10, 1, 5, 2, 2, 2, 5, 4, 4, 4, 7, 4], 'is_beginning': [1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]} ``` ### Data Fields - `char`: characters - `char_type`: character types as adopted from []() by [deepcut](https://github.com/rkcosmos/deepcut) - `is_beginning`: 1 if beginning of word else 0 ### Data Splits No explicit split is given. ## Dataset Creation ### Curation Rationale The dataset was created from `wisesight-sentiment` to be a word tokenization benchmark that is closer to texts in the wild, since other Thai word tokenization datasets such as [BEST](https://aiforthai.in.th/corpus.php) are mostly texts from news articles, which do not have some real-world features like misspellings. ### Source Data #### Initial Data Collection and Normalization The data are sampled from `wisesight-sentiment` which has the following data collection and normalization: - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. 
- Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remaining in the set, please tell us so we can remove it. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - A large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. - (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated messages (exact match) are removed. #### Who are the source language producers? Social media users in Thailand ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The annotation was done by several people, including Nitchakarn Chantarapratin, [Pattarawat Chormai](https://github.com/heytitle), [Ponrawee Prasertsom](https://github.com/ponrawee), [Jitkapat Sawatphol](https://github.com/jitkapat), [Nozomi Yamada](https://github.com/nozomiyamada), and [Attapol Rutherford](https://attapol.github.io/). ### Personal and Sensitive Information - The authors tried to exclude any known personally identifiable information from this data set. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remaining in the set, please tell us so we can remove it. ## Considerations for Using the Data ### Social Impact of Dataset - word tokenization dataset from texts in the wild ### Discussion of Biases - no guideline is given by the authors on word tokenization ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Thanks to the [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/ ### Licensing Information CC0 ### Citation Information Dataset: ``` @software{bact_2019_3457447, author = {Suriyawongkul, Arthit and Chuangsuwanich, Ekapol and Chormai, Pattarawat and Polpanumas, Charin}, title = {PyThaiNLP/wisesight-sentiment: First release}, month = sep, year = 2019, publisher = {Zenodo}, version = {v1.0}, doi = {10.5281/zenodo.3457447}, url = {https://doi.org/10.5281/zenodo.3457447} } ``` Character type features: ``` @inproceedings{haruechaiyasak2009tlex, title={TLex: Thai lexeme analyser based on the conditional random fields}, author={Haruechaiyasak, Choochart and Kongyoung, Sarawoot}, booktitle={Proceedings of 8th International Symposium on Natural Language Processing}, year={2009} } ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
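A minimal sketch of reconstructing word boundaries from the character-level annotations, assuming the `wisesight1000` identifier above and the documented convention that `is_beginning` is 1 on word-initial characters:

```python
from datasets import load_dataset

ds = load_dataset("wisesight1000", split="train")

def chars_to_words(example):
    # A character tagged is_beginning == 1 opens a new word; every other
    # character extends the word currently being built.
    words, current = [], []
    for ch, begin in zip(example["char"], example["is_beginning"]):
        if begin == 1 and current:
            words.append("".join(current))
            current = []
        current.append(ch)
    if current:
        words.append("".join(current))
    return words

print(chars_to_words(ds[0])[:10])
```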
wmt20_mlqe_task3
2023-01-25T15:02:49.000Z
[ "task_categories:translation", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:extended|amazon_us_reviews", "language:en", "language:fr", "license:unknown", "...
null
This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task. The goal of Task 3 is to predict document-level quality scores as well as fine-grained annotations.
Not available.
null
0
3
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - en - fr license: - unknown multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - extended|amazon_us_reviews task_categories: - translation task_ids: [] pretty_name: WMT20 - MultiLingual Quality Estimation (MLQE) Task3 dataset_info: features: - name: document_id dtype: string - name: source_segments sequence: string - name: source_tokenized sequence: string - name: mt_segments sequence: string - name: mt_tokenized sequence: string - name: annotations sequence: - name: segment_id sequence: int32 - name: annotation_start sequence: int32 - name: annotation_length sequence: int32 - name: severity dtype: class_label: names: '0': minor '1': major '2': critical - name: severity_weight dtype: float32 - name: category dtype: class_label: names: '0': Addition '1': Agreement '2': Ambiguous Translation '3': Capitalization '4': Character Encoding '5': Company Terminology '6': Date/Time '7': Diacritics '8': Duplication '9': False Friend '10': Grammatical Register '11': Hyphenation '12': Inconsistency '13': Lexical Register '14': Lexical Selection '15': Named Entity '16': Number '17': Omitted Auxiliary Verb '18': Omitted Conjunction '19': Omitted Determiner '20': Omitted Preposition '21': Omitted Pronoun '22': Orthography '23': Other POS Omitted '24': Over-translation '25': Overly Literal '26': POS '27': Punctuation '28': Shouldn't Have Been Translated '29': Shouldn't have been translated '30': Spelling '31': Tense/Mood/Aspect '32': Under-translation '33': Unidiomatic '34': Unintelligible '35': Unit Conversion '36': Untranslated '37': Whitespace '38': Word Order '39': Wrong Auxiliary Verb '40': Wrong Conjunction '41': Wrong Determiner '42': Wrong Language Variety '43': Wrong Preposition '44': Wrong Pronoun - name: token_annotations sequence: - name: segment_id sequence: int32 - name: first_token sequence: int32 - name: last_token sequence: int32 - name: token_after_gap sequence: int32 - name: severity dtype: class_label: names: '0': minor '1': major '2': critical - name: category dtype: class_label: names: '0': Addition '1': Agreement '2': Ambiguous Translation '3': Capitalization '4': Character Encoding '5': Company Terminology '6': Date/Time '7': Diacritics '8': Duplication '9': False Friend '10': Grammatical Register '11': Hyphenation '12': Inconsistency '13': Lexical Register '14': Lexical Selection '15': Named Entity '16': Number '17': Omitted Auxiliary Verb '18': Omitted Conjunction '19': Omitted Determiner '20': Omitted Preposition '21': Omitted Pronoun '22': Orthography '23': Other POS Omitted '24': Over-translation '25': Overly Literal '26': POS '27': Punctuation '28': Shouldn't Have Been Translated '29': Shouldn't have been translated '30': Spelling '31': Tense/Mood/Aspect '32': Under-translation '33': Unidiomatic '34': Unintelligible '35': Unit Conversion '36': Untranslated '37': Whitespace '38': Word Order '39': Wrong Auxiliary Verb '40': Wrong Conjunction '41': Wrong Determiner '42': Wrong Language Variety '43': Wrong Preposition '44': Wrong Pronoun - name: token_index sequence: sequence: sequence: int32 - name: total_words dtype: int32 config_name: plain_text splits: - name: train num_bytes: 10762355 num_examples: 1448 - name: test num_bytes: 745260 num_examples: 180 - name: validation num_bytes: 1646596 num_examples: 200 download_size: 3534634 dataset_size: 13154211 --- # Dataset Card for WMT20 - MultiLingual Quality Estimation (MLQE) Task3 ## Table of Contents - 
[Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WMT20 Quality Estimation Shared Task](http://www.statmt.org/wmt20/quality-estimation-task.html) - **Repository**: [Github repository](https://github.com/deep-spin/deep-spin.github.io/tree/master/docs/data/wmt2020_qe) - **Paper:** *Not available* ### Dataset Summary From the homepage: *This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.* *The goal of this task 3 is to predict document-level quality scores as well as fine-grained annotations.* *Each document has a product title and its description, and is annotated for translation errors according to the MQM framework. Each error annotation has:* - ***Word span(s).*** *Errors may consist of one or more words, not necessarily contiguous.* - ***Severity.*** *An error can be minor (if it doesn't lead to a loss of meaning and it doesn't confuse or mislead the user), major (if it changes the meaning) or critical (if it changes the meaning and carry any type of implication, or could be seen as offensive).* - ***Type.*** *A label specifying the error type, such as wrong word order, missing words, agreement, etc. They may provide additional information, but systems don't need to predict them.* ### Supported Tasks and Leaderboards From the homepage: *Submissions will be evaluated as in Task 1, in terms of Pearson's correlation between the true and predicted MQM document-level scores. Additionally, the predicted annotations will be evaluated in terms of their F1 scores with respect to the gold annotations. The [official evaluation scripts](https://github.com/sheffieldnlp/qe-eval-scripts) are available.* ### Languages There is a single language pair in the dataset: English (`en`) - French (`fr`). 
## Dataset Structure ### Data Instances An example looks like this: ``` { 'document_id': 'B0000568SY', 'source_segments': ['Razor Scooter Replacement Wheels Set with Bearings', 'Scooter Wheels w/Bearings-Blue'], 'source_tokenized': ['Razor Scooter Replacement Wheels Set with Bearings', 'Scooter Wheels w / Bearings-Blue'], 'mt_segments': ['Roues de rechange Razor Scooter sertie de roulements', 'Roues de scooter w/roulements-bleu'], 'mt_tokenized': ['Roues de rechange Razor Scooter sertie de roulements', 'Roues de scooter w / roulements-bleu'], 'annotations': { 'segment_id': [[0], [1], [1], [0, 0], [0], [1], [1]], 'annotation_start': [[42], [19], [9], [0, 32], [9], [17], [30]], 'annotation_length': [[10], [10], [7], [5, 6], [8], [1], [4]], 'severity': [0, 0, 0, 0, 0, 1, 0], 'severity_weight': [1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 1.0], 'category': [3, 3, 3, 1, 3, 36, 3], }, 'token_annotations': { 'category': [3, 3, 3, 1, 3, 36, 3], 'first_token': [[7], [5], [2], [0, 5], [2], [3], [5]], 'last_token': [[7], [5], [2], [0, 5], [2], [3], [5]], 'segment_id': [[0], [1], [1], [0, 0], [0], [1], [1]], 'severity': [0, 0, 0, 0, 0, 1, 0], 'token_after_gap': [[-1], [-1], [-1], [-1, -1], [-1], [-1], [-1]] }, 'token_index': [[[0, 5], [6, 2], [9, 8], [18, 5], [24, 7], [32, 6], [39, 2], [42, 10]], [[0, 5], [6, 2], [9, 7], [17, 1], [18, 1], [19, 15]]], 'total_words': 16 } ``` ### Data Fields - `document_id`: the document id (name of the folder). - `source_segments`: the original source text, one sentence per line (i.e. per element of the list). - `source_tokenized`: a tokenized version of `source_segments`. - `mt_segments`: the original machine-translated text, one sentence per line (i.e. per element of the list). - `mt_tokenized`: a tokenized version of `mt_segments`. Default value is `[]` when this information is not available (it happens 3 times in the train set: `B0001BW0PQ`, `B0001GS19U` and `B000A6SMJ0`). - `annotations`: error annotations for the document. Each item of the list corresponds to an error annotation, which in turn may contain one or more error spans. Error fields are encoded in a dictionary. In the case of a multi-span error, multiple starting positions and lengths are encoded in the list. Note that these positions point to `mt_segments`, not `mt_tokenized`. - `segment_id`: List of list of integers. Id of each error. - `annotation_start`: List of list of integers. Start of each error. - `annotation_length`: List of list of integers. Length of each error. - `severity`: List of class-label ids. Severity category of each error. - `severity_weight`: List of floats. Severity weight of each error. - `category`: List of class-label ids. Category of each error. See the 45 categories in `_ANNOTATION_CATEGORIES_MAPPING`. - `token_annotations`: tokenized version of `annotations`. Each error span that contains one or more tokens has a "first token" and "last token". Again, multi-span errors have their first and last tokens encoded in a list. When a span is over a gap between two tokens, the "first" and "last" positions are `-1` (encoded as `-` in the original data), and instead the `token_after_gap` column points to the token immediately after the gap. In case of a gap occurring at the end of the sentence, this value will be equal to the number of tokens. - `segment_id`: List of list of integers. Id of each error. - `first_token`: List of list of integers. Start of each error. - `last_token`: List of list of integers. End of each error. - `token_after_gap`: List of list of integers. Token after gap of each error.
- `severity`: List of class-label ids. Severity category of each error. - `category`: List of class-label ids. Category of each error. See the 45 categories in `_ANNOTATION_CATEGORIES_MAPPING`. - `token_index`: a mapping of tokens to their positions in `mt_segments`. For each token, a start position and a length are encoded in a list of length 2, and the tokens of each segment form one item of the outer list. - `total_words`: total number of words in the document. ``` _ANNOTATION_CATEGORIES_MAPPING = { 0: 'Addition', 1: 'Agreement', 2: 'Ambiguous Translation', 3: 'Capitalization', 4: 'Character Encoding', 5: 'Company Terminology', 6: 'Date/Time', 7: 'Diacritics', 8: 'Duplication', 9: 'False Friend', 10: 'Grammatical Register', 11: 'Hyphenation', 12: 'Inconsistency', 13: 'Lexical Register', 14: 'Lexical Selection', 15: 'Named Entity', 16: 'Number', 17: 'Omitted Auxiliary Verb', 18: 'Omitted Conjunction', 19: 'Omitted Determiner', 20: 'Omitted Preposition', 21: 'Omitted Pronoun', 22: 'Orthography', 23: 'Other POS Omitted', 24: 'Over-translation', 25: 'Overly Literal', 26: 'POS', 27: 'Punctuation', 28: "Shouldn't Have Been Translated", 29: "Shouldn't have been translated", 30: 'Spelling', 31: 'Tense/Mood/Aspect', 32: 'Under-translation', 33: 'Unidiomatic', 34: 'Unintelligible', 35: 'Unit Conversion', 36: 'Untranslated', 37: 'Whitespace', 38: 'Word Order', 39: 'Wrong Auxiliary Verb', 40: 'Wrong Conjunction', 41: 'Wrong Determiner', 42: 'Wrong Language Variety', 43: 'Wrong Preposition', 44: 'Wrong Pronoun' } ``` ### Data Splits The dataset contains 1,448 documents for training, 200 documents for validation and 180 for (blind) test (all English-French). ## Dataset Creation ### Curation Rationale The data is derived from the [Amazon Product Reviews dataset](http://jmcauley.ucsd.edu/data/amazon/). ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` Not available. ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
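A minimal sketch of recovering the annotated error spans from `mt_segments`, assuming the `wmt20_mlqe_task3` identifier and the `plain_text` configuration above; the severity names follow the label order given in the metadata:

```python
from datasets import load_dataset

ds = load_dataset("wmt20_mlqe_task3", "plain_text", split="train")
doc = ds[0]

severities = ["minor", "major", "critical"]  # label order from the metadata above
ann = doc["annotations"]

# An annotation may cover several (possibly non-contiguous) spans; the
# start/length offsets index into the untokenized `mt_segments` strings.
for seg_ids, starts, lengths, sev in zip(
    ann["segment_id"], ann["annotation_start"], ann["annotation_length"], ann["severity"]
):
    spans = [
        doc["mt_segments"][seg][start : start + length]
        for seg, start, length in zip(seg_ids, starts, lengths)
    ]
    print(severities[sev], spans)
```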
wmt_t2t
2023-04-05T13:44:08.000Z
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:10M<n<100M", "source_datasets:extended|europarl_bilingual", "source_datasets:extended|news_commentary", "source_datasets:extended|opus_paracrawl", "source_d...
null
null
@InProceedings{bojar-EtAl:2014:W14-33, author = {Bojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and Saint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna, Ale\v{s}}, title = {Findings of the 2014 Workshop on Statistical Machine Translation}, booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation}, month = {June}, year = {2014}, address = {Baltimore, Maryland, USA}, publisher = {Association for Computational Linguistics}, pages = {12--58}, url = {http://www.aclweb.org/anthology/W/W14/W14-3302} }
null
0
3
--- annotations_creators: - no-annotation language_creators: - found language: - de - en license: - unknown multilinguality: - translation size_categories: - 10M<n<100M source_datasets: - extended|europarl_bilingual - extended|news_commentary - extended|opus_paracrawl - extended|un_multi task_categories: - translation task_ids: [] pretty_name: WMT T2T paperswithcode_id: null dataset_info: features: - name: translation dtype: translation: languages: - de - en config_name: de-en splits: - name: train num_bytes: 1385110179 num_examples: 4592289 - name: validation num_bytes: 736415 num_examples: 3000 - name: test num_bytes: 777334 num_examples: 3003 download_size: 1728762345 dataset_size: 1386623928 --- # Dataset Card for "wmt_t2t" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/translate_ende.py](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/translate_ende.py) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.73 GB - **Size of the generated dataset:** 1.39 GB - **Total amount of disk used:** 3.11 GB ### Dataset Summary The WMT EnDe Translate dataset used by the Tensor2Tensor library. Translation dataset based on the data from statmt.org. Versions exist for different years using a combination of data sources. The base `wmt` allows you to create a custom dataset by choosing your own data/language pair. 
This can be done as follows: ```python from datasets import inspect_dataset, load_dataset_builder inspect_dataset("wmt_t2t", "path/to/scripts") builder = load_dataset_builder( "path/to/scripts/wmt_utils.py", language_pair=("fr", "de"), subsets={ datasets.Split.TRAIN: ["commoncrawl_frde"], datasets.Split.VALIDATION: ["euelections_dev2019"], }, ) # Standard version builder.download_and_prepare() ds = builder.as_dataset() # Streamable version ds = builder.as_streaming_dataset() ``` ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### de-en - **Size of downloaded dataset files:** 1.73 GB - **Size of the generated dataset:** 1.39 GB - **Total amount of disk used:** 3.11 GB An example of 'validation' looks as follows. ``` { "translation": { "de": "Just a test sentence.", "en": "Just a test sentence." } } ``` ### Data Fields The data fields are the same among all splits. #### de-en - `translation`: a multilingual `string` variable, with possible languages including `de`, `en`. ### Data Splits |name | train |validation|test| |-----|------:|---------:|---:| |de-en|4592289| 3000|3003| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{bojar-EtAl:2014:W14-33, author = {Bojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and Saint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna, Ale\v{s}}, title = {Findings of the 2014 Workshop on Statistical Machine Translation}, booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation}, month = {June}, year = {2014}, address = {Baltimore, Maryland, USA}, publisher = {Association for Computational Linguistics}, pages = {12--58}, url = {http://www.aclweb.org/anthology/W/W14/W14-3302} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
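For the prebuilt `de-en` configuration, a plain load without the custom-builder route shown above should suffice; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("wmt_t2t", "de-en", split="validation")
print(ds[0]["translation"])  # {'de': '...', 'en': '...'}
```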
yoruba_text_c3
2023-06-16T15:06:58.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:y...
null
Yoruba Text C3 is the largest collection of Yoruba texts, collected and used to train FastText embeddings in the YorubaTwi Embedding paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
@inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yoruba and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", language = "English", ISBN = "979-10-95546-34-4", }
null
1
3
--- annotations_creators: - expert-generated language_creators: - found language: - yo license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: Yorùbá Text C3 dataset_info: - config_name: plain_text features: - name: text dtype: string splits: - name: train num_bytes: 77094396 num_examples: 562238 download_size: 75407454 dataset_size: 77094396 - config_name: yoruba_text_c3 features: - name: text dtype: string splits: - name: train num_bytes: 77094396 num_examples: 562238 download_size: 75407454 dataset_size: 77094396 --- # Dataset Card for Yorùbá Text C3 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding/ - **Paper:** https://aclanthology.org/2020.lrec-1.335/ - **Leaderboard:** - **Point of Contact:** [Jesujoba Alabi](mailto:alabijesujoba@gmail.com) ### Dataset Summary Yorùbá Text C3 was collected from various web sources (the Bible, JW300, books, news articles, Wikipedia, etc.) to compare pre-trained word embeddings (FastText and BERT) with embeddings trained on curated Yorùbá texts. The dataset consists of clean texts (i.e. texts with proper Yorùbá diacritics) like the Bible and JW300, and noisy texts (with incorrect or absent diacritics) from other online sources like Wikipedia, BBC Yorùbá, and VON Yorùbá. ### Supported Tasks and Leaderboards For training word embeddings and language models on Yoruba texts (a minimal loading sketch is included at the end of this card). ### Languages The language supported is Yorùbá. ## Dataset Structure ### Data Instances A data point is one sentence per line. { 'text': 'lílo àkàbà — ǹjẹ́ o máa ń ṣe àyẹ̀wò wọ̀nyí tó lè dáàbò bò ẹ́' } ### Data Fields - `text`: a `string` feature; one sentence of text per line. ### Data Splits Contains only the training split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources for a new language, Yorùbá. ### Source Data #### Initial Data Collection and Normalization The dataset comes from various web sources such as the Bible, JW300, books, news articles, Wikipedia, etc. See Table 1 in the [paper](https://www.aclweb.org/anthology/2020.lrec-1.335/) for a summary of the dataset and its statistics. #### Who are the source language producers? 
[Jehovah's Witnesses](https://www.jw.org/yo/) (JW300) [Yorùbá Bible](http://www.bible.com/) [Yorùbá Wikipedia](https://dumps.wikimedia.org/yowiki) [BBC Yorùbá](https://bbc.com/yoruba) [VON Yorùbá](https://von.gov.ng/) [Global Voices Yorùbá](https://yo.globalvoices.org) And other sources, see https://www.aclweb.org/anthology/2020.lrec-1.335/ ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The dataset is biased toward the religious domain (Christianity) because of the inclusion of JW300 and the Bible. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The datasets were curated by Jesujoba Alabi and David Adelani, students of Saarland University, Saarbrücken, Germany. ### Licensing Information The data is released under the [Creative Commons Attribution-NonCommercial 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license. ### Citation Information ``` @inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", abstract = "The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor{\`u}b{\'a} and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor{\`u}b{\'a} and Twi. We extend the analysis to contextual word embeddings and evaluate multilingual BERT on a named entity recognition task. For this, we annotate with named entities the Global Voices corpus for Yor{\`u}b{\'a}. 
As output of the work, we provide corpora, embeddings and the test suits for both languages.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
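As a usage note for the Yorùbá Text C3 card above, here is a minimal loading sketch. The dataset and config identifiers are taken from the metadata at the top of this entry; the exact loading path may differ depending on how the data is hosted:

```python
from datasets import load_dataset

# Config names `plain_text` and `yoruba_text_c3` come from `dataset_info` above;
# the dataset ships a single `train` split.
c3 = load_dataset("yoruba_text_c3", "yoruba_text_c3", split="train")

print(c3[0]["text"])  # one Yorùbá sentence per example
```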
ASCCCCCCCC/amazon_zh
2022-02-17T02:16:59.000Z
[ "license:apache-2.0", "region:us" ]
ASCCCCCCCC
null
null
null
1
3
--- license: apache-2.0 --- This is a dataset of Amazon reviews.
Akila/ForgottenRealmsWikiDataset
2022-12-18T12:28:34.000Z
[ "region:us" ]
Akila
null
null
null
2
3
## Citing this work ``` @inproceedings{peiris2022synthesis, title={{Synthesis and Evaluation of a Domain-specific Large Data Set for Dungeons \& Dragons}}, author={Akila Peiris and Nisansa de Silva}, booktitle={Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation}, pages={to appear}, year={2022} } ```
adorkin/extended_tweet_emojis
2023-02-07T12:18:57.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "region:us" ]
adorkin
null
null
null
1
3
--- task_categories: - text-classification language: - en size_categories: - 10K<n<100K --- # Dataset Card for Extended Tweet Emojis ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset comprises the `emoji` and `emotion` subsets of [tweet_eval](https://huggingface.co/datasets/tweet_eval). The motivation is that the original `emoji` subset essentially contains only positive/neutral emojis, while the `emotion` subset contains a varied array of emotions. So the idea was to replace emotion labels with corresponding emojis (e.g., sad, angry) in the `emotion` subset and mix it together with the `emoji` subset (a rough construction sketch is included at the end of this card). ### Supported Tasks and Leaderboards Similar to tweet_eval, the expected usage is text classification. ### Languages Only English is present in the dataset. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations Refer to [tweet_eval](https://huggingface.co/datasets/tweet_eval). No additional data was added. #### Annotation process Same as tweet_eval. #### Who are the annotators? Same as tweet_eval. ### Personal and Sensitive Information Same as tweet_eval. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
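To make the construction described in the summary above concrete, here is a rough sketch of how the two `tweet_eval` subsets could be combined with 🤗 Datasets. The label-offset scheme (shifting the four emotion classes past the 20 emoji classes) is an assumption for illustration, not the author's documented mapping:

```python
from datasets import Value, concatenate_datasets, load_dataset

emoji = load_dataset("tweet_eval", "emoji", split="train")      # 20 emoji classes
emotion = load_dataset("tweet_eval", "emotion", split="train")  # 4 emotion classes

# Cast both ClassLabel columns to plain integers so the features match.
emoji = emoji.cast_column("label", Value("int64"))
emotion = emotion.cast_column("label", Value("int64"))

# Assumed scheme: append the emotion classes after the emoji classes (ids 20-23).
emotion = emotion.map(lambda ex: {"label": ex["label"] + 20})

combined = concatenate_datasets([emoji, emotion]).shuffle(seed=42)
```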
AlekseyKorshuk/comedy-scripts
2022-02-11T14:50:39.000Z
[ "region:us" ]
AlekseyKorshuk
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
1
3
Entry not found
GEM/ART
2022-10-24T13:01:25.000Z
[ "task_categories:other", "annotations_creators:automatically-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0", "reasoning", "arxiv:1908.05739", "arxiv:1906.05317", "region:us" ]
GEM
the Abductive Natural Language Generation Dataset from AI2
@InProceedings{anli, author = {Chandra, Bhagavatula and Ronan, Le Bras and Chaitanya, Malaviya and Keisuke, Sakaguchi and Ari, Holtzman and Hannah, Rashkin and Doug, Downey and Scott, Wen-tau Yih and Yejin, Choi}, title = {Abductive Commonsense Reasoning}, year = {2020} }
null
3
3
--- annotations_creators: - automatically-created language_creators: - unknown language: - en license: - apache-2.0 multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - other task_ids: [] pretty_name: ART tags: - reasoning --- # Dataset Card for GEM/ART ## Dataset Description - **Homepage:** http://abductivecommonsense.xyz/ - **Repository:** https://storage.googleapis.com/ai2-mosaic/public/abductive-commonsense-reasoning-iclr2020/anlg.zip - **Paper:** https://openreview.net/pdf?id=Byg1v1HKDB - **Leaderboard:** N/A - **Point of Contact:** Chandra Bhagavatula ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/ART). ### Dataset Summary Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. This data loader focuses on abductive NLG: a conditional English generation task for explaining given observations in natural language. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/ART') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/ART). #### website [Website](http://abductivecommonsense.xyz/) #### paper [OpenReview](https://openreview.net/pdf?id=Byg1v1HKDB) #### authors Chandra Bhagavatula (AI2), Ronan Le Bras (AI2), Chaitanya Malaviya (AI2), Keisuke Sakaguchi (AI2), Ari Holtzman (AI2, UW), Hannah Rashkin (AI2, UW), Doug Downey (AI2), Wen-tau Yih (AI2), Yejin Choi (AI2, UW) ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Website](http://abductivecommonsense.xyz/) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Google Storage](https://storage.googleapis.com/ai2-mosaic/public/abductive-commonsense-reasoning-iclr2020/anlg.zip) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [OpenReview](https://openreview.net/pdf?id=Byg1v1HKDB) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{ Bhagavatula2020Abductive, title={Abductive Commonsense Reasoning}, author={Chandra Bhagavatula and Ronan Le Bras and Chaitanya Malaviya and Keisuke Sakaguchi and Ari Holtzman and Hannah Rashkin and Doug Downey and Wen-tau Yih and Yejin Choi}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=Byg1v1HKDB} } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Chandra Bhagavatula #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> chandrab@allenai.org #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? 
<!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> Crowdworkers on the Amazon Mechanical Turk platform based in the U.S., Canada, U.K. and Australia. #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> apache-2.0: Apache License 2.0 #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> To study the viability of language-based abductive reasoning. Training and evaluating models to generate a plausible hypothesis to explain two given observations. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Reasoning ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `industry` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> Allen Institute for AI #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Chandra Bhagavatula (AI2), Ronan Le Bras (AI2), Chaitanya Malaviya (AI2), Keisuke Sakaguchi (AI2), Ari Holtzman (AI2, UW), Hannah Rashkin (AI2, UW), Doug Downey (AI2), Wen-tau Yih (AI2), Yejin Choi (AI2, UW) #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> Allen Institute for AI #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Chandra Bhagavatula (AI2), Ronan Le Bras (AI2), Aman Madaan (CMU), Nico Daheim (RWTH Aachen University) ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> - `observation_1`: A string describing an observation / event. - `observation_2`: A string describing an observation / event. - `label`: A string that plausibly explains why observation_1 and observation_2 might have happened. #### How were labels chosen? <!-- info: How were the labels chosen? --> <!-- scope: microscope --> Explanations were authored by crowdworkers on the Amazon Mechanical Turk platform using a custom template designed by the creators of the dataset. #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { 'gem_id': 'GEM-ART-validation-0', 'observation_1': 'Stephen was at a party.', 'observation_2': 'He checked it but it was completely broken.', 'label': 'Stephen knocked over a vase while drunk.' } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> - `train`: Consists of training instances. - `dev`: Consists of dev instances. - `test`: Consists of test instances. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? 
--> <!-- scope: microscope --> Abductive reasoning is a crucial capability of humans and ART is the first dataset curated to study language-based abductive reasoning. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> no #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Whether models can reason abductively about a given pair of observations. ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> no #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> - [Paper](https://arxiv.org/abs/1908.05739) - [Code](https://github.com/allenai/abductive-commonsense-reasoning) ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Whether models can reason abductively about a given pair of observations. #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `BLEU`, `BERT-Score`, `ROUGE` #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> no ## Dataset Curation ### Original Curation #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Crowdsourced` #### Where was it crowdsourced? <!-- info: If crowdsourced, where from? --> <!-- scope: periscope --> `Amazon Mechanical Turk` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> Language producers were English speakers in the U.S., Canada, U.K. and Australia. #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> No #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by crowdworker #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> algorithmically #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> Adversarial filtering algorithm as described in the [paper](https://arxiv.org/abs/1908.05739) ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> automatically created #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no #### Annotation Values <!-- info: Purpose and values for each annotation --> <!-- scope: microscope --> Each observation is associated with a list of COMET (https://arxiv.org/abs/1906.05317) inferences. #### Any Quality Control? 
<!-- info: Quality control measures? --> <!-- scope: telescope --> none ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII #### Justification for no PII <!-- info: Provide a justification for selecting `no PII` above. --> <!-- scope: periscope --> The dataset contains day-to-day events. It does not contain names, emails, addresses, etc. ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no ## Considerations for Using the Data ### PII Risks and Liability #### Potential PII Risk <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. --> <!-- scope: microscope --> None ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `public domain` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `public domain` ### Known Technical Limitations
Graphcore/vqa-lxmert
2022-10-25T08:59:34.000Z
[ "language:en", "license:cc-by-4.0", "region:us" ]
Graphcore
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
@inproceedings{antol2015vqa, title={Vqa: Visual question answering}, author={Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C Lawrence and Parikh, Devi}, booktitle={Proceedings of the IEEE international conference on computer vision}, pages={2425--2433}, year={2015} }
null
0
3
--- language: - en license: - cc-by-4.0 ---
GroNLP/ik-nlp-22_pestyle
2022-10-25T09:06:27.000Z
[ "task_categories:translation", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:it", "license:other", "region:us" ]
GroNLP
This dataset contains a sample of sentences taken from the FLORES-101 dataset that were either translated from scratch or post-edited from an existing automatic translation by three human translators. Translations were performed for the English-Italian language pair, and translators' behavioral data (keystrokes, pauses, editing times) were collected using the PET platform.
No citation information available.
null
0
3
--- annotations_creators: - machine-generated - expert-generated language_creators: - found language: - en - it license: - other multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - original task_categories: - translation pretty_name: iknlp22-pestyle --- # Dataset Card for IK-NLP-22 Project 1: A Study in Post-Editing Stylometry ## Table of Contents - [Dataset Card for IK-NLP-22 Project 1: A Study in Post-Editing Stylometry](#dataset-card-for-ik-nlp-22-project-1-a-study-in-post-editing-stylometry) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Train Split](#train-split) - [Test splits](#test-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Source:** [FLORES-101](https://huggingface.co/datasets/gsarti/flores_101) - **Point of Contact:** [Gabriele Sarti](mailto:ik-nlp-course@rug.nl) ### Dataset Summary This dataset contains a sample of sentences taken from the [FLORES-101](https://huggingface.co/datasets/gsarti/flores_101) dataset that were either translated from scratch or post-edited from an existing automatic translation by three human translators. Translations were performed for the English-Italian language pair, and translators' behavioral data (keystrokes, pauses, editing times) were collected using the [PET](https://github.com/wilkeraziz/PET) platform. This dataset is made available for final projects of the 2022 edition of the Natural Language Processing course at the [Information Science Master's Degree](https://www.rug.nl/masters/information-science/?lang=en) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) and [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti) with the assistance of [Anjali Nair](https://nl.linkedin.com/in/anjalinair012). **Disclaimer**: *This repository is provided without direct data access due to currently unpublished results.* _**For this reason, it is strictly forbidden to share or publish any of the data associated with this repository**_. *Students will be provided with a compressed folder containing the data upon choosing a project based on this dataset. To load the dataset using 🤗 Datasets, download and unzip the provided folder and pass it to the* `load_dataset` *method as:* `datasets.load_dataset('GroNLP/ik-nlp-22_pestyle', 'full', data_dir='path/to/unzipped/folder')` ### Languages The language data is in English (BCP-47 `en`) and Italian (BCP-47 `it`). ## Dataset Structure ### Data Instances The dataset contains four configurations: `full`, `test_mask_subject`, `test_mask_modality`, `test_mask_time`. `full` contains the main `train` split in which all fields are available. The other three, `test_mask_subject`, `test_mask_modality`, `test_mask_time`, contain a `test` split each with different fields removed to avoid information leaking during evaluation. See more details in the [Data Splits](#data-splits) section. ### Data Fields The following fields are contained in the training set: |Field|Description| |-----|-----------| |`item_id` | The sentence identifier. 
The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 semantically-related sentences each. | |`subject_id` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. | |`modality` | The modality of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). | |`src_text` | The original source sentence extracted from Wikinews, Wikibooks or Wikivoyage. | |`mt_text` | Missing if `modality` is `ht`. Otherwise, contains the automatically-translated sentence before post-editing. | |`tgt_text` | Final sentence produced by the translator (either via translation from scratch of `src_text` or post-editing `mt_text`). | |`edit_time` | Total editing time for the translation in seconds. | |`k_total` | Total number of keystrokes for the translation. | |`k_letter` | Total number of letter keystrokes for the translation. | |`k_digit` | Total number of digit keystrokes for the translation. | |`k_white` | Total number of whitespace keystrokes for the translation. | |`k_symbol` | Total number of symbol (punctuation, etc.) keystrokes for the translation. | |`k_nav` | Total number of navigation keystrokes (left-right arrows, mouse clicks) for the translation. | |`k_erase` | Total number of erase keystrokes (backspace, cancel) for the translation. | |`k_copy` | Total number of copy (Ctrl + C) actions during the translation. | |`k_cut` | Total number of cut (Ctrl + X) actions during the translation. | |`k_paste` | Total number of paste (Ctrl + V) actions during the translation. | |`n_pause_geq_300` | Number of pauses of 300ms or more during the translation. | |`len_pause_geq_300` | Total duration of pauses of 300ms or more, in milliseconds. | |`n_pause_geq_1000` | Number of pauses of 1s or more during the translation. | |`len_pause_geq_1000` | Total duration of pauses of 1000ms or more, in milliseconds. | |`num_annotations` | Number of times the translator focused the textbox for performing the translation of the sentence during the translation session. E.g. 1 means the translation was performed once and never revised. | |`n_insert` | Number of post-editing insertions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. | |`n_delete` | Number of post-editing deletions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. | |`n_substitute` | Number of post-editing substitutions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. | |`n_shift` | Number of post-editing shifts (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. | |`bleu` | Sentence-level BLEU score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. | |`chrf` | Sentence-level chrF score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. 
| |`ter` | Sentence-level TER score between MT and post-edited fields (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. | |`aligned_edit` | Aligned visual representation of REF (`mt_text`), HYP (`tgt_text`) and edit operations (I = Insertion, D = Deletion, S = Substitution) performed on the field. Replace `\\n` with `\n` to show the three aligned rows.| ### Data Splits | config| train| test| |------:|-----:|----:| |`main` | 1170 | 120 | #### Train Split The `train` split contains a total of 1170 triplets (or pairs, when translation from scratch is performed) annotated with behavioral data produced during the translation. The following is an example of the subject `t3` post-editing a machine translation produced by system 2 (tasktype `pe2`) taken from the `train` split. The field `aligned_edit` is shown over three lines to provide a visual understanding of its contents. ```json { "item_id": 1072, "subject_id": "t3", "tasktype": "pe2", "src_text": "At the beginning dress was heavily influenced by the Byzantine culture in the east.", "mt_text": "All'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.", "tgt_text": "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.", "edit_time": 45.687, "k_total": 51, "k_letter": 31, "k_digit": 0, "k_white": 2, "k_symbol": 3, "k_nav": 7, "k_erase": 3, "k_copy": 0, "k_cut": 0, "k_paste": 0, "n_pause_geq_300": 9, "len_pause_geq_300": 40032, "n_pause_geq_1000": 5, "len_pause_geq_1000": 38392, "num_annotations": 1, "n_insert": 0.0, "n_delete": 1.0, "n_substitute": 3.0, "n_shift": 0.0, "bleu": 47.99, "chrf": 62.05, "ter": 40.0, "aligned_edit": "REF: all'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.\\n HYP: ********** inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.\\n EVAL: D S S S" } ``` The text is provided as-is, without further preprocessing or tokenization. #### Test splits The three `test` splits (one per configuration) contain the same 120 entries each, following the same structure as `train`. Each test split omits some of the fields to prevent leakage of information: - In `test_mask_subject` the `subject_id` is absent, for the main task of post-editor stylometry. - In `test_mask_modality` the following fields are absent for the modality prediction extra task: `modality`, `mt_text`, `n_insert`, `n_delete`, `n_substitute`, `n_shift`, `ter`, `bleu`, `chrf`, `aligned_edit`. - In `test_mask_time` the following fields are absent for the time and pause prediction extra task: `edit_time`, `n_pause_geq_300`, `len_pause_geq_300`, `n_pause_geq_1000`, and `len_pause_geq_1000`. ### Dataset Creation The dataset was parsed from PET XML files into CSV format using a script adapted from the one by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral-ruiz) found at the following link: [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers) ## Additional Information ### Dataset Curators For problems related to this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl). ### Licensing Information It is forbidden to share or publish the data associated with this 🤗 Dataset version. ### Citation Information No citation information is provided for this dataset.
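Building on the loading call and the behavioral fields documented in the card above, here is a minimal analysis sketch. The local data path is a placeholder (the dataset requires the course-provided folder), and the derived feature is illustrative:

```python
import datasets

# Load the `full` config from a locally unzipped folder, as described in the card.
pestyle = datasets.load_dataset(
    "GroNLP/ik-nlp-22_pestyle", "full", data_dir="path/to/unzipped/folder"
)["train"]

# Derive a simple behavioral feature: the share of editing time spent in pauses
# of one second or more (`edit_time` is in seconds, `len_pause_geq_1000` in ms).
def long_pause_ratio(example):
    edit_ms = example["edit_time"] * 1000.0
    ratio = example["len_pause_geq_1000"] / edit_ms if edit_ms else 0.0
    return {"long_pause_ratio": ratio}

pestyle = pestyle.map(long_pause_ratio)
```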
Iftoo95/Arabic_Sentiment_and_Topics
2021-11-20T14:50:45.000Z
[ "region:us" ]
Iftoo95
null
null
null
0
3
An Arabic Twitter-based multi-label dataset containing two label classes: 1. Sentiment class: classifies tweets as Positive, Negative and Neutral. 2. Topic class: classifies tweets as Politics, Business and Health.
NbAiLab/norec_agg
2022-07-01T19:53:24.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2011.02686", "region:u...
NbAiLab
Aggregated NoRec_fine: A Fine-grained Sentiment Dataset for Norwegian This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC_fine and removing sentences with conflicting or no sentiment.
@InProceedings{OvrMaeBar20, author = {Lilja {\O}vrelid and Petter M{\ae}hlum and Jeremy Barnes and Erik Velldal}, title = {A Fine-grained Sentiment Dataset for {N}orwegian}, booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}}, year = 2020, address = "Marseille, France, 2020" }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - no license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for Aggregated NoReC_fine ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** N/A - **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/) - **Paper:** [A Fine-grained Sentiment Dataset for Norwegian](https://www.aclweb.org/anthology/2020.lrec-1.618/) - **Leaderboard:** N/A - **Point of Contact:** - ### Dataset Summary Aggregated NoReC_fine: A Fine-grained Sentiment Dataset for Norwegian. This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC_fine and removing sentences with conflicting or no sentiment. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in Norwegian. ## Dataset Structure ### Data Instances Example of one instance in the dataset. ```{'label': 0, 'text': 'Verre er det med slagsmålene .'}``` ### Data Fields - `id`: index of the example - `text`: Text of a sentence - `label`: The sentiment label. Here - 0 = negative - 1 = positive ### Data Splits The dataset is split into a `train`, `validation`, and `test` split with the following sizes: | | Train | Valid | Test | | ----- | ------ | ----- | ----- | | Number of examples | 2675 | 516 | 417 | ## Dataset Creation This dataset is based largely on the original data described in the paper _A Fine-Grained Sentiment Dataset for Norwegian_ by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, [paper available](https://www.aclweb.org/anthology/2020.lrec-1.618). However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset. ## Additional Information ### Licensing Information This work is licensed under a Creative Commons Attribution 4.0 International License. ### Citation Information ```latex @InProceedings{OvrMaeBar20, author = {Lilja {\O}vrelid and Petter M{\ae}hlum and Jeremy Barnes and Erik Velldal}, title = {A Fine-grained Sentiment Dataset for {N}orwegian}, booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}}, year = 2020, address = "Marseille, France, 2020" } ```
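A minimal loading sketch for the card above, assuming the dataset resolves under the `NbAiLab/norec_agg` identifier shown in this entry:

```python
from collections import Counter

from datasets import load_dataset

norec = load_dataset("NbAiLab/norec_agg")

# Inspect the binary sentiment labels (0 = negative, 1 = positive).
print(Counter(norec["train"]["label"]))
```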
SuperAI2-Machima/Yord_ThaiQA_LST20
2022-02-25T06:31:36.000Z
[ "region:us" ]
SuperAI2-Machima
null
null
null
0
3
Yord (พี่ยอด) and the junior members of the บ้านมัณิชมา team jointly built a question-answer dataset from the LST-20 dataset, using POS and NER annotations to construct the question sentences, yielding approximately 1,000 rows of question-answer data.
Tevatron/wikipedia-nq-corpus
2021-10-13T22:18:40.000Z
[ "region:us" ]
Tevatron
null
@inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", }
null
0
3
Entry not found
leey4n/KR3
2023-07-19T08:35:54.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:100K<n<1m", "language:ko", "license:cc-by-nc-sa-4.0", "region:us" ]
leey4n
null
null
null
2
3
--- annotations_creators: [] language_creators: [] language: - ko license: - cc-by-nc-sa-4.0 multilinguality: - monolingual pretty_name: KR3 size_categories: - 100K<n<1M source_datasets: [] task_categories: - text-classification task_ids: - sentiment-classification --- ### KR3: Korean Restaurant Reviews with Ratings Korean sentiment classification dataset - Size: 460K(+180K) - Language: Korean-centric ### ⚠️ Caution with `Rating` Column 0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review. **Note that rating 2 is not intended to be used directly for supervised learning (classification).** This data is included for additional pre-training purposes or other usage. In other words, this dataset is basically a **binary** sentiment classification task where labels are 0 and 1. ### 🔍 See More See all the code for crawling/preprocessing the dataset and experiments with KR3 in the [GitHub Repo](https://github.com/Wittgensteinian/kr3). See the Kaggle version in [Kaggle Dataset](https://www.kaggle.com/ninetyninenewton/kr3-korean-restaurant-reviews-with-ratings). ### Usage ```python from datasets import load_dataset kr3 = load_dataset("leey4n/KR3", name='kr3', split='train') kr3 = kr3.remove_columns(['__index_level_0__']) # Original file didn't include this column. Suspect it's a Hugging Face issue. ``` ```python # drop reviews with ambiguous label kr3_binary = kr3.filter(lambda example: example['Rating'] != 2) ``` ### License **CC BY-NC-SA 4.0** ### Legal Issues We concluded that the **non-commercial usage and release of KR3 fall into the range of fair use (공정 이용)** stated in the Korean copyright act (저작권법). We further clarify that we **did not agree to the terms of service** from any websites which might prohibit web crawling. In other words, the web crawling we did was performed without logging in to the websites. Despite all of this, feel free to contact any of the contributors if you notice any legal issues. ### Contributors & Acknowledgement (Alphabetical order) [Dongin Jung](https://github.com/dongin1009) [Hyunwoo Kwak](https://github.com/Kwak-Hyun-woo) [Kaeun Lee](https://github.com/Kaeun-Lee) [Yejoon Lee](https://github.com/wittgensteinian) This work was done as part of the 4th cohort of DIYA (DIYA 4기). Compute resources needed for the work were provided by [DIYA](https://blog.diyaml.com) and surromind.ai.
YuAnthony/tnews
2022-01-19T09:48:58.000Z
[ "region:us" ]
YuAnthony
null
null
null
0
3
Entry not found
abdusah/masc
2022-07-01T15:28:48.000Z
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:ar", "license:cc-by-nc-4.0", "region:us" ]
abdusah
null
null
null
0
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar license: - cc-by-nc-4.0 multilinguality: [] paperswithcode_id: [] pretty_name: 'MASC' size_categories: source_datasets: [] task_categories: [] task_ids: [] --- # Dataset Card for MASC: MASSIVE ARABIC SPEECH CORPUS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus - **Repository:** - **Paper:** https://dx.doi.org/10.21227/e1qb-jv46 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This corpus is a dataset that contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels. MASC is a multi-regional, multi-genre, and multi-dialect dataset intended to advance the research and development of Arabic speech technology, with a special emphasis on Arabic speech recognition. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Multi-dialect Arabic ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields #### masc_dev - speech - sampling_rate - target_text (label) ### Data Splits #### masc_dev - train: 100 - test: 40 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information Note: this is a small development set for testing. ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC 4.0 ### Citation Information [More Information Needed] ### Contributions Mohammad Al-Fetyani, Muhammad Al-Barham, Gheith Abandah, Adham Alsharkawi, Maha Dawas, August 18, 2021, "MASC: Massive Arabic Speech Corpus", IEEE Dataport, doi: https://dx.doi.org/10.21227/e1qb-jv46.
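A minimal loading sketch for this small development set; the `masc_dev` config name is inferred from the headings above and may differ in the actual repository:

```python
from datasets import load_dataset

# Hypothetical config name based on the card's `masc_dev` sections.
masc = load_dataset("abdusah/masc", "masc_dev")

sample = masc["train"][0]
print(sample["sampling_rate"], sample["target_text"])
```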
allenai/scico
2023-01-10T20:23:18.000Z
[ "task_categories:token-classification", "task_ids:coreference-resolution", "annotations_creators:domain experts", "multilinguality:monolingual", "language:en", "license:apache-2.0", "cross-document-coreference-resolution", "structure-prediction", "region:us" ]
allenai
SciCo is a dataset for hierarchical cross-document coreference resolution over scientific papers in the CS domain.
@inproceedings{ cattan2021scico, title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts}, author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope}, booktitle={3rd Conference on Automated Knowledge Base Construction}, year={2021}, url={https://openreview.net/forum?id=OFLbgUP04nC} }
null
3
3
--- annotations_creators: - domain experts language: - en license: - apache-2.0 multilinguality: - monolingual task_categories: - token-classification task_ids: - coreference-resolution paperswithcode_id: scico tags: - cross-document-coreference-resolution - structure-prediction --- # Dataset Card for SciCo ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [SciCo homepage](https://scico.apps.allenai.org/) - **Repository:** [SciCo repository](https://github.com/ariecattan/scico) - **Paper:** [SciCo: Hierarchical Cross-document Coreference for Scientific Concepts](https://openreview.net/forum?id=OFLbgUP04nC) - **Point of Contact:** [Arie Cattan](arie.cattan@gmail.com) ### Dataset Summary SciCo consists of clusters of mentions in context and a hierarchy over them. The corpus is drawn from computer science papers, and the concept mentions are methods and tasks from across CS. Scientific concepts pose significant challenges: they often take diverse forms (e.g., class-conditional image synthesis and categorical image generation) or are ambiguous (e.g., network architecture in AI vs. systems research). To build SciCo, we develop a new candidate generation approach built on three resources: a low-coverage KB ([https://paperswithcode.com/](https://paperswithcode.com/)), a noisy hypernym extractor, and curated candidates. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields * `flatten_tokens`: a single list of all tokens in the topic * `flatten_mentions`: array of mentions, each mention is represented by [start, end, cluster_id] * `tokens`: array of paragraphs * `doc_ids`: doc_id of each paragraph in `tokens` * `metadata`: metadata of each doc_id * `sentences`: sentences boundaries for each paragraph in `tokens` [start, end] * `mentions`: array of mentions, each mention is represented by [paragraph_id, start, end, cluster_id] * `relations`: array of binary relations between cluster_ids [parent, child] * `id`: id of the topic * `hard_10` and `hard_20` (only in the test set): flag for 10% or 20% hardest topics based on Levenshtein similarity. * `source`: source of this topic PapersWithCode (pwc), hypernym or curated. 
### Data Splits | |Train |Validation|Test | |--------------------|-----:|---------:|----:| |Topic | 221| 100| 200| |Documents | 9013| 4120| 8237| |Mentions | 10925| 4874|10424| |Clusters | 4080| 1867| 3711| |Relations | 2514| 1747| 2379| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was initially created by Arie Cattan, Sophie Johnson, Daniel Weld, Ido Dagan, Iz Beltagy, Doug Downey and Tom Hope, while Arie was an intern at the Allen Institute for Artificial Intelligence. ### Licensing Information This dataset is distributed under [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @inproceedings{ cattan2021scico, title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts}, author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope}, booktitle={3rd Conference on Automated Knowledge Base Construction}, year={2021}, url={https://openreview.net/forum?id=OFLbgUP04nC} } ``` ### Contributions Thanks to [@ariecattan](https://github.com/ariecattan) for adding this dataset.
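Supplementing the field descriptions in the SciCo card above, here is a small sketch of reading mention spans back out of a topic. It assumes, as the field description states, that each mention is `[paragraph_id, start, end, cluster_id]` over the tokenized paragraphs; whether the end offset is inclusive should be verified against the repository:

```python
from datasets import load_dataset

scico = load_dataset("allenai/scico", split="validation")
topic = scico[0]

# Each mention indexes into the tokenized paragraphs in `tokens`.
for paragraph_id, start, end, cluster_id in topic["mentions"][:5]:
    span = topic["tokens"][paragraph_id][start : end + 1]  # assumes inclusive end
    print(cluster_id, " ".join(span))
```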
arjunth2001/online_privacy_qna
2021-11-10T08:53:10.000Z
[ "region:us" ]
arjunth2001
null
null
null
2
3
Online Privacy Policy QnA Dataset
lmqg/qg_jaquad
2022-12-02T18:51:27.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:SkelterLabsInc/JaQuAD", "language:ja", "license:cc-by-sa-3.0", "question-generation", "arxiv:2210.03992", "region:us" ]
lmqg
The [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset for the question generation (QG) task. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
@inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", }
null
4
3
--- license: cc-by-sa-3.0 pretty_name: JaQuAD for question generation language: ja multilinguality: monolingual size_categories: 10K<n<100K source_datasets: SkelterLabsInc/JaQuAD task_categories: - text-generation task_ids: - language-modeling tags: - question-generation --- # Dataset Card for "lmqg/qg_jaquad" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992). This is the [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset, compiled for the question generation (QG) task. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set. There is no overlap in terms of paragraphs across the train, test, and validation splits. ### Supported Tasks and Leaderboards * `question-generation`: The dataset is assumed to be used to train a model for question generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail). ### Languages Japanese (ja) ## Dataset Structure An example of 'train' looks as follows. ``` { "question": "新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", "paragraph": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "answer": "保守費用", "sentence": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。", "paragraph_sentence": "<hl>三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。<hl>新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "paragraph_answer": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、<hl>保守費用<hl>を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "sentence_answer": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、<hl>保守費用<hl>を抑えた新型車両として6000系が構想された。" } ``` The data fields are the same among all splits. - `question`: a `string` feature. - `paragraph`: a `string` feature. - `answer`: a `string` feature. - `sentence`: a `string` feature. - `paragraph_answer`: a `string` feature, which is the same as the paragraph but the answer is highlighted by a special token `<hl>`. 
- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`. - `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`. Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and the `paragraph_sentence` feature is for sentence-aware question generation. ## Data Splits |train|validation|test | |----:|---------:|----:| |27809| 3939| 3939| ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
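For reference, a minimal sketch of turning the highlighted fields into inputs for answer-aware question generation; the `generate question:` prefix is an illustrative convention, not part of the dataset.

```python
# A minimal sketch, assuming the fields documented above: the paragraph with the
# <hl>-highlighted answer is the source text and the question is the target.
from datasets import load_dataset

ds = load_dataset("lmqg/qg_jaquad")
example = ds["train"][0]

source = "generate question: " + example["paragraph_answer"]  # answer wrapped in <hl>
target = example["question"]
print(source[:100])
print("->", target)
```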
aseifert/merlin
2022-10-21T16:21:58.000Z
[ "multilinguality:translation", "size_categories:unknown", "language:cz", "language:de", "language:it", "region:us" ]
aseifert
null
null
null
1
3
--- annotations_creators: [] language_creators: [] language: - cz - de - it license: [] multilinguality: - translation pretty_name: merlin size_categories: - unknown source_datasets: [] task_categories: - conditional-text-generation task_ids: - machine-translation --- # MERLIN corpus Project URL: https://merlin-platform.eu/C_mcorpus.php Dataset URL: https://clarin.eurac.edu/repository/xmlui/handle/20.500.12124/6 The MERLIN corpus is a written learner corpus for Czech, German, and Italian that has been designed to illustrate the Common European Framework of Reference for Languages (CEFR) with authentic learner data. The corpus contains learner texts produced in standardized language certifications covering CEFR levels A1-C1. The MERLIN annotation scheme includes a wide range of language characteristics that provide researchers with concrete examples of learner performance and progress across multiple proficiency levels.
bhavnicksm/sentihood
2022-10-25T09:07:23.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1610.03771", ...
bhavnicksm
null
null
null
3
3
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-4.0 multilinguality: - monolingual pretty_name: SentiHood Dataset size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification - multi-class-classification - natural-language-inference --- # Dataset Card for SentiHood ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** https://arxiv.org/abs/1610.03771 - **Leaderboard:** https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-sentihood ### Dataset Summary Created as part of the paper "SentiHood: Targeted Aspect Based Sentiment Analysis Dataset for Urban Neighbourhoods" by Saeidi et al. #### Abstract In this paper, we introduce the task of targeted aspect-based sentiment analysis. The goal is to extract fine-grained information with respect to entities mentioned in user comments. This work extends both aspect-based sentiment analysis that assumes a single entity per document and targeted sentiment analysis that assumes a single sentiment towards a target entity. In particular, we identify the sentiment towards each aspect of one or more entities. As a testbed for this task, we introduce the SentiHood dataset, extracted from a question answering (QA) platform where urban neighborhoods are discussed by users. In this context, units of text often mention several aspects of one or more neighborhoods. This is the first time that a generic social media platform, in this case a QA platform, is used for fine-grained opinion mining. Text coming from QA platforms is far less constrained compared to text from review-specific platforms on which current datasets are based. We develop several strong baselines, relying on logistic regression and state-of-the-art recurrent neural networks. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Monolingual (only English) ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@Bhavnicksm](https://github.com/Bhavnicksm) for adding this dataset.
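For reference, a minimal loading sketch; since the data fields are not documented above, it only inspects the first training instance.

```python
# A minimal sketch, assuming the dataset loads through the datasets library
# under this repository id.
from datasets import load_dataset

sentihood = load_dataset("bhavnicksm/sentihood")
print(sentihood["train"][0])  # inspect the available fields
```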
cassandra-themis/QR-AN
2022-10-24T20:31:22.000Z
[ "task_categories:summarization", "task_categories:text-classification", "task_categories:text-generation", "task_ids:multi-class-classification", "task_ids:topic-classification", "size_categories:10K<n<100K", "language:fr", "conditional-text-generation", "region:us" ]
cassandra-themis
QR-AN Dataset: a classification dataset on French Parliament debates This is a dataset for theme/topic classification, made of questions and answers from https://www2.assemblee-nationale.fr/recherche/resultats_questions. It contains 188 unbalanced classes, 80k questions-answers divided into 3 splits: train (60k), val (10k) and test (10k).
null
null
2
3
--- language: - fr size_categories: 10K<n<100K task_categories: - summarization - text-classification - text-generation task_ids: - multi-class-classification - topic-classification tags: - conditional-text-generation --- **QR-AN Dataset: a classification and generation dataset of French Parliament questions-answers.** This is a dataset for theme/topic classification, made of questions and answers from https://www2.assemblee-nationale.fr/recherche/resultats_questions. \ It contains 188 unbalanced classes, 80k questions-answers divided into 3 splits: train (60k), val (10k) and test (10k). \ It can be used for generation with the 'qran_generation' configuration. This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable: ```python "cassandra-themis/QR-AN": ("question", "answer") ``` Compatible with the [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script: ```bash export MODEL_NAME=camembert-base export MAX_SEQ_LENGTH=512 python run_glue.py \ --model_name_or_path $MODEL_NAME \ --dataset_name cassandra-themis/QR-AN \ --do_train \ --do_eval \ --max_seq_length $MAX_SEQ_LENGTH \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 4 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --max_eval_samples 500 \ --output_dir tmp/QR-AN ```
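For reference, a minimal loading sketch; the `qran_generation` configuration name is taken from the note above, and the `question`/`answer` field names are inferred from the summarization mapping shown, so both are assumptions.

```python
# A minimal sketch, assuming the generation configuration mentioned above and
# the "question"/"answer" fields implied by the summarization mapping.
from datasets import load_dataset

qran = load_dataset("cassandra-themis/QR-AN", "qran_generation")
example = qran["train"][0]
print(example["question"][:100], "->", example["answer"][:100])
```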
chenghao/scielo_books
2022-07-01T18:34:59.000Z
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:en", "language:pt", "language:es", "license:cc-by-nc-sa-3.0", "region:us" ]
chenghao
null
null
null
0
3
--- annotations_creators: - no-annotation language_creators: - found language: - en - pt - es license: - cc-by-nc-sa-3.0 multilinguality: - multilingual paperswithcode_id: null size_categories: - n<1K source_datasets: - original task_categories: - sequence-modeling task_ids: - language-modeling --- ## Dataset Description - **Homepage:** [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f) ### Dataset Summary This dataset contains all text from open-access PDFs on [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f). As of Dec. 5, 2021, the total number of books available is 962. Note, however, that some of them are not in native PDF format (e.g. scanned images). ### Supported Tasks and Leaderboards - `sequence-modeling` or `language-modeling`: The dataset can be used to train a language model. ### Languages As of Dec. 5, 2021, there are 902 books in Portuguese, 55 in Spanish, and 5 in English. ## Dataset Structure ### Data Instances A typical instance in the dataset looks as follows. ``` { "sbid":"23pcw", "id":"23pcw", "shortname":"", "title":"Educa\u00e7\u00e3o, sa\u00fade e esporte: novos\tdesafios \u00e0 Educa\u00e7\u00e3o F\u00edsica", "eisbn":"9788574554907", "isbn":"9788574554273", "author":"Farias, Gelcemar Oliveira; Nascimento, Juarez Vieira do", "corporate_authors":"", "translators":"", "coordinators":"", "editors":"", "others":"", "organizers":"", "collaborators":"", "publisher":"Editus", "language":"pt", "year": 2016, "synopsis":"\"A colet\u00e2nea contempla cap\u00edtulos que discutem a Educa\u00e7\u00e3o F\u00edsica a partir dos pressupostos da Educa\u00e7\u00e3o, da Sa\u00fade e do Esporte, enquanto importante desafio do momento atual e diante dos avan\u00e7os e das mudan\u00e7as que se consolidaram na forma\u00e7\u00e3o inicial em Educa\u00e7\u00e3o F\u00edsica. A obra convida a todos para a realiza\u00e7\u00e3o de futuras investiga\u00e7\u00f5es, no sentido de concentrar esfor\u00e7os para o fortalecimento de n\u00facleos de estudos e a sistematiza\u00e7\u00e3o de linhas de pesquisa.\"", "format":"", "type":"book", "is_public":"true", "is_comercial":"false", "publication_date":"2018-11-07", "_version_":"1718206093473087488", "pdf_url":"http://books.scielo.org//id/23pcw/pdf/farias-9788574554907.pdf", "pdf_filename":"farias-9788574554907.pdf", "metadata_filename":"farias-9788574554907.json", "text":"..." } ``` ### Data Fields All fields are of string type except `year`. ### Data Splits All records are in the default `train` split. ## Dataset Creation ### Curation Rationale Part of the BigScience effort to create language modeling datasets. ### Source Data [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f) #### Initial Data Collection and Normalization All PDFs are directly downloaded from the website and text is extracted with the [pdftotext](https://pypi.org/project/pdftotext/) library. #### Who are the source language producers? NA ### Annotations No annotation is available. #### Annotation process NA #### Who are the annotators?
NA ### Personal and Sensitive Information NA ## Considerations for Using the Data ### Social Impact of Dataset NA ### Discussion of Biases NA ### Other Known Limitations NA ## Additional Information ### Dataset Curators [@chenghao](https://huggingface.co/chenghao) ### Licensing Information [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/) ### Contributions NA
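For reference, a minimal usage sketch based on the fields documented above.

```python
# A minimal sketch, assuming the dataset loads through the datasets library;
# "language", "title", and "text" are fields documented in the example above.
from datasets import load_dataset

books = load_dataset("chenghao/scielo_books", split="train")
pt_books = books.filter(lambda b: b["language"] == "pt")
print(len(pt_books), "Portuguese books; first title:", pt_books[0]["title"])
```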
csebuetnlp/xnli_bn
2022-08-21T13:14:56.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended", "language:bn", "license:cc-by-nc-sa-4.0", "arxiv:2101.00204", ...
csebuetnlp
This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of MNLI data used in XNLI and a state-of-the-art English-to-Bengali translation model.
@misc{bhattacharjee2021banglabert, title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding}, author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar}, year={2021}, eprint={2101.00204}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
1
3
--- annotations_creators: - machine-generated language_creators: - found multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended task_categories: - text-classification task_ids: - natural-language-inference language: - bn license: - cc-by-nc-sa-4.0 --- # Dataset Card for `xnli_bn` ## Table of Contents - [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Usage](#usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert) - **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204) - **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd) ### Dataset Summary This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of MNLI data used in XNLI and a state-of-the-art English-to-Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).** ### Supported Tasks and Leaderboards [More information needed](https://github.com/csebuetnlp/banglabert) ### Languages * `Bengali` ### Usage ```python from datasets import load_dataset dataset = load_dataset("csebuetnlp/xnli_bn") ``` ## Dataset Structure ### Data Instances One example from the dataset is given below in JSON format. ``` { "sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম", "sentence2": "আমি তার সাথে আবার কথা বলিনি।", "label": "contradiction" } ``` ### Data Fields The data fields are as follows: - `sentence1`: a `string` feature indicating the premise. - `sentence2`: a `string` feature indicating the hypothesis. - `label`: a classification label, where possible values are `contradiction` (0), `entailment` (1), `neutral` (2).
### Data Splits | split |count | |----------|--------| |`train`| 381449 | |`validation`| 2419 | |`test`| 4895 | ## Dataset Creation The dataset curation procedure was the same as for the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English-to-Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Due to the possibility of errors being introduced during automatic translation, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and original sentences to compute their similarity. All sentences below a similarity threshold of 0.70 were discarded (see the sketch at the end of this card). ### Curation Rationale [More information needed](https://github.com/csebuetnlp/banglabert) ### Source Data [XNLI](https://aclanthology.org/D18-1269/) #### Initial Data Collection and Normalization [More information needed](https://github.com/csebuetnlp/banglabert) #### Who are the source language producers? [More information needed](https://github.com/csebuetnlp/banglabert) ### Annotations [More information needed](https://github.com/csebuetnlp/banglabert) #### Annotation process [More information needed](https://github.com/csebuetnlp/banglabert) #### Who are the annotators? [More information needed](https://github.com/csebuetnlp/banglabert) ### Personal and Sensitive Information [More information needed](https://github.com/csebuetnlp/banglabert) ## Considerations for Using the Data ### Social Impact of Dataset [More information needed](https://github.com/csebuetnlp/banglabert) ### Discussion of Biases [More information needed](https://github.com/csebuetnlp/banglabert) ### Other Known Limitations [More information needed](https://github.com/csebuetnlp/banglabert) ## Additional Information ### Dataset Curators [More information needed](https://github.com/csebuetnlp/banglabert) ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use the dataset, please cite the following paper: ``` @misc{bhattacharjee2021banglabert, title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding}, author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar}, year={2021}, eprint={2101.00204}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
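As referenced in the Dataset Creation section above, a minimal sketch of the LaBSE similarity filter; the sentence-transformers checkpoint is a common way to run LaBSE and is an assumption, not necessarily the authors' exact setup.

```python
# A minimal sketch of the similarity filter described above, assuming the
# sentence-transformers LaBSE checkpoint.
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")

def keep_pair(english_sentence, bengali_translation, threshold=0.70):
    # Embed both sentences and keep the pair only if their cosine similarity
    # meets the 0.70 threshold used during curation.
    emb = labse.encode([english_sentence, bengali_translation], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold
```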
flax-sentence-embeddings/Gender_Bias_Evaluation_Set
2021-07-26T04:14:18.000Z
[ "arxiv:1906.00591", "region:us" ]
flax-sentence-embeddings
null
null
null
2
3
**This dataset has been created as part of the Flax/JAX community week for testing the [flax-sentence-embeddings](https://huggingface.co/flax-sentence-embeddings) Sentence Similarity models for Gender Bias, but it can also be used for other use-cases related to evaluating Gender Bias.** The following dataset has been created for evaluating gender bias in different models, based on various stereotypical occupations. * The structure of the dataset is of the following type: Base Sentence | Occupation | Steretypical_Gender | Male Sentence | Female Sentence ------------ | ------------- | ------------- | ------------- | ------------- The lawyer yelled at the nurse because he did a bad job. | nurse | female | The lawyer yelled at him because he did a bad job. | The lawyer yelled at her because she did a bad job. * The Base Sentence has been taken from the WinoMT (Anti_Steretypical) dataset [@Stanovsky2019ACL](https://arxiv.org/abs/1906.00591). **Dataset Fields** Fields | Description | ------------ | ------------- | Base Sentence | Sentence comprising an anti-stereotypical gendered occupation | Occupation | The occupation in the base sentence on which gender bias is being evaluated | Steretypical_Gender | Stereotypical gender of the occupation in the "Occupation" field | Male Sentence | Occupation in the base sentence replaced by male pronouns | Female Sentence | Occupation in the base sentence replaced by female pronouns | **Dataset Size** * The dataset consists of 1585 examples.
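For reference, a minimal evaluation sketch using the fields documented above; the file name and the embedding model are illustrative assumptions, not part of the dataset.

```python
# A minimal sketch of the intended bias evaluation, assuming the data has been
# exported to a local CSV with the column names documented above; the embedding
# model is an arbitrary example, not one prescribed by this dataset.
import pandas as pd
from sentence_transformers import SentenceTransformer, util

df = pd.read_csv("gender_bias_evaluation_set.csv")  # hypothetical local export
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

base = model.encode(df["Base Sentence"].tolist(), convert_to_tensor=True)
male = model.encode(df["Male Sentence"].tolist(), convert_to_tensor=True)
female = model.encode(df["Female Sentence"].tolist(), convert_to_tensor=True)

# An unbiased model should judge the base sentence roughly equally similar
# to its male-pronoun and female-pronoun rewrites.
male_sim = util.cos_sim(base, male).diagonal()
female_sim = util.cos_sim(base, female).diagonal()
print("mean male-female similarity gap:", (male_sim - female_sim).mean().item())
```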
gcaillaut/frwiki_good_pages_el
2022-07-04T12:36:42.000Z
[ "task_categories:other", "annotations_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:fr-FR", "language:fr", "license:wtfpl", "region:us" ]
gcaillaut
French Wikipedia dataset for Entity Linking
null
null
1
3
--- annotations_creators: - machine-generated language_creators: [] language: - fr-FR - fr license: - wtfpl multilinguality: - monolingual pretty_name: test size_categories: - unknown source_datasets: - original task_categories: - other task_ids: [] --- # Dataset Card for frwiki_good_pages_el ## Dataset Description - Repository: [frwiki_good_pages_el](https://github.com/GaaH/frwiki_good_pages_el) - Point of Contact: [Gaëtan Caillaut](mailto:g.caillaut@brgm.fr) ### Dataset Summary This dataset contains _featured_ and _good_ articles from the French Wikipédia. Pages are downloaded, as HTML files, from the [French Wikipedia website](https://fr.wikipedia.org). It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities. ### Languages - French ## Dataset Structure ``` { "title": "Title of the page", "qid": "QID of the corresponding Wikidata entity", "words": ["tokens"], "wikipedia": ["Wikipedia description of each entity"], "wikidata": ["Wikidata description of each entity"], "labels": ["NER labels"], "titles": ["Wikipedia title of each entity"], "qids": ["QID of each entity"], } ``` The `words` field contains the article’s text split on whitespace. The other fields are lists of the same length as `words` and contain data only when the respective token in `words` is the __start of an entity__. For instance, if the _i-th_ token in `words` is an entity, then the _i-th_ element of `wikipedia` contains a description, extracted from Wikipedia, of this entity. The same applies to the other fields. If the entity spans multiple words, then only the entry for the first word contains data. The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`.
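For reference, a minimal sketch reconstructing entity mentions from the parallel fields described above, assuming `"B"` marks the first token of every entity.

```python
# A minimal sketch, assuming an example dict with the fields described above.
# Metadata (qids, titles, ...) is aligned with the first token of each entity.
def extract_entities(example):
    entities, current = [], None
    for i, (word, label) in enumerate(zip(example["words"], example["labels"])):
        if label == "B":  # first token of an entity
            if current is not None:
                entities.append(current)
            current = {"tokens": [word],
                       "qid": example["qids"][i],
                       "title": example["titles"][i]}
        elif label == "I" and current is not None:
            current["tokens"].append(word)  # continuation of the open entity
        else:  # "O" (or a stray "I") closes any open entity
            if current is not None:
                entities.append(current)
                current = None
    if current is not None:
        entities.append(current)
    return [{"mention": " ".join(e["tokens"]), "qid": e["qid"], "title": e["title"]}
            for e in entities]
```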
gigant/romanian_speech_synthesis_0_8_1
2022-10-24T17:38:35.000Z
[ "task_categories:automatic-speech-recognition", "language:ro", "license:unknown", "region:us" ]
gigant
The Romanian speech synthesis (RSS) corpus was recorded in a hemianechoic chamber (anechoic walls and ceiling; floor partially anechoic) at the University of Edinburgh. We used three high quality studio microphones: a Neumann u89i (large diaphragm condenser), a Sennheiser MKH 800 (small diaphragm condenser with very wide bandwidth) and a DPA 4035 (headset-mounted condenser). Although the current release includes only speech data recorded via Sennheiser MKH 800, we may release speech data recorded via other microphones in the future. All recordings were made at 96 kHz sampling frequency and 24 bits per sample, then downsampled to 48 kHz sampling frequency. For recording, downsampling and bit rate conversion, we used ProTools HD hardware and software. We conducted 8 sessions over the course of a month, recording about 500 sentences in each session. At the start of each session, the speaker listened to a previously recorded sample, in order to attain a similar voice quality and intonation.
@article{Stan2011442, author = {Adriana Stan and Junichi Yamagishi and Simon King and Matthew Aylett}, title = {The {R}omanian speech synthesis ({RSS}) corpus: Building a high quality {HMM}-based speech synthesis system using a high sampling rate}, journal = {Speech Communication}, volume = {53}, number = {3}, pages = {442--450}, note = {}, abstract = {This paper first introduces a newly-recorded high quality Romanian speech corpus designed for speech synthesis, called ''RSS'', along with Romanian front-end text processing modules and HMM-based synthetic voices built from the corpus. All of these are now freely available for academic use in order to promote Romanian speech technology research. The RSS corpus comprises 3500 training sentences and 500 test sentences uttered by a female speaker and was recorded using multiple microphones at 96 kHz sampling frequency in a hemianechoic chamber. The details of the new Romanian text processor we have developed are also given. Using the database, we then revisit some basic configuration choices of speech synthesis, such as waveform sampling frequency and auditory frequency warping scale, with the aim of improving speaker similarity, which is an acknowledged weakness of current HMM-based speech synthesisers. As we demonstrate using perceptual tests, these configuration choices can make substantial differences to the quality of the synthetic speech. Contrary to common practice in automatic speech recognition, higher waveform sampling frequencies can offer enhanced feature extraction and improved speaker similarity for HMM-based speech synthesis.}, doi = {10.1016/j.specom.2010.12.002}, issn = {0167-6393}, keywords = {Speech synthesis, HTS, Romanian, HMMs, Sampling frequency, Auditory scale}, url = {http://www.sciencedirect.com/science/article/pii/S0167639310002074}, year = 2011 }
null
1
3
--- language: - ro license: - unknown size_categories: ro: - 1K<n<10K task_categories: - automatic-speech-recognition task_ids: [] pretty_name: Romanian Speech Synthesis --- ## Dataset Description - **Homepage:** https://romaniantts.com/rssdb/ - **Paper:** https://www.sciencedirect.com/science/article/abs/pii/S0167639310002074 ### Dataset Summary The Romanian speech synthesis (RSS) corpus was recorded in a hemianechoic chamber (anechoic walls and ceiling; floor partially anechoic) at the University of Edinburgh. We used three high quality studio microphones: a Neumann u89i (large diaphragm condenser), a Sennheiser MKH 800 (small diaphragm condenser with very wide bandwidth) and a DPA 4035 (headset-mounted condenser). Although the current release includes only speech data recorded via Sennheiser MKH 800, we may release speech data recorded via other microphones in the future. All recordings were made at 96 kHz sampling frequency and 24 bits per sample, then downsampled to 48 kHz sampling frequency. For recording, downsampling and bit rate conversion, we used ProTools HD hardware and software. We conducted 8 sessions over the course of a month, recording about 500 sentences in each session. At the start of each session, the speaker listened to a previously recorded sample, in order to attain a similar voice quality and intonation. ### Languages Romanian ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, called `audio`, and its transcription, called `sentence`. ### Data Fields - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - sentence: The sentence the user was prompted to speak. ### Data Splits The speech material has been subdivided into portions for train and test. The train split consists of 3180 audio clips and the related sentences. The test split consists of 536 audio clips and the related sentences. ### Citation Information ``` @article{Stan2011442, author = {Adriana Stan and Junichi Yamagishi and Simon King and Matthew Aylett}, title = {The {R}omanian speech synthesis ({RSS}) corpus: Building a high quality {HMM}-based speech synthesis system using a high sampling rate}, journal = {Speech Communication}, volume = {53}, number = {3}, pages = {442--450}, note = {}, abstract = {This paper first introduces a newly-recorded high quality Romanian speech corpus designed for speech synthesis, called ''RSS'', along with Romanian front-end text processing modules and HMM-based synthetic voices built from the corpus. All of these are now freely available for academic use in order to promote Romanian speech technology research. The RSS corpus comprises 3500 training sentences and 500 test sentences uttered by a female speaker and was recorded using multiple microphones at 96 kHz sampling frequency in a hemianechoic chamber. The details of the new Romanian text processor we have developed are also given. 
Using the database, we then revisit some basic configuration choices of speech synthesis, such as waveform sampling frequency and auditory frequency warping scale, with the aim of improving speaker similarity, which is an acknowledged weakness of current HMM-based speech synthesisers. As we demonstrate using perceptual tests, these configuration choices can make substantial differences to the quality of the synthetic speech. Contrary to common practice in automatic speech recognition, higher waveform sampling frequencies can offer enhanced feature extraction and improved speaker similarity for HMM-based speech synthesis.}, doi = {10.1016/j.specom.2010.12.002}, issn = {0167-6393}, keywords = {Speech synthesis, HTS, Romanian, HMMs, Sampling frequency, Auditory scale}, url = {http://www.sciencedirect.com/science/article/pii/S0167639310002074}, year = 2011 } ``` ### Contributions [@gigant](https://huggingface.co/gigant) added this dataset.
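For reference, a minimal usage sketch following the access pattern recommended above.

```python
# A minimal sketch, assuming the dataset loads through the datasets library
# under this repository id.
from datasets import load_dataset

rss = load_dataset("gigant/romanian_speech_synthesis_0_8_1")
sample = rss["train"][0]      # query the row first, then the audio column
audio = sample["audio"]       # decoded and resampled on access
print(sample["sentence"])
print(audio["sampling_rate"], len(audio["array"]))
```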
gmnlp/tico19
2021-10-03T19:00:13.000Z
[ "region:us" ]
gmnlp
In response to the on-going crisis, several academic (Carnegie Mellon University, George Mason University, Johns Hopkins University) and industry (Amazon, Appen, Facebook, Google, Microsoft, Translated) partners have joined forces with Translators without Borders to prepare COVID-19 materials for a variety of the world’s languages to be used by professional translators and for training state-of-the-art Machine Translation (MT) models. The focus is on making emergency and crisis-related content available in as many languages as possible. The collected, curated and translated content across nearly 90 languages will be available to the professional translation community as well as the MT research community.
@article{DBLP:journals/corr/abs-2007-01788, author = {Antonios Anastasopoulos and Alessandro Cattelan and Zi{-}Yi Dou and Marcello Federico and Christian Federmann and Dmitriy Genzel and Francisco Guzm{\'{a}}n and Junjie Hu and Macduff Hughes and Philipp Koehn and Rosie Lazar and William Lewis and Graham Neubig and Mengmeng Niu and Alp {\"{O}}ktem and Eric Paquin and Grace Tang and Sylwia Tur}, title = {{TICO-19:} the Translation Initiative for Covid-19}, journal = {CoRR}, volume = {abs/2007.01788}, year = {2020}, url = {https://arxiv.org/abs/2007.01788}, archivePrefix = {arXiv}, eprint = {2007.01788}, timestamp = {Thu, 08 Apr 2021 11:46:39 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2007-01788.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
1
3
The TICO-19 evaluation set provides: * Predefined dev and test splits. We provide English-XX translation files under both the `dev` and `test` directories. * The dev set includes 971 sentences, and the test set includes 2100 sentences. * The corresponding IDs are listed in the `dev.ids` and `test.ids` files. The format of the files is: ~~~ {sourceLang}\t{targetLang}\t{sourceString}\t{targetString}\t{stringID}\t{sourceURL}\t{license}\t{translator_ID} ~~~ Currently available languages: * Amharic (am) * Arabic (ar) * Bengali (bn) * Kurdish Sorani (ckb) * Latin American Spanish (es-LA) * Farsi (fa) * French (fr) * Nigerian Fulfulde (fuv) * Hausa (ha) * Hindi (hi) * Indonesian (id) * Kurdish Kurmanji (ku) * Lingala (ln) * Luganda (lg) * Marathi (mr) * Malay (ms) * Myanmar (my) * Nepali (ne) * Oromo (om) * Dari (prs) * Pashto (ps) * Brazilian Portuguese (pt-BR) * Russian (ru) * Kinyarwanda (rw) * Somali (so) * kiSwahili (sw) * Ethiopian Tigrinya (ti) * Tagalog (tl) * Urdu (ur) * Chinese (Simplified) (zh) * Zulu (zu) All translations are released under a CC-0 license.
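For reference, a minimal parsing sketch for the tab-separated format described above; the file path is hypothetical.

```python
# A minimal sketch parsing one dev/test file, assuming a local copy;
# the field order follows the format description above.
import csv

FIELDS = ["sourceLang", "targetLang", "sourceString", "targetString",
          "stringID", "sourceURL", "license", "translator_ID"]

with open("test/test.en-fr.tsv", encoding="utf-8", newline="") as f:  # hypothetical path
    for row in csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE):
        record = dict(zip(FIELDS, row))
        print(record["sourceString"], "->", record["targetString"])
        break
```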
huggingartists/ariya
2022-10-25T09:23:42.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/ariya" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.070471 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/975b03ba317602498bed5321f12caebe.1000x1000x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/ariya"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Ария (Ariya)</div> <a href="https://genius.com/artists/ariya"> <div style="text-align: center; font-size: 14px;">@ariya</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/ariya). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/ariya") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |22| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/ariya") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk}, year={2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/john-k-samson
2022-10-25T09:32:13.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/john-k-samson" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.128555 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/0af64278d82733c4487d404fd3703ef7.894x894x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/john-k-samson"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">John K. Samson</div> <a href="https://genius.com/artists/john-k-samson"> <div style="text-align: center; font-size: 14px;">@john-k-samson</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/john-k-samson). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/john-k-samson") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |116| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/john-k-samson") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk}, year={2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/lizer
2022-10-25T09:35:32.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/lizer" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.557761 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/70ba116490a041a960d1ca89418ce726.800x800x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/lizer"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">LIZER</div> <a href="https://genius.com/artists/lizer"> <div style="text-align: center; font-size: 14px;">@lizer</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/lizer). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/lizer") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |197| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/lizer") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk}, year={2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/nautilus-pompilius
2022-10-25T09:39:44.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/nautilus-pompilius" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.142168 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/7099ea093179fc16f7bca186affd6c0f.533x533x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/nautilus-pompilius"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Nautilus Pompilius (Наутилус Помпилиус)</div> <a href="https://genius.com/artists/nautilus-pompilius"> <div style="text-align: center; font-size: 14px;">@nautilus-pompilius</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/nautilus-pompilius). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/nautilus-pompilius") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |67| -| -| The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/nautilus-pompilius") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author = {Aleksey Korshuk}, year = {2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
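Beyond splitting, the card's stated purpose is lyrics generation; here is a minimal sketch of preparing this corpus for causal-LM fine-tuning (the GPT-2 tokenizer and the 512-token cutoff are assumptions for illustration, not the HuggingArtists training setup):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("huggingartists/nautilus-pompilius")
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed base model

# Tokenize each lyric; the resulting columns can be fed to a causal-LM Trainer.
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)
print(tokenized)
```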
huggingartists/noize-mc
2022-10-25T09:40:13.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/noize-mc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 1.387658 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/66f2036986237d3142c5fc9299615d37.1000x1000x1.png&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/noize-mc"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Noize MC</div> <a href="https://genius.com/artists/noize-mc"> <div style="text-align: center; font-size: 14px;">@noize-mc</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/noize-mc). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/noize-mc") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |349| -| -| The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/noize-mc") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author = {Aleksey Korshuk}, year = {2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/ot-rus
2022-10-25T09:40:40.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/ot-rus" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.419574 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/5b2286f88533601eda462ce44dd2ee56.776x776x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/ot-rus"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">O.T (RUS)</div> <a href="https://genius.com/artists/ot-rus"> <div style="text-align: center; font-size: 14px;">@ot-rus</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/ot-rus). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/ot-rus") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |117| -| -| The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/ot-rus") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author = {Aleksey Korshuk}, year = {2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/sugar-ray
2022-10-25T09:45:22.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/sugar-ray" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.164888 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/8b5c8fe74f6176047b2b5681e0e0e2d4.273x273x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/sugar-ray"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Sugar Ray</div> <a href="https://genius.com/artists/sugar-ray"> <div style="text-align: center; font-size: 14px;">@sugar-ray</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/sugar-ray). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/sugar-ray") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |117| -| -| The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/sugar-ray") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author = {Aleksey Korshuk}, year = {2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/the-69-eyes
2022-10-25T09:46:18.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
null
0
3
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/the-69-eyes" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.162381 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/9e0451fa9d3f8cf38aa11994dbd934a8.600x600x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/the-69-eyes"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">The 69 Eyes</div> <a href="https://genius.com/artists/the-69-eyes"> <div style="text-align: center; font-size: 14px;">@the-69-eyes</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/the-69-eyes). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/the-69-eyes") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |168| -| -| The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/the-69-eyes") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author = {Aleksey Korshuk}, year = {2021} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
jdepoix/junit_test_completion
2021-03-28T10:58:39.000Z
[ "region:us" ]
jdepoix
null
null
null
1
3
Entry not found
julien-c/reactiongif
2022-09-20T12:10:26.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:2105.09967", "regio...
julien-c
null
null
null
1
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: reactiongif --- ## ReactionGIF > From https://github.com/bshmueli/ReactionGIF ![gif](https://huggingface.co/datasets/julien-c/reactiongif/resolve/main/hug.gif) ___ ## Excerpt from original repo readme ReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions. To find out more about ReactionGIF, check out our ACL 2021 paper: * Shmueli, Ray and Ku, [Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter](https://arxiv.org/abs/2105.09967) ## Citation If you use our dataset, kindly cite the paper using the following BibTex entry: ```bibtex @misc{shmueli2021happy, title={Happy Dance, Slow Clap: Using Reaction {GIFs} to Predict Induced Affect on {Twitter}}, author={Boaz Shmueli and Soumya Ray and Lun-Wei Ku}, year={2021}, eprint={2105.09967}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
laion/laion_100m_vqgan_f8
2021-12-25T05:27:42.000Z
[ "region:us" ]
laion
null
null
null
2
3
# VQGAN (f8, 8192) embeddings for LAION-100M This dataset contains __VQGAN (f8, 8192)__ embeddings for the images from the first ~100 million image-text pairs of the [LAION-400M dataset](https://laion.ai/laion-400-open-dataset/). VQGAN was introduced in the paper ["Taming Transformers for High-Resolution Image Synthesis"](https://github.com/CompVis/taming-transformers) and adopted for training [DALLE-mini](https://github.com/borisdayma/dalle-mini). **Warning**: This large-scale dataset is non-curated. It was built for research purposes to enable testing model training on larger scale for broad researcher and other interested communities, and **is not meant for any real-world production or application.** [VQGAN (f8, 8192)](https://github.com/CompVis/taming-transformers#overview-of-pretrained-models) is a pretrained model with downsampling factor `f=8`, 8192 codebook entries, and Gumbel quantization. We did not perform any fine-tuning and used the VQGAN wrapper from the [DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch) repository for inference. Since LAION-400M contains 256x256 images, the model produces 1024 codes for each image. The data is provided as `*.parquet` files with the embeddings and meta information: - The embeddings (`code` column) are represented as binary data that can be decoded using `np.frombuffer(data, np.int16).reshape(32, 32)`. - The meta information (`caption`, `url`, and other columns) is the same as in the `*.parquet` files from LAION-400M (see description [here](https://laion.ai/laion-400-open-dataset/)). - This dataset does not contain the original images. The data corresponds to the shards `00000`, `00001`, ..., `09999` of LAION-400M. 0.07% of the shards were excluded since they were corrupted in the original dataset. The LAION-400M dataset is distributed under the [CC-BY 4.0 license](https://creativecommons.org/licenses/by/4.0/). The VQGAN models are distributed under the [MIT license](https://github.com/CompVis/taming-transformers/blob/master/License.txt).
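For illustration, here is a minimal sketch of reading one shard and decoding the `code` column with the `np.frombuffer` call described above (the shard filename is a placeholder, and pandas with a parquet engine such as pyarrow is assumed):

```python
import numpy as np
import pandas as pd

# Placeholder filename: use any of the dataset's *.parquet shards.
df = pd.read_parquet("shard_00000.parquet")

row = df.iloc[0]
print(row["caption"], row["url"])

# Each 256x256 image yields 1024 VQGAN codebook indices, stored as int16
# bytes and reshaped to the 32x32 latent grid (256 / 8 = 32 per side at f=8).
codes = np.frombuffer(row["code"], np.int16).reshape(32, 32)
print(codes.shape, codes.dtype)
```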
lincoln/newsquadfr
2022-08-05T12:05:24.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:private", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "source_datasets:newspaper", "source_datasets:online", "language:fr-FR", "license:cc-b...
lincoln
null
null
null
2
3
--- annotations_creators: - private language_creators: null language: - fr-FR license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original - newspaper - online task_categories: - question-answering task_ids: - extractive-qa - open-domain-qa paperswithcode_id: null --- # Dataset Card for newsquadfr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [lincoln.fr](https://www.lincoln.fr/) - **Repository:** [github/Lincoln-France](https://github.com/Lincoln-France) - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [email](labinnovation@mel.lincoln.fr) ### Dataset Summary newsquadfr is a small dataset created for the question answering task. Contexts are paragraphs of articles extracted from nine online French newspapers during 2020/2021. newsquadfr stands for "newspaper question answering dataset in French". It is inspired by the PIAF and SQuAD datasets and contains 2,520 context-question-answer triplets. ```py from datasets import load_dataset ds_name = 'lincoln/newsquadfr' # example 1 ds_newsquad = load_dataset(ds_name) # example 2 data_files = {'train': 'train.json', 'test': 'test.json', 'valid': 'valid.json'} ds_newsquad = load_dataset(ds_name, data_files=data_files) # example 3 ds_newsquad = load_dataset(ds_name, data_files=data_files, split="valid+test") ``` Number of paragraphs per website (train set): | website | Nb | |---------------|-----| | cnews | 20 | | francetvinfo | 40 | | la-croix | 375 | | lefigaro | 160 | | lemonde | 325 | | lesnumeriques | 70 | | numerama | 140 | | sudouest | 475 | | usinenouvelle | 45 | ### Supported Tasks and Leaderboards - extractive-qa - open-domain-qa ### Languages French (fr-FR) ## Dataset Structure ### Data Instances ```json {'answers': {'answer_start': [53], 'text': ['manœuvre "agressive']}, 'article_id': 34138, 'article_title': 'Caricatures, Libye, Haut-Karabakh... Les six dossiers qui ' 'opposent Emmanuel Macron et Recep Tayyip Erdogan.', 'article_url': 'https://www.francetvinfo.fr/monde/turquie/caricatures-libye-haut-karabakh-les-six-dossiers-qui-opposent-emmanuel-macron-et-recep-tayyip-erdogan_4155611.html#xtor=RSS-3-[france]', 'context': 'Dans ce contexte déjà tendu, la France a dénoncé une manœuvre ' '"agressive" de la part de frégates turques à l\'encontre de l\'un ' "de ses navires engagés dans une mission de l'Otan, le 10 juin. 
" 'Selon Paris, la frégate Le Courbet cherchait à identifier un ' 'cargo suspecté de transporter des armes vers la Libye quand elle ' 'a été illuminée à trois reprises par le radar de conduite de tir ' "de l'escorte turque.", 'id': '2261', 'paragraph_id': 201225, 'question': "Qu'est ce que la France reproche à la Turquie?", 'website': 'francetvinfo'} ``` ### Data Fields - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int64` feature. - `article_id`: a `int64` feature. - `article_title`: a string feature. - `article_url`: a string feature. - `context`: a `string` feature. - `id`: a `string` feature. - `paragraph_id`: a `int64` feature. - `question`: a `string` feature. - `website`: a `string` feature. ### Data Splits | Split | Nb | |-------|----| | train |1650| | test |415 | | valid |455 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization Paragraphs were chosen according to theses rules: - parent article must have more than 71% ASCII characters - paragraphs size must be between 170 and 670 characters - paragraphs shouldn't contain "A LIRE" or "A VOIR AUSSI" Then, we stratified our original dataset to create this dataset according to : - website - number of named entities - paragraph size #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process Using Piaf annotation tools. Three different persons mostly. #### Who are the annotators? Lincoln ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases - Annotation is not well controlled - asking question on news is biaised ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information https://creativecommons.org/licenses/by-nc-sa/4.0/deed.fr ### Citation Information [Needs More Information]
mammut/mammut-corpus-venezuela-test-set
2022-10-22T08:58:48.000Z
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:es", "license:cc-by-nc-nd-4.0", "region:us" ]
mammut
null
null
null
0
3
--- annotations_creators: - no-annotation language_creators: - expert-generated language: - es language_bcp47: - es-VE license: - cc-by-nc-nd-4.0 multilinguality: - monolingual pretty_name: mammut-corpus-venezuela size_categories: - unknown source_datasets: - original task_categories: - sequence-modeling task_ids: - language-modeling --- # mammut-corpus-venezuela A HuggingFace dataset for testing purposes. The train dataset is `mammut/mammut-corpus-venezuela`. ## 1. How to use How to load this dataset directly with the datasets library: `>>> from datasets import load_dataset` `>>> dataset = load_dataset("mammut/mammut-corpus-venezuela-test-set")` ## 2. Dataset Summary **mammut-corpus-venezuela** is a dataset for Spanish language modeling. This dataset comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was collected by a process of web scraping from different portals, downloading of Telegram group chats' history, and selection of Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language. Each record in the dataset contains the author of the text (anonymized for conversation authors), the date on which the text was entered into the corpus, the text (automatically tokenized at sentence level for sources other than conversations), the source of the text, the title of the text, the number of tokens (excluding punctuation marks) of the text, and the linguistic register of the text. This is the test set for the `mammut/mammut-corpus-venezuela` dataset. ## 3. Supported Tasks and Leaderboards This dataset can be used for testing language models. ## 4. Languages The dataset contains Venezuelan and Latin-American Spanish. ## 5. Dataset Structure Dataset structure features. ### 5.1 Data Instances An example from the dataset: "AUTHOR":"author in title", "TITLE":"Luis Alberto Buttó: Hecho en socialismo", "SENTENCE":"Históricamente, siempre fue así.", "DATE":"2021-07-04 07:18:46.918253", "SOURCE":"la patilla", "TOKENS":"4", "TYPE":"opinion/news", The token counts are provided below: ### 5.2 Total tokens (excluding punctuation marks) Test: 4,876,739. ### 5.3 Data Fields The data has several fields: AUTHOR: author of the text. It is anonymized for conversation authors. DATE: date on which the text was entered in the corpus. SENTENCE: text. It was automatically tokenized for sources other than conversations. SOURCE: source of the texts. TITLE: title of the text from which SENTENCE originates. TOKENS: number of tokens (excluding punctuation marks) of SENTENCE. TYPE: linguistic register of the text. ### 5.4 Data Splits The mammut-corpus-venezuela dataset has 2 splits: train and test. Below are the statistics. Number of instances per split: Test: 157,011. ## 6. Dataset Creation ### 6.1 Curation Rationale The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning on another pre-trained model. ### 6.2 Source Data **6.2.1 Initial Data Collection and Normalization** The data consists of opinion articles and text messages. It was collected by a process of web scraping from different portals, downloading of Telegram group chats’ history and selection of Venezuelan and Latin-American Spanish corpora available online. 
The text from the web scraping process was separated into sentences and automatically tokenized for sources other than conversations. An Arrow parquet file was created. Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movies subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats. **6.2.2 Who are the source language producers?** The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. ### 6.3 Annotations **6.3.1 Annotation process** At the moment the dataset does not contain any additional annotations. **6.3.2 Who are the annotators?** Not applicable. ### 6.4 Personal and Sensitive Information The data is partially anonymized. Also, there are messages from Telegram selling chats; some percentage of these messages may be fake or contain misleading or offensive language. ## 7. Considerations for Using the Data ### 7.1 Social Impact of Dataset The purpose of this dataset is to help the development of language modeling models (pre-training or fine-tuning) in Venezuelan Spanish. ### 7.2 Discussion of Biases Most of the content comes from political, economic and sociological opinion articles. Social biases may be present. ### 7.3 Other Known Limitations Not applicable. ## 8. Additional Information ### 8.1 Dataset Curators The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io. ### 8.2 Licensing Information Not applicable. ### 8.3 Citation Information Not applicable. ### 8.4 Contributions Not applicable.
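To illustrate the fields above, here is a minimal sketch that recomputes the total token count of section 5.2 from the `TOKENS` column (the cast to `int` assumes `TOKENS` is stored as a string, as in the example instance; a default `train` split name is also assumed):

```python
from datasets import load_dataset

ds = load_dataset("mammut/mammut-corpus-venezuela-test-set", split="train")

# Sum the per-sentence token counts (punctuation excluded, per the card);
# this should match the 4,876,739 total reported in section 5.2.
total_tokens = sum(int(row["TOKENS"]) for row in ds)
print(f"{len(ds):,} sentences, {total_tokens:,} tokens")
```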
mammut/mammut-corpus-venezuela
2022-10-22T09:00:04.000Z
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:es", "license:cc-by-nc-nd-4.0", "region:us" ]
mammut
null
null
null
0
3
--- annotations_creators: - no-annotation language_creators: - expert-generated language: - es language_bcp47: - es-VE license: - cc-by-nc-nd-4.0 multilinguality: - monolingual pretty_name: mammut-corpus-venezuela size_categories: - unknown source_datasets: - original task_categories: - sequence-modeling task_ids: - language-modeling --- # mammut-corpus-venezuela HuggingFace Dataset ## 1. How to use How to load this dataset directly with the datasets library: `>>> from datasets import load_dataset` `>>> dataset = load_dataset("mammut/mammut-corpus-venezuela")` ## 2. Dataset Summary **mammut-corpus-venezuela** is a dataset for Spanish language modeling. This dataset comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was collected by a process of web scraping from different portals, downloading of Telegram group chats' history, and selection of Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language. Each record in the dataset contains the author of the text (anonymized for conversation authors), the date on which the text was entered into the corpus, the text (automatically tokenized at sentence level for sources other than conversations), the source of the text, the title of the text, the number of tokens (excluding punctuation marks) of the text, and the linguistic register of the text. The dataset has a train split and a test split. ## 3. Supported Tasks and Leaderboards This dataset can be used for language modeling. ## 4. Languages The dataset contains Venezuelan and Latin-American Spanish. ## 5. Dataset Structure Dataset structure features. ### 5.1 Data Instances An example from the dataset: "AUTHOR":"author in title", "TITLE":"Luis Alberto Buttó: Hecho en socialismo", "SENTENCE":"Históricamente, siempre fue así.", "DATE":"2021-07-04 07:18:46.918253", "SOURCE":"la patilla", "TOKENS":"4", "TYPE":"opinion/news", The token counts are provided below: ### 5.2 Total tokens (excluding punctuation marks) Train: 92,431,194. Test: 4,876,739 (hosted separately as `mammut/mammut-corpus-venezuela-test-set`). ### 5.3 Data Fields The data has several fields: AUTHOR: author of the text. It is anonymized for conversation authors. DATE: date on which the text was entered in the corpus. SENTENCE: text. It was automatically tokenized for sources other than conversations. SOURCE: source of the texts. TITLE: title of the text from which SENTENCE originates. TOKENS: number of tokens (excluding punctuation marks) of SENTENCE. TYPE: linguistic register of the text. ### 5.4 Data Splits The mammut-corpus-venezuela dataset has 2 splits: train and test. Below are the statistics. Number of instances per split: Train: 2,983,302. Test: 157,011. ## 6. Dataset Creation ### 6.1 Curation Rationale The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning on another pre-trained model. ### 6.2 Source Data **6.2.1 Initial Data Collection and Normalization** The data consists of opinion articles and text messages. It was collected by a process of web scraping from different portals, downloading of Telegram group chats’ history and selection of Venezuelan and Latin-American Spanish corpora available online. 
The text from the web scraping process was separated into sentences and automatically tokenized for sources other than conversations. An Arrow parquet file was created. Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movies subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats. **6.2.2 Who are the source language producers?** The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. ### 6.3 Annotations **6.3.1 Annotation process** At the moment the dataset does not contain any additional annotations. **6.3.2 Who are the annotators?** Not applicable. ### 6.4 Personal and Sensitive Information The data is partially anonymized. Also, there are messages from Telegram selling chats; some percentage of these messages may be fake or contain misleading or offensive language. ## 7. Considerations for Using the Data ### 7.1 Social Impact of Dataset The purpose of this dataset is to help the development of language modeling models (pre-training or fine-tuning) in Venezuelan Spanish. ### 7.2 Discussion of Biases Most of the content comes from political, economic and sociological opinion articles. Social biases may be present. ### 7.3 Other Known Limitations Not applicable. ## 8. Additional Information ### 8.1 Dataset Curators The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io. ### 8.2 Licensing Information Not applicable. ### 8.3 Citation Information Not applicable. ### 8.4 Contributions Not applicable.
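As an illustration of the fields above, here is a minimal sketch that restricts the train split to a single linguistic register (the field name `TYPE` and the value "opinion/news" are taken from the example instance in section 5.1):

```python
from datasets import load_dataset

ds = load_dataset("mammut/mammut-corpus-venezuela", split="train")

# Keep only sentences tagged with the "opinion/news" register.
opinion_news = ds.filter(lambda row: row["TYPE"] == "opinion/news")
print(f"{len(opinion_news):,} of {len(ds):,} sentences are opinion/news")
print(opinion_news[0]["SENTENCE"])
```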
midas/kp20k
2023-09-25T05:14:59.000Z
[ "region:us" ]
midas
\
@InProceedings{meng-EtAl:2017:Long, author = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu}, title = {Deep Keyphrase Generation}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, month = {July}, year = {2017}, address = {Vancouver, Canada}, publisher = {Association for Computational Linguistics}, pages = {582--592}, url = {http://aclweb.org/anthology/P17-1054} }
null
2
3
A dataset for benchmarking keyphrase extraction and generation techniques from abstracts of English scientific papers. For more details about the dataset please refer to the original paper - [http://memray.me/uploads/acl17-keyphrase-generation.pdf](http://memray.me/uploads/acl17-keyphrase-generation.pdf). Data source - [https://github.com/memray/seq2seq-keyphrase](https://github.com/memray/seq2seq-keyphrase) ## Dataset Summary ## Dataset Structure ## Dataset Statistics ### Data Fields - **id**: unique identifier of the document. - **document**: Whitespace separated list of words in the document. - **doc_bio_tags**: BIO tags for each word in the document. B stands for the beginning of a keyphrase and I stands for inside the keyphrase. O stands for outside the keyphrase and represents the word that isn't a part of the keyphrase at all. - **extractive_keyphrases**: List of all the present keyphrases. - **abstractive_keyphrases**: List of all the absent keyphrases. ### Data Splits |Split| No. of datapoints | |--|--| | Train | 530,809 | | Test | 20,000| | Validation | 20,000| ## Usage ### Full Dataset ```python from datasets import load_dataset # get entire dataset dataset = load_dataset("midas/kp20k", "raw") # sample from the train split print("Sample from training dataset split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Document BIO Tags: ", train_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the validation split print("Sample from validation dataset split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Document BIO Tags: ", validation_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the test split print("Sample from test dataset split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Document BIO Tags: ", test_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"]) print("\n-----------\n") ``` ### Keyphrase Extraction ```python from datasets import load_dataset # get the dataset only for keyphrase extraction dataset = load_dataset("midas/kp20k", "extraction") print("Samples for Keyphrase Extraction") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Document BIO Tags: ", train_sample["doc_bio_tags"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Document BIO Tags: ", 
validation_sample["doc_bio_tags"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Document BIO Tags: ", test_sample["doc_bio_tags"]) print("\n-----------\n") ``` ### Keyphrase Generation ```python from datasets import load_dataset # get the dataset only for keyphrase generation dataset = load_dataset("midas/kp20k", "generation") print("Samples for Keyphrase Generation") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"]) print("\n-----------\n") ``` ## Citation Information Please cite the works below if you use this dataset in your work. ``` @InProceedings{meng-EtAl:2017:Long, author = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu}, title = {Deep Keyphrase Generation}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, month = {July}, year = {2017}, address = {Vancouver, Canada}, publisher = {Association for Computational Linguistics}, pages = {582--592}, url = {http://aclweb.org/anthology/P17-1054} } @article{mahata2022ldkp, title={LDKP: A Dataset for Identifying Keyphrases from Long Scientific Documents}, author={Mahata, Debanjan and Agarwal, Navneet and Gautam, Dibya and Kumar, Amardeep and Parekh, Swapnil and Singla, Yaman Kumar and Acharya, Anish and Shah, Rajiv Ratn}, journal={arXiv preprint arXiv:2203.15349}, year={2022} } ``` ## Contributions Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset.
projecte-aina/casum
2023-09-13T12:49:03.000Z
[ "task_categories:summarization", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-nc-4.0", "arxiv:2202.06871", "region:us" ]
projecte-aina
CaSum is a summarization dataset extracted from a newswire corpus crawled from the Catalan News Agency. The corpus consists of 217,735 instances, each composed of a headline and a body.
@misc{degibert2022sequencetosequence, title={Sequence-to-Sequence Resources for Catalan}, author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero}, year={2022}, eprint={2202.06871}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
0
3
--- annotations_creators: - machine-generated language_creators: - expert-generated language: - ca license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - unknown source_datasets: [] task_categories: - summarization task_ids: [] pretty_name: casum --- # Dataset Card for CaSum ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** [Sequence to Sequence Resources for Catalan](https://arxiv.org/pdf/2202.06871.pdf) - **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es) ### Dataset Summary CaSum is a summarization dataset. It is extracted from a newswire corpus crawled from the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)). The corpus consists of 217,735 instances, each composed of a headline and a body. ### Supported Tasks and Leaderboards The dataset can be used to train a model for abstractive summarization. Success on this task is typically measured by achieving a high ROUGE score. The [mbart-base-ca-casum](https://huggingface.co/projecte-aina/bart-base-ca-casum) model currently achieves a ROUGE score of 41.39. ### Languages The dataset is in Catalan (`ca-ES`). ## Dataset Structure ### Data Instances ``` { 'summary': 'Mapfre preveu ingressar 31.000 milions d’euros al tancament de 2018', 'text': 'L’asseguradora llançarà la seva filial Verti al mercat dels EUA a partir de 2017 ACN Madrid.-Mapfre preveu assolir uns ingressos de 31.000 milions d'euros al tancament de 2018 i destinarà a retribuir els seus accionistes com a mínim el 50% dels beneficis del grup durant el període 2016-2018, amb una rendibilitat mitjana a l’entorn del 5%, segons ha anunciat la companyia asseguradora durant la celebració aquest divendres de la seva junta general d’accionistes. La firma asseguradora també ha avançat que llançarà la seva filial d’automoció i llar al mercat dels EUA a partir de 2017. Mapfre ha recordat durant la junta que va pagar més de 540 milions d'euros en impostos el 2015, amb una taxa impositiva efectiva del 30,4 per cent. La companyia també ha posat en marxa el Pla de Sostenibilitat 2016-2018 i el Pla de Transparència Activa, “que han de contribuir a afermar la visió de Mapfre com a asseguradora global de confiança”, segons ha informat en un comunicat.'
} ``` ### Data Fields - `summary` (str): Summary of the piece of news - `text` (str): The text of the piece of news ### Data Splits We split our dataset into train, dev and test splits - train: 197,735 examples - validation: 10,000 examples - test: 10,000 examples ## Dataset Creation ### Curation Rationale We created this corpus to contribute to the development of language models in Catalan, a low-resource language. There exist few resources for summarization in Catalan. ### Source Data #### Initial Data Collection and Normalization We obtained each headline and its corresponding body of each news piece on the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) website and applied the following cleaning pipeline: deduplicating the documents, removing the documents with empty attributes, and deleting some boilerplate sentences. #### Who are the source language producers? The news portal Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)). ### Annotations The dataset is unannotated. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Since all data comes from public websites, no anonymization process was performed. ## Considerations for Using the Data ### Social Impact of Dataset We hope this corpus contributes to the development of summarization models in Catalan, a low-resource language. ### Discussion of Biases We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) This work was funded by MT4All CEF project and [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/). ### BibTeX citation If you use any of these resources (datasets or models) in your work, please cite our latest preprint: ```bibtex @misc{degibert2022sequencetosequence, title={Sequence-to-Sequence Resources for Catalan}, author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero}, year={2022}, eprint={2202.06871}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions [N/A]
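As a usage illustration for the CaSum card above, the following is a minimal sketch that loads the corpus and scores a naive lead-sentence baseline with ROUGE. It assumes the dataset loads with its default configuration and that the `evaluate` library is installed; the baseline itself is purely illustrative.

```python
from datasets import load_dataset
import evaluate

# Load CaSum (assuming the default configuration exposes the
# train/validation/test splits described above).
casum = load_dataset("projecte-aina/casum")

# Naive baseline: use the first sentence of the article as its summary.
sample = casum["test"].select(range(100))
predictions = [text.split(". ")[0] for text in sample["text"]]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=sample["summary"]))
```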
projecte-aina/teca
2023-09-13T12:48:36.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-nc-nd-4.0", "arxiv:2107.07903", "region:us" ]
projecte-aina
TECA consists of two subsets of textual entailment in Catalan, *catalan_TE1* and *vilaweb_TE*, which contain 14997 and 6166 pairs of premises and hypotheses, annotated according to the inference relation they have (entailment, contradiction or neutral). This dataset was developed by BSC TeMU as part of the AINA project and is intended as part of the Catalan Language Understanding Benchmark (CLUB).
@inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - ca license: - cc-by-nc-nd-4.0 multilinguality: - monolingual pretty_name: teca size_categories: - unknown source_datasets: [] task_categories: - text-classification task_ids: - natural-language-inference --- # Dataset Card for TE-ca ## Dataset Description - **Website:** https://zenodo.org/record/4761458 - **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903) - **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es) ### Dataset Summary TE-ca is a dataset of textual entailment in Catalan, which contains 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (entailment, contradiction or neutral). This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/). ### Supported Tasks and Leaderboards Textual entailment, Text classification, Language Model ### Languages The dataset is in Catalan (`ca-ES`). ## Dataset Structure ### Data Instances Three JSON files, one for each split. ### Example: <pre> { "id": 3247, "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions", "hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix", "label": "0" }, { "id": 2825, "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions", "hypothesis": "Les persones migrades seran acollides a Marràqueix", "label": "1" }, { "id": 2431, "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions", "hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se", "label": "2" }, </pre> ### Data Fields - premise: text - hypothesis: text related to the premise - label: relation between premise and hypothesis: * 0: entailment * 1: neutral * 2: contradiction ### Data Splits * dev.json: 2116 examples * test.json: 2117 examples * train.json: 16930 examples ## Dataset Creation ### Curation Rationale We created this dataset to contribute to the development of language models in Catalan, a low-resource language. ### Source Data Source sentences are extracted from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) and from [VilaWeb](https://www.vilaweb.cat) newswire. #### Initial Data Collection and Normalization 12000 sentences from the BSC [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349), together with 6200 headlines from the Catalan news site [VilaWeb](https://www.vilaweb.cat), were chosen randomly. We filtered them by different criteria, such as length and stand-alone intelligibility. For each selected text, we commissioned 3 hypotheses (one for each entailment category) to be written by a team of native annotators. Some sentence pairs were excluded because of inconsistencies. #### Who are the source language producers? The Catalan Textual Corpus consists of several corpora gathered from web crawling and public corpora. More information can be found [here](https://doi.org/10.5281/zenodo.4519349). [VilaWeb](https://www.vilaweb.cat) is a Catalan newswire. ### Annotations #### Annotation process We commissioned 3 hypotheses (one for each entailment category) to be written by a team of annotators.
#### Who are the annotators? Annotators are a team of native language collaborators from two independent companies. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset We hope this dataset contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>. ### Citation Information ``` @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` [DOI](https://doi.org/10.5281/zenodo.4529183)
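A minimal loading sketch for the TE-ca card above, assuming the dataset loads with its default configuration; the label mapping follows the Data Fields section.

```python
from datasets import load_dataset

teca = load_dataset("projecte-aina/teca")

# Label ids as documented above: 0 = entailment, 1 = neutral, 2 = contradiction.
label_names = {0: "entailment", 1: "neutral", 2: "contradiction"}

example = teca["train"][0]
print(example["premise"])
print(example["hypothesis"])
print(label_names[int(example["label"])])  # labels are stored as strings like "0"
```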
projecte-aina/wnli-ca
2023-09-13T12:42:10.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|glue", "language:ca", "license:cc-by-4.0", "region:us" ]
projecte-aina
A professional translation into Catalan of the Winograd NLI dataset as published in the GLUE Benchmark. The Winograd NLI dataset presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates whether the interpretation is correct (1) or not (0).
ADD CITATION
null
1
3
--- annotations_creators: - expert-generated language_creators: - found language: - ca license: - cc-by-4.0 multilinguality: - monolingual pretty_name: wnli-ca size_categories: - unknown source_datasets: - extended|glue task_categories: - text-classification task_ids: - natural-language-inference --- # WNLI-ca ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html - **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es) ### Dataset Summary "A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html). The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates whether the interpretation is correct (1) or not (0). This dataset is a professional translation into Catalan of the [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in the [GLUE Benchmark](https://gluebenchmark.com/tasks). Both the original dataset and this translation are licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ### Supported Tasks and Leaderboards Textual entailment, Text classification, Language Model. ### Languages The dataset is in Catalan (`ca-ES`) ## Dataset Structure ### Data Instances Three tsv files. ### Data Fields - index - sentence 1: first sentence of the pair - sentence 2: second sentence of the pair - label: relation between the two sentences: * 0: the second sentence does not entail a correct interpretation of the first one (neutral) * 1: the second sentence entails a correct interpretation of the first one (entailment) ### Example | index | sentence 1 | sentence 2 | label | | ------- |----------- | --------- | ----- | | 0 | Vaig clavar una agulla en una pastanaga. Quan la vaig treure, tenia un forat. | La pastanaga tenia un forat. | 1 | | 1 | En Joan no podia veure l’escenari amb en Guillem davant seu perquè és molt baix. | En Joan és molt baix.
| 1 | | 2 | Els policies van arrestar tots els membres de la banda. Volien aturar el tràfic de drogues del barri. | Els policies volien aturar el tràfic de drogues del barri. | 1 | | 3 | L’Esteve segueix els passos d’en Frederic en tot. L’influencia moltíssim. | L’Esteve l’influencia moltíssim. | 0 | ### Data Splits - wnli-train-ca.csv: 636 - wnli-dev-ca.csv: 72 - wnli-test-shuffled-ca.csv: 147 ## Dataset Creation ### Curation Rationale We translated this dataset to contribute to the development of language models in Catalan, a low-resource language, and to allow inter-lingual comparisons. ### Source Data - [GLUE Benchmark site](https://gluebenchmark.com) #### Initial Data Collection and Normalization This is a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan, commissioned by BSC TeMU within the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/). For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html). #### Who are the source language producers? For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html). ### Annotations #### Annotation process We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan. #### Who are the annotators? The translation was commissioned to a professional translator. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es). This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>. ### Contributions [N/A]
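A minimal reading sketch for the WNLI-ca card above. The card describes the files as TSV despite the `.csv` extension, so the tab separator below is an assumption, as are the exact column names; inspect the header row of the downloaded files to confirm.

```python
import pandas as pd

# Path is a placeholder for wherever the files were downloaded.
train = pd.read_csv("wnli-train-ca.csv", sep="\t")

print(train.shape)   # expected: 636 rows
print(train.head())  # index, sentence 1, sentence 2, label
```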
sc2qa/sc2qa_commoncrawl
2022-03-30T18:34:27.000Z
[ "arxiv:2109.04689", "region:us" ]
sc2qa
\
@inproceedings{zhou2021generating, author = {Li Zhou and Kevin Small and Yong Zhang and Sandeep Atluri}, title = "{Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning}", booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)}, year = {2021}, }
null
0
3
For details, please refer to the following links. Github repo: https://github.com/amazon-research/SC2QA-DRIL Paper: [Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning](https://arxiv.org/pdf/2109.04689.pdf)
seamew/Weibo
2021-10-09T13:58:21.000Z
[ "region:us" ]
seamew
null
null
null
1
3
Entry not found
valurank/news-12factor
2022-10-21T13:35:36.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
valurank
null
null
null
0
3
--- license: - other language: - en multilinguality: - monolingual task_categories: - text-classification task_ids: - multi-class-classification --- # Dataset Card for news-12factor ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) - [Annotations](#annotations) ## Dataset Description 80+ news articles with URL, title and body text, scored on 12 quality factors and assigned a single rank. ## Languages The text in the dataset is in English. ## Dataset Structure [Needs More Information] ## Source Data URL data was scraped using [news-please](https://github.com/fhamborg/news-please). ## Annotations Articles were manually annotated by Alex on a 12-factor score card.
webimmunization/COVID-19-vaccine-attitude-tweets
2022-10-25T10:01:50.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:intent-classification", "annotations_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:54KB", "source_datasets:original", "language:en", "license:cc-by-4.0", "re...
webimmunization
null
null
null
1
3
--- annotations_creators: - crowdsourced language_creators: - other language: - en license: [cc-by-4.0] multilinguality: - monolingual pretty_name: twitter covid19 tweets size_categories: - 54KB source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification - intent-classification --- # Dataset Card for COVID-19-vaccine-attitude-tweets ## Dataset Description - **Paper:** [Be Careful Who You Follow. The Impact of the Initial Set of Friends on COVID-19 Vaccine Tweets](https://www.researchgate.net/publication/355726080_Be_Careful_Who_You_Follow_The_Impact_of_the_Initial_Set_of_Friends_on_COVID-19_Vaccine_Tweets) - **Point of Contact:** [Izabela Krysinska](izabela.krysinska@doctorate.put.poznan.pl) ### Dataset Summary The dataset consists of 2564 manually annotated tweets related to COVID-19 vaccines. The dataset can be used to discover the attitude expressed in a tweet towards the subject of COVID-19 vaccines. Tweets are in English. The dataset was curated in such a way as to maximize the likelihood of tweets with a strong emotional tone. We have assumed the existence of three classes: - PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19 - NEUTRAL (label 1): the tweet is mostly informative and does not express emotion toward the presented information, or it contains strong positive or negative emotions but only concerning politics (vaccine distribution, vaccine passports, etc.) - AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc. The dataset does not contain the content of Twitter statuses. Original tweets can be obtained via the Twitter API. You can use the [`twitter`](https://python-twitter.readthedocs.io/en/latest/index.html) library: ```python import twitter from datasets import load_dataset api = twitter.Api(consumer_key=<consumer key>, consumer_secret=<consumer secret>, access_token_key=<access token>, access_token_secret=<access token secret>, sleep_on_rate_limit=True) tweets = load_dataset('webimmunization/COVID-19-vaccine-attitude-tweets') def add_tweet_content(example): # fetch the tweet text for this example's status id try: text = api.GetStatus(example['id']).text except twitter.TwitterError as err: print(err) text = None return {'text': text} tweets_with_text = tweets.map(add_tweet_content) ``` ### Supported Tasks and Leaderboards - `text-classification`: The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines, whether the tweet presents a positive, neutral or negative attitude. Success on this task can be measured by achieving a *high* AUROC or [F1](https://huggingface.co/metrics/f1). ### Languages [EN] English. The text that can be accessed via the Twitter API using the identifiers in this dataset is in English. ## Dataset Structure ### Data Instances The 1st column is the Twitter Status ID and the 2nd column is the label denoting the attitude towards vaccines against COVID-19. Example: ``` { 'id': '1387627601955545089', 'attitude': 0 # positive attitude } ``` ### Data Fields - `attitude`: attitude towards vaccines against COVID-19. `0` denotes positive attitude, `1` denotes neutral attitude, `2` denotes negative attitude. - `id`: Twitter status id ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data Social media posts.
#### Initial Data Collection and Normalization We queried the Twitter search engine with manually curated hashtags such as #coronavaccine, #getvaccinated, #mRNA, #PfizerGang, #VaccineNoThankYou, #vaccinesWork, #BillGatesVaccine, #VaccinesKill, etc. to fetch tweets related to COVID-19 vaccines. We then searched for tweets with a conspicuous emotional load, both negative and positive. Once we had the set of emotionally loaded tweets, we started fetching other tweets posted by their authors. We collected tweets from mid-April for about a month, and then filtered out tweets that were not related to the vaccines. In this manner, we collected tweets that are more likely to be emotional rather than strictly informative. #### Who are the source language producers? The language producers are users of Twitter. ### Annotations #### Annotation process We have manually annotated over 2500 tweets using the following annotation protocol. We have assumed the existence of three classes: - PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19 - NEUTRAL (label 1): the tweet is mostly informative and does not express emotion toward the presented information, or it contains strong positive or negative emotions but only concerning politics (vaccine distribution, vaccine passports, etc.) - AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc. The PRO class consists of tweets which explicitly urge people to go get vaccinated. The AGAINST class contains tweets which explicitly warn people against getting the vaccine. Tweet annotation was conducted using the [Prodigy](https://prodi.gy) tool. The annotators were provided with the following instructions: - Do not spend too much time on a tweet and try to make a quick decision; a slight discrepancy in labeling (especially if you are deciding between *PRO* and *NEUTRAL*) will not affect the classifier significantly. - Assign tweets that seem to originate from news sites as *NEUTRAL* and use *PRO* for tweets that express unequivocal support for getting the vaccine. - There are many tweets on vaccination and politics. They should fall into the *NEUTRAL* class unless they contain a clear call to action: go get vaccinated! - Use only the contents of the tweet to label it; do not open the links. If the content of a tweet is not enough for labeling (e.g., “Hmm, interesting, https://t.co/ki345o2i345”), skip the tweet instead of giving it a label. - Use the option to skip a tweet only when there is nothing in the tweet except for a URL or a few meaningless words; otherwise, do not hesitate to put the tweet in the *NEUTRAL* class. To verify the annotation protocol, we asked 8 annotators to annotate the same set of 100 tweets using its guidelines. We measured the inter-rater agreement using Fleiss' kappa coefficient <cite>[Fleiss 1971][1]</cite>.
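For reference, this kind of agreement score can be computed with `statsmodels`; the sketch below uses a toy rating matrix, not the project's annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per tweet, one column per annotator; values are the assigned
# classes (0 = PRO, 1 = NEUTRAL, 2 = AGAINST, 3 = NONE/rejected).
# This matrix is illustrative only.
ratings = np.array([
    [0, 0, 1, 0, 0, 1, 0, 0],
    [2, 2, 2, 2, 2, 2, 2, 2],
    [1, 0, 1, 1, 3, 1, 1, 0],
])

counts, _ = aggregate_raters(ratings)  # tweets x categories count table
print(fleiss_kappa(counts))
```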
The results were as follows: - when measuring the agreement with four possible classes (*PRO*, *NEUTRAL*, *AGAINST*, *NONE*, where the last class represents tweets that were rejected from annotation), the agreement is `kappa=0.3940` - when measuring the agreement after removing tweets that were rejected, the agreement is `kappa=0.3560` - when measuring the agreement if rejected tweets are classified as *NEUTRAL*, the agreement is `kappa=0.3753` - when measuring the agreement for only two classes (using *PRO*, *NEUTRAL* and *NONE* as one class, and *AGAINST* as another class), the agreement is `kappa=0.5419` #### Who are the annotators? [Members of the #WebImmunization project](https://webimmunization.cm-uj.krakow.pl/en/team/) ### Personal and Sensitive Information According to the Twitter developer policy, if displayed content ceases to be available through the Twitter API, it cannot be obtained from other sources. Thus, we provide tweet ids to maintain the integrity of all Twitter content with the Twitter service. The proper way to extract tweet content is via the Twitter API. Whenever Twitter suspends the author of a tweet, or the author deletes it, the tweet's content can no longer be obtained with this dataset. ## Considerations for Using the Data ### Social Impact of Dataset COVID-19 is a serious global health threat that can be mitigated only by public health interventions that require massive participation. Mass vaccination against COVID-19 is one of the most effective and economically promising solutions to stop the spread of the SARS-CoV-2 virus, which is responsible for the pandemic. Understanding how misinformation about COVID-19 vaccines is spreading in one of the globally most important social networks is paramount. ### Discussion of Biases [Needs More Information] ### Other Known Limitations #### Inter-annotator agreement According to a popular interpretation of Fleiss' kappa <cite>[Landis 1977][2]</cite>, the annotators are in fair agreement in the first three scenarios and moderate agreement in the last scenario. These results suggest that the annotators struggle to distinguish between the *PRO* and *NEUTRAL* classes, and sometimes have divergent opinions on whether a tweet should be rejected from training. Still, they are coherent when labeling *AGAINST* tweets. #### Suspended accounts & deleted tweets Some of the statuses from the dataset cannot be obtained due to account suspension or tweet deletion. The last time we checked (15th of November, 2021), about 12% of tweets were authored by suspended accounts and about 10% were already deleted. ### Dataset Curators Agata Olejniuk, Poznan University of Technology, Poland. The research leading to these results has received funding from the EEA Financial Mechanism 2014-2021. Project registration number: 2019/35/J/HS6/03498.
### Licensing Information [Needs More Information] ### Citation Information ``` @inproceedings{krysinska2021careful, title={Be Careful Who You Follow: The Impact of the Initial Set of Friends on COVID-19 Vaccine Tweets}, author={Krysi{\'n}ska, Izabela and W{\'o}jtowicz, Tomi and Olejniuk, Agata and Morzy, Miko{\l}aj and Piasecki, Jan}, booktitle={Proceedings of the 2021 Workshop on Open Challenges in Online Social Networks}, pages={1--8}, year={2021} } ``` [DOI](https://doi.org/10.1145/3472720.3483619) ### Contributions We would like to cordially thank the [members of the #WebImmunization project](https://webimmunization.cm-uj.krakow.pl/en/team/) for helping with data annotation. ## References [1]: Joseph L. Fleiss. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378, 1971. [2]: J. Richard Landis and Gary G. Koch. The measurement of observer agreement for categorical data. Biometrics, pages 159–174, 1977.
xiaobendanyn/tacred
2021-10-29T09:23:40.000Z
[ "region:us" ]
xiaobendanyn
null
null
null
4
3
Entry not found
yuvalkirstain/quality
2021-12-30T10:05:25.000Z
[ "region:us" ]
yuvalkirstain
null
null
null
1
3
Entry not found
zapsdcn/imdb
2021-12-08T20:18:28.000Z
[ "region:us" ]
zapsdcn
null
null
null
0
3
Entry not found
AhmedSSoliman/QRCD
2022-03-06T18:58:06.000Z
[ "region:us" ]
AhmedSSoliman
null
null
null
0
3
This dataset is presented for the task of Answering Questions on the Holy Qur'an. https://sites.google.com/view/quran-qa-2022 QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. It is split into training (65%), development (10%), and test (25%) sets. QRCD is distributed as a JSON Lines (JSONL) file; each line is a JSON object that comprises a question-passage pair, along with its answers extracted from the accompanying passage.
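As a reading aid, here is a generic sketch for loading a JSONL file like QRCD; the file name and the inspected keys are placeholders, not the official QRCD schema.

```python
import json

# The file name below is a placeholder for the actual QRCD release file.
with open("qrcd_train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(len(records))
print(records[0].keys())  # inspect the real field names of the release
```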
Marianina/sentiment-banking
2022-03-08T19:09:50.000Z
[ "region:us" ]
Marianina
null
null
null
0
3
Entry not found
laion/laion1B-nolang
2022-03-09T15:04:35.000Z
[ "license:cc-by-4.0", "region:us" ]
laion
null
null
null
4
3
--- license: cc-by-4.0 ---
ai4bharat/IndicQuestionGeneration
2022-10-13T06:08:25.000Z
[ "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:98K<n<98K", "source_datasets:we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages.", ...
ai4bharat
This is the Question Generation dataset released as part of IndicNLG Suite. Each example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. This is translated data; the examples are parallel across languages. The number of examples in each language is 98,027.
@inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" }
null
1
3
--- annotations_creators: - no-annotation language_creators: - found language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te license: - cc-by-nc-4.0 multilinguality: - multilingual pretty_name: IndicQuestionGeneration size_categories: - 98K<n<98K source_datasets: - we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages. task_categories: - conditional-text-generation task_ids: - conditional-text-generation-other-question-generation --- # Dataset Card for "IndicQuestionGeneration" ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite - **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437) - **Point of Contact:** ### Dataset Summary IndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. This is translated data; the examples are parallel across languages. The number of examples in each language is 98,027. ### Supported Tasks and Leaderboards **Tasks:** Question Generation **Leaderboards:** Currently there is no Leaderboard for this dataset. ### Languages - `Assamese (as)` - `Bengali (bn)` - `Gujarati (gu)` - `Kannada (kn)` - `Hindi (hi)` - `Malayalam (ml)` - `Marathi (mr)` - `Oriya (or)` - `Punjabi (pa)` - `Tamil (ta)` - `Telugu (te)` ## Dataset Structure ### Data Instances One random example from the `hi` dataset is given below in JSON format. ``` { "id": 8, "squad_id": "56be8e613aeaaa14008c90d3", "answer": "अमेरिकी फुटबॉल सम्मेलन", "context": "अमेरिकी फुटबॉल सम्मेलन (एएफसी) के चैंपियन डेनवर ब्रोंकोस ने नेशनल फुटबॉल कांफ्रेंस (एनएफसी) की चैंपियन कैरोलिना पैंथर्स को 24-10 से हराकर अपना तीसरा सुपर बाउल खिताब जीता।", "question": "एएफसी का मतलब क्या है?" } ``` ### Data Fields - `id (string)`: Unique identifier. - `squad_id (string)`: Unique identifier in Squad dataset.
- `answer (string)`: Answer, one of the two inputs. - `context (string)`: Context, the other input. - `question (string)`: Question, the output. ### Data Splits Here is the number of samples in each split for all the languages. Language | ISO 639-1 Code | Train | Dev | Test | ---------- | ---------- | ---------- | ---------- | ---------- | Assamese | as | 69,979 | 17,495 | 10,553 | Bengali | bn | 69,979 | 17,495 | 10,553 | Gujarati | gu | 69,979 | 17,495 | 10,553 | Hindi | hi | 69,979 | 17,495 | 10,553 | Kannada | kn | 69,979 | 17,495 | 10,553 | Malayalam | ml | 69,979 | 17,495 | 10,553 | Marathi | mr | 69,979 | 17,495 | 10,553 | Oriya | or | 69,979 | 17,495 | 10,553 | Punjabi | pa | 69,979 | 17,495 | 10,553 | Tamil | ta | 69,979 | 17,495 | 10,553 | Telugu | te | 69,979 | 17,495 | 10,553 | ## Dataset Creation ### Curation Rationale [Detailed in the paper](https://arxiv.org/abs/2203.05437) ### Source Data SQuAD dataset (https://rajpurkar.github.io/SQuAD-explorer/) #### Initial Data Collection and Normalization [Detailed in the paper](https://arxiv.org/abs/2203.05437) #### Who are the source language producers? [Detailed in the paper](https://arxiv.org/abs/2203.05437) ### Annotations [More information needed] #### Annotation process [More information needed] #### Who are the annotators? [More information needed] ### Personal and Sensitive Information [More information needed] ## Considerations for Using the Data ### Social Impact of Dataset [More information needed] ### Discussion of Biases [More information needed] ### Other Known Limitations [More information needed] ## Additional Information ### Dataset Curators [More information needed] ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use any of the datasets, models or code modules, please cite the following paper: ``` @inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" } ``` ### Contributions [Detailed in the paper](https://arxiv.org/abs/2203.05437)
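A minimal loading sketch for the card above, assuming each language is exposed as a configuration named by its ISO 639-1 code (as in the splits table).

```python
from datasets import load_dataset

# "hi" is used as an example configuration name; see the table above
# for the other language codes.
dataset = load_dataset("ai4bharat/IndicQuestionGeneration", "hi")

sample = dataset["train"][0]
print(sample["answer"])    # input 1
print(sample["context"])   # input 2
print(sample["question"])  # target output
```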
sasha/pii-oscar-sample
2022-03-10T18:35:33.000Z
[ "region:us" ]
sasha
null
null
null
0
3
Entry not found
lewtun/top_quark_tagging
2022-04-03T14:26:05.000Z
[ "license:cc-by-4.0", "region:us" ]
lewtun
Top Quark Tagging is a dataset of Monte Carlo simulated hadronic top and QCD dijet events for the evaluation of top quark tagging architectures. The dataset consists of 1.2M training events, 400k validation events and 400k test events.
@dataset{kasieczka_gregor_2019_2603256, author = {Kasieczka, Gregor and Plehn, Tilman and Thompson, Jennifer and Russel, Michael}, title = {Top Quark Tagging Reference Dataset}, month = mar, year = 2019, publisher = {Zenodo}, version = {v0 (2018\_03\_27)}, doi = {10.5281/zenodo.2603256}, url = {https://doi.org/10.5281/zenodo.2603256} }
null
0
3
--- license: cc-by-4.0 --- # Top Quark Tagging Reference Dataset A set of MC simulated training/testing events for the evaluation of top quark tagging architectures. In total 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during training, and “test” for final testing and reporting results. ## Description * 14 TeV; hadronic tops as signal, QCD dijets as background; Delphes ATLAS detector card with Pythia8 * No MPI/pile-up included * Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV * All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8 * Jets are required to have |eta| < 2 * The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200 * Constituents are sorted by pT, with the highest pT one first * The truth top four-momentum is stored as truth_px etc. * A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new * The variable "ttv" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files.
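A minimal sketch for unpacking the flat constituent columns into per-jet arrays. It assumes the HDF5 layout of the original Zenodo release (columns `E_i`, `PX_i`, `PY_i`, `PZ_i` for i = 0..199 under the key `table`); check `df.columns` against the actual files before relying on this.

```python
import numpy as np
import pandas as pd

# File name is a placeholder; the key and column names are assumptions
# about the release layout.
df = pd.read_hdf("train.h5", key="table")

cols = [f"{v}_{i}" for i in range(200) for v in ("E", "PX", "PY", "PZ")]
constituents = df[cols].to_numpy().reshape(len(df), 200, 4)  # (jets, constituents, E/px/py/pz)
labels = df["is_signal_new"].to_numpy()  # 1 = top, 0 = QCD

print(constituents.shape, labels.mean())
```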
IIC/bioasq22_es
2022-10-23T05:18:18.000Z
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:Helsinki-NLP/opus-mt-en-es", "language:es", "region:us" ]
IIC
null
null
null
2
3
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - es multilinguality: - monolingual pretty_name: BIOASQ size_categories: - 100K<n<1M source_datasets: - Helsinki-NLP/opus-mt-en-es task_categories: - sequence-modeling task_ids: - language-modeling --- # BIOASQ 2022 Spanish This is an automatically translated version of the BioASQ dataset, a dataset used for question answering in the biomedical domain. The questions, answers and contexts were translated using the [MarianMT English-Spanish model](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). Because the translation process may return answers that are not literally present in the translated context, we developed an alignment algorithm based on sentence tokenization and on the intersection between the words of the answer and the words of each candidate portion of the context; the best-matching paragraph is then extracted from the context as the answer span (a sketch of this idea is given at the end of this card). License, distribution and usage conditions of the original dataset apply. ### Contributions Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset.
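A rough sketch of the alignment idea described above, assuming NLTK's sentence tokenizer; this is a reconstruction of the description, not the authors' actual code.

```python
from nltk.tokenize import sent_tokenize  # requires: nltk.download('punkt')

def align_answer(context: str, translated_answer: str) -> str:
    """Return the context sentence whose words overlap most with the
    translated answer (illustrative reconstruction, not the original code)."""
    answer_words = set(translated_answer.lower().split())
    best_sentence, best_overlap = "", 0.0
    for sentence in sent_tokenize(context, language="spanish"):
        sentence_words = set(sentence.lower().split())
        overlap = len(sentence_words & answer_words) / max(len(answer_words), 1)
        if overlap > best_overlap:
            best_sentence, best_overlap = sentence, overlap
    return best_sentence
```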
josearangos/spanish-calls-corpus-Friends
2022-03-22T03:31:22.000Z
[ "region:us" ]
josearangos
null
null
null
0
3
Entry not found
sumedh/MeQSum
2022-03-24T20:20:43.000Z
[ "license:apache-2.0", "region:us" ]
sumedh
null
null
null
0
3
--- license: apache-2.0 language: - en multilinguality: - monolingual task_ids: - summarization --- # MeQSum Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health Questions": https://www.aclweb.org/anthology/P19-1215 ### Citation Information ```bibtex @Inproceedings{MeQSum, author = {Asma {Ben Abacha} and Dina Demner-Fushman}, title = {On the Summarization of Consumer Health Questions}, booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28th - August 2}, year = {2019}, abstract = {Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization. }} ```
huggan/cityscapes
2022-04-12T13:56:44.000Z
[ "arxiv:1703.10593", "region:us" ]
huggan
null
null
null
0
3
This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/ # Citation ``` @article{DBLP:journals/corr/ZhuPIE17, author = {Jun{-}Yan Zhu and Taesung Park and Phillip Isola and Alexei A. Efros}, title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks}, journal = {CoRR}, volume = {abs/1703.10593}, year = {2017}, url = {http://arxiv.org/abs/1703.10593}, eprinttype = {arXiv}, eprint = {1703.10593}, timestamp = {Mon, 13 Aug 2018 16:48:06 +0200}, biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
hackathon-pln-es/Dataset-Acoso-Twitter-Es
2022-03-31T00:03:51.000Z
[ "license:gpl-3.0", "region:us" ]
hackathon-pln-es
null
null
null
2
3
--- license: gpl-3.0 language: - es --- # UNL: Universidad Nacional de Loja ### Team members: - Anderson Quizhpe <br> - Luis Negrón <br> - David Pacheco <br> - Bryan Requenes <br> - Paul Pasaca <br><br>
laion/laion2B-multi-watermark
2022-03-29T22:50:20.000Z
[ "license:cc-by-4.0", "region:us" ]
laion
null
null
null
1
3
--- license: cc-by-4.0 ---
blo05/cleaned_wiki_en_0-20
2022-03-30T14:15:55.000Z
[ "region:us" ]
blo05
null
null
null
0
3
Entry not found
artemis13fowl/imdb
2022-03-30T15:35:39.000Z
[ "region:us" ]
artemis13fowl
null
null
null
0
3
Entry not found
hackathon-pln-es/disco_spanish_poetry
2022-03-30T21:50:28.000Z
[ "region:us" ]
hackathon-pln-es
null
null
null
8
3
# DISCO: Diachronic Spanish Sonnet Corpus [![DOI](https://zenodo.org/badge/103841064.svg)](https://zenodo.org/badge/latestdoi/103841064) The Diachronic Spanish Sonnet Corpus (DISCO) contains Spanish sonnets in CSV format, written between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones. This is a CSV compilation taken from the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age and text metadata. <br><br>
hackathon-pln-es/readability-es-hackathon-pln-public
2023-04-13T08:51:15.000Z
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:es", "license:cc-by-4.0", "readability", "region:us" ]
hackathon-pln-es
null
null
null
0
3
--- annotations_creators: - found language_creators: - found language: - es license: - cc-by-4.0 multilinguality: - monolingual size_categories: - unknown source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: readability-es-sentences tags: - readability --- # Dataset Card for [readability-es-sentences] ## Dataset Description Compilation of short Spanish articles for readability assessment. ### Dataset Summary This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources: - **Coh-Metrix-Esp corpus (Quispesaravia et al., 2016):** collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category. - **[kwiziq](https://www.kwiziq.com/):** a language learner assistant - **[hablacultura.com](https://hablacultura.com/):** Spanish resources for students and teachers. We have downloaded the available content from their websites. ### Languages Spanish ## Dataset Structure The dataset includes 1019 text entries between 80 and 8714 characters long. The vast majority (97%) are below 4,000 characters long. ### Data Fields The dataset is formatted as JSON Lines and includes the following fields: - **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR). - **Level:** standardized readability level: complex or simple. - **Level-3:** standardized readability level: basic, intermediate or advanced - **Text:** original text formatted into sentences. Not all the entries contain usable values for `category`, `level` and `level-3`, but all of them should contain at least one of `level`, `level-3`. When the corresponding information could not be derived, we use the special `"N/A"` value to indicate so. ## Additional Information ### Licensing Information https://creativecommons.org/licenses/by-nc-sa/4.0/ ### Citation Information Please cite this page to give credit to the authors :) ### Team - [Laura Vásquez-Rodríguez](https://lmvasque.github.io/) - [Pedro Cuenca](https://twitter.com/pcuenq) - [Sergio Morales](https://www.fireblend.com/) - [Fernando Alva-Manchego](https://feralvam.github.io/)
huggan/smithsonian-butterfly-lowres
2022-04-06T19:57:24.000Z
[ "license:cc0-1.0", "region:us" ]
huggan
null
null
null
3
3
--- license: cc0-1.0 --- Collection of pinned butterfly images from the Smithsonian: https://www.si.edu/spotlight/buginfo/butterfly Doesn't include metadata yet! URL pattern: "https://ids.si.edu/ids/deliveryService?max_w=550&id=ark:/65665/m3c70e17cf30314fd4ad86afa7d1ebf49f" Added sketch versions! sketch_pidinet is generated by https://github.com/zhuoinoulu/pidinet sketch_pix2pix is generated by https://github.com/mtli/PhotoSketch
ukr-models/Ukr-Synth
2023-08-31T09:35:43.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:parsing", "task_ids:part-of-speech", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "language:uk", "license:mit", "region:us" ]
ukr-models
Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.
null
null
8
3
--- annotations_creators: - machine-generated language_creators: - found language: - uk license: - mit multilinguality: - monolingual size_categories: - 1M<n<10M task_categories: - token-classification task_ids: - named-entity-recognition - parsing - part-of-speech pretty_name: Ukrainian synthetic dataset in conllu format --- # Dataset Card for Ukr-Synth ## Dataset Description ### Dataset Summary Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags. Represents a subsample of the [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained on gold-standard Ukrainian language datasets. ### Languages Ukrainian ## Dataset Structure ### Data Splits | name |train |validation| |---------|-------:|---------:| |conll2003|1000000| 10000| ## Dataset Creation ### Source Data Leipzig Corpora Collection: D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages. In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012 ## Additional Information ### Licensing Information MIT License Copyright (c) 2022 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
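For readers working with the underlying CoNLL-U files, a minimal parsing sketch with the `conllu` library; the file name is a placeholder.

```python
from conllu import parse

# Placeholder file name; point this at a downloaded CoNLL-U file.
with open("ukr_synth_sample.conllu", encoding="utf-8") as f:
    sentences = parse(f.read())

token = sentences[0][0]
# Morphology, syntax head and relation, as annotated in the corpus.
print(token["form"], token["upos"], token["feats"], token["head"], token["deprel"])
```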
NLPC-UOM/Sinhala-News-Category-classification
2022-10-25T10:03:58.000Z
[ "task_categories:text-classification", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:si", "license:mit", "region:us" ]
NLPC-UOM
null
null
null
0
3
--- annotations_creators: [] language_creators: - crowdsourced language: - si license: - mit multilinguality: - monolingual pretty_name: sinhala-news-category-classification size_categories: - 1K<n<10K source_datasets: [] task_categories: - text-classification task_ids: [] --- This dataset contains news texts (sentences) belonging to five news categories: political, business, technology, sports, and entertainment. The original dataset was released by Nisansa de Silva (*Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*). It has been processed to remove single-word texts, English-only sentences, and similar noise. If you use this dataset, please cite {*Nisansa de Silva, Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*} and {*Dhananjaya et al., BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*}.
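A loading sketch with a class-balance check (the repo id is taken from this entry; the label column name is a guess, so the code probes the schema first):

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("NLPC-UOM/Sinhala-News-Category-classification", split="train")
print(dataset.features)  # inspect the real column names

# "category" is a hypothetical column name; guard the access accordingly.
if "category" in dataset.column_names:
    print(Counter(dataset["category"]))
```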
NLPC-UOM/Sinhala-News-Source-classification
2022-10-25T10:04:01.000Z
[ "task_categories:text-classification", "language_creators:crowdsourced", "multilinguality:monolingual", "language:si", "license:mit", "region:us" ]
NLPC-UOM
null
null
null
0
3
--- annotations_creators: [] language_creators: - crowdsourced language: - si license: - mit multilinguality: - monolingual pretty_name: sinhala-news-source-classification size_categories: [] source_datasets: [] task_categories: - text-classification task_ids: [] --- This dataset contains Sinhala news headlines extracted from nine news sources (websites): Sri Lanka Army, Dinamina, GossipLanka, Hiru, ITN, Lankapuwath, NewsLK, Newsfirst, and the World Socialist Web Site (Sinhala). It is a processed version of the corpus created by *Sachintha, D., Piyarathna, L., Rajitha, C., and Ranathunga, S. (2021). Exploiting parallel corpora to improve multilingual embedding based document and sentence alignment*. Single-word sentences and invalid characters were removed from the originally extracted corpus, which was then subsampled to handle class imbalance. If you use this dataset, please cite {*Dhananjaya et al., BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*}.
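An illustrative sketch of class-balanced subsampling like the step described above (not the authors' actual preprocessing script; the `source` key is hypothetical):

```python
import random
from collections import defaultdict

def subsample_balanced(examples, key="source", seed=42):
    """Downsample every class to the size of the smallest class."""
    by_class = defaultdict(list)
    for ex in examples:
        by_class[ex[key]].append(ex)
    smallest = min(len(items) for items in by_class.values())
    rng = random.Random(seed)
    balanced = []
    for items in by_class.values():
        balanced.extend(rng.sample(items, smallest))
    rng.shuffle(balanced)
    return balanced

# Toy demonstration: 5 "Hiru" headlines vs. 2 "ITN" headlines -> 2 of each kept.
toy = [{"source": "Hiru", "text": "..."}] * 5 + [{"source": "ITN", "text": "..."}] * 2
print(len(subsample_balanced(toy)))  # 4
```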
huggingnft/etherbears
2022-04-16T17:59:07.000Z
[ "license:mit", "huggingnft", "nft", "huggan", "gan", "image", "images", "region:us" ]
huggingnft
null
null
null
0
3
--- tags: - huggingnft - nft - huggan - gan - image - images task: - unconditional-image-generation datasets: - huggingnft/etherbears license: mit --- # Dataset Card ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft) - **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary NFT images dataset for unconditional generation. NFT collection available [here](https://opensea.io/collection/etherbears). Model is available [here](https://huggingface.co/huggingnft/etherbears). Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingnft/etherbears") ``` ## Dataset Structure [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields The data fields are the same among all splits. - `image`: an `image` feature. - `id`: an `int` feature. - `token_metadata`: a `str` feature. - `image_original_url`: a `str` feature. ### Data Splits [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingnft, author={Aleksey Korshuk}, year={2022} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
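Following the load example in the card above, a short sketch of accessing the documented fields (the split name and the JSON structure of `token_metadata` are assumptions):

```python
import json

from datasets import load_dataset

dataset = load_dataset("huggingnft/etherbears", split="train")  # split name assumed

example = dataset[0]
print(example["id"], example["image_original_url"])
print(example["image"].size)  # the `image` feature decodes to a PIL image

# Treating token_metadata as a JSON string is an assumption; inspect it first.
metadata = json.loads(example["token_metadata"])
print(list(metadata))
```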