text-classification
bool
2 classes
text
stringlengths
0
664k
false
# Dataset Card for Ollie ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Ollie](https://knowitall.github.io/ollie/) - **Repository:** [Github](https://github.com/knowitall/ollie) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/D12-1048/) ### Dataset Summary The Ollie dataset includes two configs for the data used to train the Ollie informatation extraction algorithm, for 18M sentences and 3M sentences respectively. This data is for academic use only. From the authors: Ollie is a program that automatically identifies and extracts binary relationships from English sentences. Ollie is designed for Web-scale information extraction, where target relations are not specified in advance. Ollie is our second-generation information extraction system . Whereas ReVerb operates on flat sequences of tokens, Ollie works with the tree-like (graph with only small cycles) representation using Stanford's compression of the dependencies. This allows Ollie to capture expression that ReVerb misses, such as long-range relations. Ollie also captures context that modifies a binary relation. Presently Ollie handles attribution (He said/she believes) and enabling conditions (if X then). More information is available at the Ollie homepage: https://knowitall.github.io/ollie/ ### Supported Tasks and Leaderboards [More Information Needed] ### Languages en ## Dataset Structure ### Data Instances There are two configurations for the dataset: ollie_lemmagrep which are 18M sentences from web searches for a subset of the Reverb relationships (110,000 relationships), and the 3M sentences for ollie_patterned which is a subset of the ollie_lemmagrep dataset derived from patterns according to the Ollie paper. An example of an ollie_lemmagrep record: `` {'arg1': 'adobe reader', 'arg2': 'pdf', 'chunk': 'B-NP I-NP I-NP I-NP B-PP B-NP I-NP B-VP B-PP B-NP I-NP O B-VP B-NP I-NP I-NP I-NP B-VP I-VP I-VP O', 'pos': 'JJ NNS CC NNS IN PRP$ NN VBP IN NNP NN CC VB DT NNP NNP NNP TO VB VBN .', 'rel': 'be require to view', 'search_query': 'require reader pdf adobe view', 'sentence': 'Many documents and reports on our site are in PDF format and require the Adobe Acrobat Reader to be viewed .', 'sentence_cnt': '9', 'words': 'many,document,and,report,on,our,site,be,in,pdf,format,and,require,the,adobe,acrobat,reader,to,be,view'} `` An example of an ollie_patterned record: `` {'arg1': 'english', 'arg2': 'internet', 'parse': '(in_IN_6), advmod(important_JJ_4, most_RBS_3); nsubj(language_NN_5, English_NNP_0); cop(language_NN_5, being_VBG_1); det(language_NN_5, the_DT_2); amod(language_NN_5, important_JJ_4); prep_in(language_NN_5, era_NN_9); punct(language_NN_5, ,_,_10); conj(language_NN_5, education_NN_12); det(era_NN_9, the_DT_7); nn(era_NN_9, Internet_NNP_8); amod(education_NN_12, English_JJ_11); nsubjpass(enriched_VBN_15, language_NN_5); aux(enriched_VBN_15, should_MD_13); auxpass(enriched_VBN_15, be_VB_14); punct(enriched_VBN_15, ._._16)', 'pattern': '{arg1} <nsubj< {rel:NN} >prep_in> {slot0:NN} >nn> {arg2}', 'rel': 'be language of', 'search_query': 'english language internet', 'sentence': 'English being the most important language in the Internet era , English education should be enriched .', 'slot0': 'era'} `` ### Data Fields For ollie_lemmagrep: * rel: the relationship phrase/verb phrase. This may be empty, which represents the "be" relationship. * arg1: the first argument in the relationship * arg2: the second argument in the relationship. * chunk: a tag of each token in the sentence, showing the pos chunks * pos: part of speech tagging of the sentence * sentence: the sentence * sentence_cnt: the number of copies of this sentence encountered * search_query: a combintion of rel, arg1, arg2 * words: the lemma of the words of the sentence separated by commas For ollie_patterned: * rel: the relationship phrase/verb phrase. * arg1: the first argument in the relationship * arg2: the second argument in the relationship. * slot0: the third argument in the relationship, which might be empty. * pattern: a parse pattern for the relationship * parse: a dependency parse forthe sentence * search_query: a combintion of rel, arg1, arg2 * sentence: the senence ### Data Splits There are no splits. ## Dataset Creation ### Curation Rationale This dataset was created as part of research on open information extraction. ### Source Data #### Initial Data Collection and Normalization See the research paper on OLlie. The training data is extracted from web pages (Cluebweb09). #### Who are the source language producers? The Ollie authors at the Univeristy of Washington and data from Cluebweb09 and the open web. ### Annotations #### Annotation process The various parsers and code from the Ollie alogrithm. #### Who are the annotators? Machine annotated. ### Personal and Sensitive Information Unkown, but likely there are names of famous individuals. ## Considerations for Using the Data ### Social Impact of Dataset The goal for the work is to help machines learn to extract information form open domains. ### Discussion of Biases Since the data is gathered from the web, there is likely to be biased text and relationships. [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The authors of Ollie at The University of Washington ### Licensing Information The University of Washington academic license: https://raw.githubusercontent.com/knowitall/ollie/master/LICENSE ### Citation Information ``` @inproceedings{ollie-emnlp12, author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni}, title = {Open Language Learning for Information Extraction}, booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)}, year = {2012} } ``` ### Contributions Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset.
true
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://clarin-pl.eu/dspace/handle/11321/710 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The PolEmo2.0 is a set of online reviews from medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: string, the review - target: sentiment of the sentence class The same tag system is used in plWordNet Emo for lexical units: [+m] (strong positive), [+s] (weak positive), [-m] (strong negative), [-s] (weak negative), [amb] (ambiguous) and [0] (neutral). Note that the test set doesn't have targets so -1 is used instead ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
false
# Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. ### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
false
# Dataset Card for "code_x_glue_cc_code_completion_token" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token ### Dataset Summary CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token Predict next code token given context of previous tokens. Models are evaluated by token level accuracy. Code completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types. ### Supported Tasks and Leaderboards - `language-modeling`: The dataset can be used to train a model for completing single code tokens. ### Languages - Java **programming** language - Python **programming** language ## Dataset Structure ### Data Instances #### java An example of 'test' looks as follows. ``` { "code": ["<s>", "package", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "demo", ";", "import", "java", ".", "io", ".", "BufferedReader", ";", "import", "java", ".", "io", ".", "ByteArrayInputStream", ";", "import", "java", ".", "io", ".", "IOException", ";", "import", "java", ".", "io", ".", "InputStreamReader", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "Clara", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "inflater", ".", "LayoutInflaterException", ";", "import", "com", ".", "vaadin", ".", "Application", ";", "import", "com", ".", "vaadin", ".", "terminal", ".", "ThemeResource", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ".", "ClickEvent", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Component", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Embedded", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalSplitPanel", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "TextArea", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "VerticalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ".", "Notification", ";", "@", "SuppressWarnings", "(", "\"serial\"", ")", "public", "class", "DemoApplication", "extends", "Application", "{", "private", "DemoController", "controller", ";", "private", "TextArea", "xmlArea", ";", "private", "HorizontalSplitPanel", "split", "=", "new", "HorizontalSplitPanel", "(", ")", ";", "private", "Window", "mainWindow", ";", "@", "Override", "public", "void", "init", "(", ")", "{", "setTheme", "(", "\"clara\"", ")", ";", "setMainWindow", "(", "mainWindow", "=", "new", "Window", "(", ")", ")", ";", "controller", "=", "new", "DemoController", "(", "mainWindow", ")", ";", "mainWindow", ".", "setContent", "(", "split", ")", ";", "VerticalLayout", "editor", "=", "new", "VerticalLayout", "(", ")", ";", "editor", ".", "setSpacing", "(", "true", ")", ";", "editor", ".", "setMargin", "(", "false", ",", "false", ",", "false", ",", "true", ")", ";", "editor", ".", "setHeight", "(", "\"100%\"", ")", ";", "editor", ".", "addComponent", "(", "xmlArea", "=", "createXmlArea", "(", ")", ")", ";", "editor", ".", "setExpandRatio", "(", "xmlArea", ",", "1.0f", ")", ";", "editor", ".", "addComponent", "(", "createUpdateButton", "(", ")", ")", ";", "HorizontalLayout", "wrapper", "=", "new", "HorizontalLayout", "(", ")", ";", "wrapper", ".", "setMargin", "(", "true", ")", ";", "wrapper", ".", "setSizeFull", "(", ")", ";", "wrapper", ".", "addComponent", "(", "createLogo", "(", ")", ")", ";", "wrapper", ".", "addComponent", "(", "editor", ")", ";", "wrapper", ".", "setExpandRatio", "(", "editor", ",", "1.0f", ")", ";", "split", ".", "setFirstComponent", "(", "wrapper", ")", ";", "updateLayout", "(", ")", ";", "}", "private", "Component", "createLogo", "(", ")", "{", "Embedded", "logo", "=", "new", "Embedded", "(", "null", ",", "new", "ThemeResource", "(", "\"\"", ")", ")", ";", "logo", ".", "setHeight", "(", "\"90px\"", ")", ";", "logo", ".", "setWidth", "(", "\"90px\"", ")", ";", "return", "logo", ";", "}", "private", "TextArea", "createXmlArea", "(", ")", "{", "TextArea", "area", "=", "new", "TextArea", "(", ")", ";", "area", ".", "setStyleName", "(", "\"xml-area\"", ")", ";", "area", ".", "setSizeFull", "(", ")", ";", "area", ".", "setValue", "(", "readStartingPoint", "(", ")", ")", ";", "return", "area", ";", "}", "private", "Button", "createUpdateButton", "(", ")", "{", "return", "new", "Button", "(", "\"Update\"", ",", "new", "Button", ".", "ClickListener", "(", ")", "{", "public", "void", "buttonClick", "(", "ClickEvent", "event", ")", "{", "updateLayout", "(", ")", ";", "}", "}", ")", ";", "}", "private", "String", "readStartingPoint", "(", ")", "{", "BufferedReader", "reader", "=", "null", ";", "try", "{", "reader", "=", "new", "BufferedReader", "(", "new", "InputStreamReader", "(", "getClass", "(", ")", ".", "getClassLoader", "(", ")", ".", "getResourceAsStream", "(", "\"\"", ")", ")", ")", ";", "StringBuilder", "xml", "=", "new", "StringBuilder", "(", ")", ";", "String", "line", ";", "while", "(", "(", "line", "=", "reader", ".", "readLine", "(", ")", ")", "!=", "null", ")", "{", "xml", ".", "append", "(", "line", ")", ";", "xml", ".", "append", "(", "\"n\"", ")", ";", "}", "return", "xml", ".", "toString", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "finally", "{", "if", "(", "reader", "!=", "null", ")", "{", "try", "{", "reader", ".", "close", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "}", "}", "return", "null", ";", "}", "private", "void", "updateLayout", "(", ")", "{", "try", "{", "Component", "c", "=", "Clara", ".", "create", "(", "new", "ByteArrayInputStream", "(", "xmlArea", ".", "getValue", "(", ")", ".", "toString", "(", ")", ".", "getBytes", "(", ")", ")", ",", "controller", ")", ";", "split", ".", "replaceComponent", "(", "split", ".", "getSecondComponent", "(", ")", ",", "c", ")", ";", "}", "catch", "(", "LayoutInflaterException", "e", ")", "{", "mainWindow", ".", "showNotification", "(", "e", ".", "getMessage", "(", ")", ",", "Notification", ".", "TYPE_ERROR_MESSAGE", ")", ";", "}", "}", "}", "</s>"], "id": 0 } ``` #### python An example of 'train' looks as follows. ``` { "code": ["<s>", "from", "bootstrap", "import", "Bootstrap", "<EOL>", "from", "fund", "import", "InstantPaymentNotificationHandler", "<EOL>", "from", "fund", "import", "ThankYouHandler", "<EOL>", "from", "view", "import", "*", "<EOL>", "mapping", "=", "[", "(", "<EOL>", "r'/'", ",", "<EOL>", "Index", "<EOL>", ")", ",", "(", "<EOL>", "r'/ipn'", ",", "<EOL>", "InstantPaymentNotificationHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/thank-you'", ",", "<EOL>", "ThankYouHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/about\\/?'", ",", "<EOL>", "About", "<EOL>", ")", ",", "(", "<EOL>", "r'/guide\\/?'", ",", "<EOL>", "Guide", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Download", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Standards", "<EOL>", ")", ",", "(", "<EOL>", "r'/community\\/?'", ",", "<EOL>", "Community", "<EOL>", ")", ",", "(", "<EOL>", "r'/news\\/?'", ",", "<EOL>", "News", "<EOL>", ")", ",", "(", "<EOL>", "r'/support\\/?'", ",", "<EOL>", "Support", "<EOL>", ")", ",", "(", "<EOL>", "r'/contact\\/?'", ",", "<EOL>", "Contact", "<EOL>", ")", ",", "(", "<EOL>", "r'/press\\/?'", ",", "<EOL>", "Press", "<EOL>", ")", ",", "(", "<EOL>", "r'/legal/terms'", ",", "<EOL>", "Terms", "<EOL>", ")", ",", "(", "<EOL>", "r'/library\\/?'", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Users", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "User", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectSuccess", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectError", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectAfterDelete", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Moderate", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Bootstrap", "<EOL>", ")", ",", "(", "<EOL>", "r'/activity'", ",", "<EOL>", "ActivityScreen", "<EOL>", ")", ",", "(", "<EOL>", "r'/txns'", ",", "<EOL>", "TxnList", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "MessageStrings", "<EOL>", ")", ",", "(", "<EOL>", "r'/.*'", ",", "<EOL>", "NotFound", "<EOL>", ")", "<EOL>", "]", "</s>"], "id": 0, "path": "00/wikihouse/urls.py\n" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### java |field name| type | description | |----------|----------------|--------------------| |id |int32 | Index of the sample| |code |Sequence[string]| Code Tokens | #### python |field name| type | description | |----------|----------------|-----------------------------| |id |int32 | Index of the sample | |path |string | Original path in the dataset| |code |Sequence[string]| Code Tokens | ### Data Splits #### java | |train|validation|test| |----|----:|---------:|---:| |java|12934| 7189|8268| #### python | |train |test | |------|-----:|----:| |python|100000|50000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{raychev2016probabilistic, title={Probabilistic Model for Code with Decision Trees}, author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin}, journal={ACM SIGPLAN Notices}, pages={731--747}, year={2016}, publisher={ACM New York, NY, USA} } @inproceedings{allamanis2013mining, title={Mining Source Code Repositories at Massive Scale using Language Modeling}, author={Allamanis, Miltiadis and Sutton, Charles}, booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)}, pages={207--216}, year={2013}, organization={IEEE} } ``` The data for "java" configuration comes from: ``` @dataset{rafael_michael_karampatsis_2020_3628665, author = {Rafael - Michael Karampatsis and Hlib Babii and Romain Robbes and Charles Sutton and Andrea Janes}, title = {Preprocessed Java Code Corpus}, month = jan, year = 2020, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3628665}, url = {https://doi.org/10.5281/zenodo.3628665} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
false
# Dataset Card for [Needs More Information] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/m3hrdadfi/wiki-summary - **Repository:** https://github.com/m3hrdadfi/wiki-summary - **Paper:** [More Information Needed] - **Leaderboard:** [More Information Needed] - **Point of Contact:** [Mehrdad Farahani](mailto:m3hrdadphi@gmail.com) ### Dataset Summary The dataset extracted from Persian Wikipedia into the form of articles and highlights and cleaned the dataset into pairs of articles and highlights and reduced the articles' length (only version 1.0.0) and highlights' length to a maximum of 512 and 128, respectively, suitable for parsBERT. This dataset is created to achieve state-of-the-art results on some interesting NLP tasks like Text Summarization. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in Percy. ## Dataset Structure ### Data Instances ``` { 'id' :'0598cfd2ac491a928615945054ab7602034a8f4f', 'link': 'https://fa.wikipedia.org/wiki/انقلاب_1917_روسیه', 'title': 'انقلاب 1917 روسیه', 'article': 'نخست انقلاب فوریه ۱۹۱۷ رخ داد . در این انقلاب پس از یک‌سری اعتصابات ، تظاهرات و درگیری‌ها ، نیکولای دوم ، آخرین تزار روسیه از سلطنت خلع شد و یک دولت موقت به قدرت رسید . دولت موقت زیر نظر گئورگی لووف و الکساندر کرنسکی تشکیل شد . اکثر اعضای دولت موقت ، از شاخه منشویک حزب سوسیال دموکرات کارگری روسیه بودند . دومین مرحله ، انقلاب اکتبر ۱۹۱۷ بود . انقلاب اکتبر ، تحت نظارت حزب بلشویک (شاخه رادیکال از حزب سوسیال دموکرات کارگری روسیه) و به رهبری ولادیمیر لنین به پیش رفت و طی یک یورش نظامی همه‌جانبه به کاخ زمستانی سن پترزبورگ و سایر اماکن مهم ، قدرت را از دولت موقت گرفت . در این انقلاب افراد بسیار کمی کشته شدند . از زمان شکست روسیه در جنگ ۱۹۰۵ با ژاپن ، اوضاع بد اقتصادی ، گرسنگی ، عقب‌ماندگی و سرمایه‌داری و نارضایتی‌های گوناگون در بین مردم ، سربازان ، کارگران ، کشاورزان و نخبگان روسیه به‌وجود آمده‌بود . سرکوبهای تزار و ایجاد مجلس دوما نظام مشروطه حاصل آن دوران است . حزب سوسیال دموکرات ، اصلی‌ترین معترض به سیاست‌های نیکلای دوم بود که به‌طور گسترده بین دهقانان کشاورزان و کارگران کارخانجات صنعتی علیه سیاست‌های سیستم تزار فعالیت داشت . در اوت ۱۹۱۴ میلادی ، امپراتوری روسیه به دستور تزار وقت و به منظور حمایت از اسلاوهای صربستان وارد جنگ جهانی اول در برابر امپراتوری آلمان و امپراتوری اتریش-مجارستان شد . نخست فقط بلشویک‌ها ، مخالف ورود روسیه به این جنگ بودند و می‌گفتند که این جنگ ، سبب بدتر شدن اوضاع نابسامان اقتصادی و اجتماعی روسیه خواهد شد . در سال ۱۹۱۴ میلادی ، یعنی در آغاز جنگ جهانی اول ، روسیه بزرگترین ارتش جهان را داشت ، حدود ۱۲ میلیون سرباز و ۶ میلیون سرباز ذخیره ؛ ولی در پایان سال ۱۹۱۶ میلادی ، پنج میلیون نفر از سربازان روسیه کشته ، زخمی یا اسیر شده بودند . حدود دو میلیون سرباز نیز محل خدمت خود را ترک کرده و غالبا با اسلحه به شهر و دیار خود بازگشته بودند . در میان ۱۰ یا ۱۱ میلیون سرباز باقی‌مانده نیز ، اعتبار تزار و سلسله مراتب ارتش و اتوریته افسران بالا دست از بین رفته بود . عوامل نابسامان داخلی اعم از اجتماعی کشاورزی و فرماندهی نظامی در شکستهای روسیه بسیار مؤثر بود . شکست‌های روسیه در جنگ جهانی اول ، حامیان نیکلای دوم در روسیه را به حداقل خود رساند . در اوایل فوریه ۱۹۱۷ میلادی اکثر کارگران صنعتی در پتروگراد و مسکو دست به اعتصاب زدند . سپس شورش به پادگان‌ها و سربازان رسید . اعتراضات دهقانان نیز گسترش یافت . سوسیال دموکرات‌ها هدایت اعتراضات را در دست گرفتند . در ۱۱ مارس ۱۹۱۷ میلادی ، تزار وقت روسیه ، نیکلای دوم ، فرمان انحلال مجلس روسیه را صادر کرد ، اما اکثر نمایندگان مجلس متفرق نشدند و با تصمیمات نیکلای دوم مخالفت کردند . سرانجام در پی تظاهرات گسترده کارگران و سپس نافرمانی سربازان در سرکوب تظاهرکنندگان در پتروگراد ، نیکلای دوم از مقام خود استعفا داد . بدین ترتیب حکم‌رانی دودمان رومانوف‌ها بر روسیه پس از حدود سیصد سال پایان یافت .', 'highlights': 'انقلاب ۱۹۱۷ روسیه ، جنبشی اعتراضی ، ضد امپراتوری روسیه بود که در سال ۱۹۱۷ رخ داد و به سرنگونی حکومت تزارها و برپایی اتحاد جماهیر شوروی انجامید . مبانی انقلاب بر پایه صلح-نان-زمین استوار بود . این انقلاب در دو مرحله صورت گرفت : در طول این انقلاب در شهرهای اصلی روسیه همانند مسکو و سن پترزبورگ رویدادهای تاریخی برجسته‌ای رخ داد . انقلاب در مناطق روستایی و رعیتی نیز پا به پای مناطق شهری در حال پیشروی بود و دهقانان زمین‌ها را تصرف کرده و در حال بازتوزیع آن در میان خود بودند .' } ``` ### Data Fields - `id`: Article id - `link`: Article link - `title`: Title of the article - `article`: Full text content in the article - `highlights`: Summary of the article ### Data Splits | Train | Test | Validation | |-------------|-------------|-------------| | 45,654 | 5,638 | 5,074 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process No annotations. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by Mehrdad Farahani. ### Licensing Information [Apache License 2.0](https://github.com/m3hrdadfi/wiki-summary/blob/master/LICENSE) ### Citation Information ``` @misc{Bert2BertWikiSummaryPersian, author = {Mehrdad Farahani}, title = {Summarization using Bert2Bert model on WikiSummary dataset}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/wiki-summary}, } ``` ### Contributions Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset.
true
# Dataset Card for 20_Newsgroups_Fixed ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Galileo Homepage:** [Galileo ML Data Intelligence Platform](https://www.rungalileo.io) - **Repository:** [Needs More Information] - **Dataset Blog:** [Improving Your ML Datasets With Galileo, Part 1](https://www.rungalileo.io/blog/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] - **Sklearn Dataset:** [sklearn](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#the-20-newsgroups-text-dataset) - **20 Newsgroups Homepage:** [newsgroups homepage](http://qwone.com/~jason/20Newsgroups/) ### Dataset Summary This dataset is a version of the [**20 Newsgroups**](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#the-20-newsgroups-text-dataset) dataset fixed with the help of the [**Galileo ML Data Intelligence Platform**](https://www.rungalileo.io/). In a matter of minutes, Galileo enabled us to uncover and fix a multitude of errors within the original dataset. In the end, we present this improved dataset as a new standard for natural language experimentation and benchmarking using the Newsgroups dataset. ### Curation Rationale This dataset was created to showcase the power of Galileo as a Data Intelligence Platform. Through Galileo, we identify critical error patterns within the original Newsgroups training dataset - garbage data that do not properly fit any newsgroup label category. Moreover, we observe that these errors permeate throughout the test dataset. As a result of our analysis, we propose the addition of a new class to properly categorize and fix the labeling of garbage data samples: a "None" class. Galileo further enables us to quickly make these data sample changes within the training set (changing garbage data labels to None) and helps guide human re-annotation of the test set. #### Total Dataset Errors Fixed: 1163 *(6.5% of the dataset)* |Errors / Split. |Overall| Train| Test| |---------------------|------:|---------:|---------:| |Garbage samples fixed| 718| 396| 322| |Empty samples fixed | 445| 254| 254| |Total samples fixed | 1163| 650| 650| To learn more about the process of fixing this dataset, please refer to our [**Blog**](https://www.rungalileo.io/blog). ## Dataset Structure ### Data Instances For each data sample, there is the text of the newsgroup post, the corresponding newsgroup forum where the message was posted (label), and a data sample id. An example from the dataset looks as follows: ``` {'id': 1, 'text': 'I have win 3.0 and downloaded several icons and BMP\'s but I can\'t figure out\nhow to change the "wallpaper" or use the icons. Any help would be appreciated.\n\n\nThanx,\n\n-Brando' 'label': comp.os.ms-windows.misc} ``` ### Data Fields - id: the unique numerical id associated with a data sample - text: a string containing the text of the newsgroups message - label: a string indicating the newsgroup forum where the sample was posted ### Data Splits The data is split into a training and test split. To reduce bias and test generalizability across time, data samples are split between train and test depending upon whether their message was posted before or after a specific date, respectively. ### Data Classes The fixed data is organized into 20 newsgroup topics + a catch all "None" class. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g misc.forsale / soc.religion.christian). Here is a list of the 21 classes, partitioned according to subject matter: | comp.graphics<br>comp.os.ms-windows.misc<br>comp.sys.ibm.pc.hardware<br>comp.sys.mac.hardware<br>comp.windows.x | rec.autos<br>rec.motorcycles<br>rec.sport.baseball<br>rec.sport.hockey | sci.crypt<br><sci.electronics<br>sci.med<br>sci.space | |:---|:---:|---:| | misc.forsale | talk.politics.misc<br>talk.politics.guns<br>talk.politics.mideast | talk.religion.misc<br>alt.atheism<br>soc.religion.christian | | None |
false
# Dataset Card for xP3 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/bigscience-workshop/xmtf - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co) ### Dataset Summary > xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. - **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility. - **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3)) - **xP3 Dataset Family:** <table> <tr> <th>Name</th> <th>Explanation</th> <th>Example models</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t> <td>Mixture of 17 tasks in 277 languages with English prompts</td> <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t> <td>Mixture of 13 training tasks in 46 languages with English prompts</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t> <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t> <td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> <td></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t> <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t> <td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> </tr> </table> ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?", "targets": "Yes" } ``` ### Data Fields The data fields are the same among all splits: - `inputs`: the natural language input fed to the model - `targets`: the natural language target that the model has to generate ### Data Splits The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. Adding a new language is very simple, you can take [this script adding Russian](https://huggingface.co/datasets/bs-la/xP3ru/blob/main/xp3_ru.py) as an example. |Language|Kilobytes|%|Samples|%| |--------|------:|-:|---:|-:| |tw|106288|0.11|265071|0.34| |bm|107056|0.11|265180|0.34| |ak|108096|0.11|265071|0.34| |eu|108112|0.11|269973|0.34| |ca|110608|0.12|271191|0.34| |fon|113072|0.12|265063|0.34| |st|114080|0.12|265063|0.34| |ki|115040|0.12|265180|0.34| |tum|116032|0.12|265063|0.34| |wo|122560|0.13|365063|0.46| |ln|126304|0.13|365060|0.46| |as|156256|0.16|265063|0.34| |or|161472|0.17|265063|0.34| |kn|165456|0.17|265063|0.34| |ml|175040|0.18|265864|0.34| |rn|192992|0.2|318189|0.4| |nso|229712|0.24|915051|1.16| |tn|235536|0.25|915054|1.16| |lg|235936|0.25|915021|1.16| |rw|249360|0.26|915043|1.16| |ts|250256|0.26|915044|1.16| |sn|252496|0.27|865056|1.1| |xh|254672|0.27|915058|1.16| |zu|263712|0.28|915061|1.16| |ny|272128|0.29|915063|1.16| |ig|325232|0.34|950097|1.2| |yo|352784|0.37|918416|1.16| |ne|393680|0.41|315754|0.4| |pa|523248|0.55|339210|0.43| |gu|560688|0.59|347499|0.44| |sw|560896|0.59|1114455|1.41| |mr|666240|0.7|417269|0.53| |bn|832720|0.88|428843|0.54| |ta|924496|0.97|410633|0.52| |te|1332912|1.4|573364|0.73| |ur|1918272|2.02|855756|1.08| |vi|3101408|3.27|1667306|2.11| |code|4330752|4.56|2707724|3.43| |hi|4393696|4.63|1543441|1.96| |zh|4589904|4.83|3560556|4.51| |id|4606288|4.85|2627392|3.33| |ar|4677264|4.93|2148955|2.72| |fr|5546688|5.84|5055942|6.41| |pt|6129584|6.46|3562772|4.52| |es|7571808|7.98|5151349|6.53| |en|37261104|39.25|31495184|39.93| |total|94941936|100.0|78883588|100.0| ## Dataset Creation ### Source Data #### Training datasets - Code Miscellaneous - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex) - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus) - [GreatCode](https://huggingface.co/datasets/great_code) - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes) - Closed-book QA - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa) - [Trivia QA](https://huggingface.co/datasets/trivia_qa) - [Web Questions](https://huggingface.co/datasets/web_questions) - [Wiki QA](https://huggingface.co/datasets/wiki_qa) - Extractive QA - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa) - [CMRC2018](https://huggingface.co/datasets/cmrc2018) - [DRCD](https://huggingface.co/datasets/clue) - [DuoRC](https://huggingface.co/datasets/duorc) - [MLQA](https://huggingface.co/datasets/mlqa) - [Quoref](https://huggingface.co/datasets/quoref) - [ReCoRD](https://huggingface.co/datasets/super_glue) - [ROPES](https://huggingface.co/datasets/ropes) - [SQuAD v2](https://huggingface.co/datasets/squad_v2) - [xQuAD](https://huggingface.co/datasets/xquad) - TyDI QA - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary) - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp) - Multiple-Choice QA - [ARC](https://huggingface.co/datasets/ai2_arc) - [C3](https://huggingface.co/datasets/c3) - [CoS-E](https://huggingface.co/datasets/cos_e) - [Cosmos](https://huggingface.co/datasets/cosmos) - [DREAM](https://huggingface.co/datasets/dream) - [MultiRC](https://huggingface.co/datasets/super_glue) - [OpenBookQA](https://huggingface.co/datasets/openbookqa) - [PiQA](https://huggingface.co/datasets/piqa) - [QUAIL](https://huggingface.co/datasets/quail) - [QuaRel](https://huggingface.co/datasets/quarel) - [QuaRTz](https://huggingface.co/datasets/quartz) - [QASC](https://huggingface.co/datasets/qasc) - [RACE](https://huggingface.co/datasets/race) - [SciQ](https://huggingface.co/datasets/sciq) - [Social IQA](https://huggingface.co/datasets/social_i_qa) - [Wiki Hop](https://huggingface.co/datasets/wiki_hop) - [WiQA](https://huggingface.co/datasets/wiqa) - Paraphrase Identification - [MRPC](https://huggingface.co/datasets/super_glue) - [PAWS](https://huggingface.co/datasets/paws) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [QQP](https://huggingface.co/datasets/qqp) - Program Synthesis - [APPS](https://huggingface.co/datasets/codeparrot/apps) - [CodeContests](https://huggingface.co/datasets/teven/code_contests) - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp) - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search) - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) - Structure-to-text - [Common Gen](https://huggingface.co/datasets/common_gen) - [Wiki Bio](https://huggingface.co/datasets/wiki_bio) - Sentiment - [Amazon](https://huggingface.co/datasets/amazon_polarity) - [App Reviews](https://huggingface.co/datasets/app_reviews) - [IMDB](https://huggingface.co/datasets/imdb) - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes) - [Yelp](https://huggingface.co/datasets/yelp_review_full) - Simplification - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT) - Summarization - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) - [Gigaword](https://huggingface.co/datasets/gigaword) - [MultiNews](https://huggingface.co/datasets/multi_news) - [SamSum](https://huggingface.co/datasets/samsum) - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua) - [XLSum](https://huggingface.co/datasets/GEM/xlsum) - [XSum](https://huggingface.co/datasets/xsum) - Topic Classification - [AG News](https://huggingface.co/datasets/ag_news) - [DBPedia](https://huggingface.co/datasets/dbpedia_14) - [TNEWS](https://huggingface.co/datasets/clue) - [TREC](https://huggingface.co/datasets/trec) - [CSL](https://huggingface.co/datasets/clue) - Translation - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200) - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) - Word Sense disambiguation - [WiC](https://huggingface.co/datasets/super_glue) - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic) #### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI datasets & HumanEval) - Natural Language Inference (NLI) - [ANLI](https://huggingface.co/datasets/anli) - [CB](https://huggingface.co/datasets/super_glue) - [RTE](https://huggingface.co/datasets/super_glue) - [XNLI](https://huggingface.co/datasets/xnli) - Coreference Resolution - [Winogrande](https://huggingface.co/datasets/winogrande) - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd) - Program Synthesis - [HumanEval](https://huggingface.co/datasets/openai_humaneval) - Sentence Completion - [COPA](https://huggingface.co/datasets/super_glue) - [Story Cloze](https://huggingface.co/datasets/story_cloze) - [XCOPA](https://huggingface.co/datasets/xcopa) - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze) ## Additional Information ### Licensing Information The dataset is released under Apache 2.0. ### Citation Information ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ``` ### Contributions Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
false
This repo is the unofficial FeTA-QA dataset from paper [FeTaQA: Free-form Table Question Answering](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00446/109273/FeTaQA-Free-form-Table-Question-Answering). The original purpose to make it easier for users to download and use dataset. All the data is publicly avaliable on [their offical Github site](https://github.com/Yale-LILY/FeTaQA) If there is anything wrong, please raise an issue in the community and I will fix it if I am available.
false
# Dataset Card for "tner/tweetner7" ## Dataset Description - **Repository:** [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper) - **Paper:** [https://arxiv.org/abs/2210.03797](https://arxiv.org/abs/2210.03797) - **Dataset:** TweetNER7 - **Domain:** Twitter - **Number of Entity:** 7 ### Dataset Summary This is the official repository of TweetNER7 (["Named Entity Recognition in Twitter: A Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022"](https://arxiv.org/abs/2210.03797)), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021. The tweet collection used in TweetNER7 is same as what used in [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too. - Entity Types: `corperation`, `creative_work`, `event`, `group`, `location`, `product`, `person` ### Preprocessing We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`. For verified usernames, we replace its display name (or account name) with symbols `{@}`. For example, a tweet ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek ``` is transformed into the following text. ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}} ``` A simple function to format tweet follows below. ```python import re from urlextract import URLExtract extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek""" target_format = format_tweet(target) print(target_format) 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}' ``` We ask annotators to ignore those special tokens but label the verified users' mentions. ### Data Split | split | number of instances | description | |:------------------|------:|------:| | train_2020 | 4616 | training dataset from September 2019 to August 2020 | | train_2021 | 2495 | training dataset from September 2020 to August 2021 | | train_all | 7111 | combined training dataset of `train_2020` and `train_2021` | | validation_2020 | 576 | validation dataset from September 2019 to August 2020 | | validation_2021 | 310 | validation dataset from September 2020 to August 2021 | | test_2020 | 576 | test dataset from September 2019 to August 2020 | | test_2021 | 2807 | test dataset from September 2020 to August 2021 | | train_random | 4616 | randomly sampled training dataset with the same size as `train_2020` from `train_all` | | validation_random | 576 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` | | extra_2020 | 87880 | extra tweet without annotations from September 2019 to August 2020 | | extra_2021 | 93594 | extra tweet without annotations from September 2020 to August 2021 | For the temporal-shift setting, model should be trained on `train_2020` with `validation_2020` and evaluate on `test_2021`. In general, model would be trained on `train_all`, the most representative training set with `validation_2021` and evaluate on `test_2021`. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'], 'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14], 'id': '1183344337016381440', 'date': '2019-10-13' } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweetner7/raw/main/dataset/label.json). ```python { "B-corporation": 0, "B-creative_work": 1, "B-event": 2, "B-group": 3, "B-location": 4, "B-person": 5, "B-product": 6, "I-corporation": 7, "I-creative_work": 8, "I-event": 9, "I-group": 10, "I-location": 11, "I-person": 12, "I-product": 13, "O": 14 } ``` ## Models See full evaluation metrics [here](https://github.com/asahi417/tner/blob/master/MODEL_CARD.md#models-for-tweetner7). ### Main Models | Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | |:--------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:| | [`tner/roberta-large-tweetner7-all`](https://huggingface.co/tner/roberta-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.75 | 61.25 | | [`tner/roberta-base-tweetner7-all`](https://huggingface.co/tner/roberta-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.16 | 60.81 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.68 | 61 | | [`tner/twitter-roberta-base-dec2020-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.26 | 60.7 | | [`tner/bertweet-large-tweetner7-all`](https://huggingface.co/tner/bertweet-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 66.46 | 61.87 | | [`tner/bertweet-base-tweetner7-all`](https://huggingface.co/tner/bertweet-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.36 | 60.52 | | [`tner/bert-large-tweetner7-all`](https://huggingface.co/tner/bert-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.58 | 59 | | [`tner/bert-base-tweetner7-all`](https://huggingface.co/tner/bert-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 62.3 | 57.59 | | [`tner/roberta-large-tweetner7-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.02 | 60.9 | | [`tner/roberta-base-tweetner7-continuous`](https://huggingface.co/tner/roberta-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.47 | 60.01 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.87 | 61.07 | | [`tner/twitter-roberta-base-dec2020-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.51 | 60.57 | | [`tner/bertweet-large-tweetner7-continuous`](https://huggingface.co/tner/bertweet-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 66.41 | 61.66 | | [`tner/bertweet-base-tweetner7-continuous`](https://huggingface.co/tner/bertweet-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.84 | 61.02 | | [`tner/bert-large-tweetner7-continuous`](https://huggingface.co/tner/bert-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.2 | 57.67 | | [`tner/roberta-large-tweetner7-2021`](https://huggingface.co/tner/roberta-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.05 | 59.11 | | [`tner/roberta-base-tweetner7-2021`](https://huggingface.co/tner/roberta-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 61.76 | 57 | | [`tner/twitter-roberta-base-dec2020-tweetner7-2021`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 63.98 | 58.91 | | [`tner/bertweet-large-tweetner7-2021`](https://huggingface.co/tner/bertweet-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 62.9 | 58.13 | | [`tner/bertweet-base-tweetner7-2021`](https://huggingface.co/tner/bertweet-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 63.09 | 57.35 | | [`tner/bert-large-tweetner7-2021`](https://huggingface.co/tner/bert-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 59.75 | 53.93 | | [`tner/bert-base-tweetner7-2021`](https://huggingface.co/tner/bert-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.67 | 55.5 | | [`tner/roberta-large-tweetner7-2020`](https://huggingface.co/tner/roberta-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.76 | 60 | | [`tner/roberta-base-tweetner7-2020`](https://huggingface.co/tner/roberta-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.21 | 59.11 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 64.28 | 59.31 | | [`tner/twitter-roberta-base-dec2020-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 62.87 | 58.26 | | [`tner/bertweet-large-tweetner7-2020`](https://huggingface.co/tner/bertweet-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 64.01 | 59.47 | | [`tner/bertweet-base-tweetner7-2020`](https://huggingface.co/tner/bertweet-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 64.06 | 59.44 | | [`tner/bert-large-tweetner7-2020`](https://huggingface.co/tner/bert-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 61.43 | 56.14 | | [`tner/bert-base-tweetner7-2020`](https://huggingface.co/tner/bert-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.09 | 54.67 | Model description follows below. * Model with suffix `-all`: Model fine-tuned on `train_all` and validated on `validation_2021`. * Model with suffix `-continuous`: Model fine-tuned on `train_2021` continuously after fine-tuning on `train_2020` and validated on `validation_2021`. * Model with suffix `-2021`: Model fine-tuned only on `train_2021` and validated on `validation_2021`. * Model with suffix `-2020`: Model fine-tuned only on `train_2021` and validated on `validation_2020`. ### Sub Models (used in ablation study) - Model fine-tuned only on `train_random` and validated on `validation_2020`. | Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | |:------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:| | [`tner/roberta-large-tweetner7-random`](https://huggingface.co/tner/roberta-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.33 | 60.96 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 63.29 | 58.5 | | [`tner/roberta-base-tweetner7-random`](https://huggingface.co/tner/roberta-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.04 | 59.23 | | [`tner/twitter-roberta-base-dec2020-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 64.72 | 59.97 | | [`tner/bertweet-large-tweetner7-random`](https://huggingface.co/tner/bertweet-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 64.86 | 60.49 | | [`tner/bertweet-base-tweetner7-random`](https://huggingface.co/tner/bertweet-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.55 | 59.58 | | [`tner/bert-large-tweetner7-random`](https://huggingface.co/tner/bert-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 62.39 | 57.54 | | [`tner/bert-base-tweetner7-random`](https://huggingface.co/tner/bert-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.91 | 55.92 | - Model fine-tuned on the self-labeled dataset on `extra_{2020,2021}` and validated on `validation_2020`. | Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | |:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:--------------------------------------------------------|------------------:|------------------:| | [`tner/roberta-large-tweetner7-selflabel2020`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.56 | 59.63 | | [`tner/roberta-large-tweetner7-selflabel2021`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.6 | 59.45 | | [`tner/roberta-large-tweetner7-2020-selflabel2020-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2020-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.46 | 60.39 | | [`tner/roberta-large-tweetner7-2020-selflabel2021-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2021-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.52 | 59.45 | | [`tner/roberta-large-tweetner7-selflabel2020-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.15 | 60.23 | | [`tner/roberta-large-tweetner7-selflabel2021-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.48 | 59.41 | Model description follows below. * Model with suffix `-self2020`: Fine-tuning on the self-annotated data of `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). * Model with suffix `-self2021`: Fine-tuning on the self-annotated data of `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). * Model with suffix `-2020-self2020-all`: Fine-tuning on the self-annotated data of `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Combined training dataset of `extra_2020` and `train_2020`. * Model with suffix `-2020-self2021-all`: Fine-tuning on the self-annotated data of `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Combined training dataset of `extra_2021` and `train_2020`. * Model with suffix `-2020-self2020-continuous`: Fine-tuning on the self-annotated data of `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Fine-tuning on `train_2020` and continuing fine-tuning on `extra_2020`. * Model with suffix `-2020-self2021-continuous`: Fine-tuning on the self-annotated data of `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Fine-tuning on `train_2020` and continuing fine-tuning on `extra_2020`. ### Reproduce Experimental Result To reproduce the experimental result on our AACL paper, please see the repository [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper). ## Citation Information ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco. and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
false
# Dataset Card for ciempiess_test ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CIEMPIESS-UNAM Project](http://www.ciempiess.org/) - **Repository:** [CIEMPIESS-TEST is part of LDC2019S07](https://catalog.ldc.upenn.edu/LDC2019S07) - **Paper:** [Creating Mexican Spanish Language Resources through the Social Service Program](https://aclanthology.org/2022.nidcp-1.4.pdf) - **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org) ### Dataset Summary When developing automatic speech recognition engines and any other machine learning system is a good practice to separate the test from the training data and never combined them. So, the CIEMPIESS TEST Corpus was created by this necessity of having an standard test set destined to measure the advances of the community of users of the CIEMPIESS datasets and we strongly recommend not to use the CIEMPIESS TEST for any other purpose. The CIEMPIESS TEST Corpus is a gender balanced corpus designed to test acoustic models for the speech recognition task. It was created by recordings and human transcripts of 10 male and 10 female speakers. The CIEMPIESS TEST Corpus is considered a CIEMPIESS dataset because it only contains audio from the same source of the first [CIEMPIESS Corpus](https://catalog.ldc.upenn.edu/LDC2015S07) and it has the word "TEST" in its name, obviously because it is recommended for test purposes only. ### Example Usage The CIEMPIESS TEST contains only the test split: ```python from datasets import load_dataset ciempiess_test = load_dataset("ciempiess/ciempiess_test") ``` It is also valid to do: ```python from datasets import load_dataset ciempiess_test = load_dataset("ciempiess/ciempiess_test",split="test") ``` ### Supported Tasks automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). ### Languages The language of the corpus is Spanish with the accent of Central Mexico except for the speaker M_09 that comes from El Salvador. ## Dataset Structure ### Data Instances ```python { 'audio_id': 'CMPT_M_07_0074', 'audio': { 'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/86a30fdc762ba3fad1e38fbe6900ea4940d6f0070af8d56aa483701faa050d51/test/male/M_07/CMPT_M_07_0074.flac', 'array': array([-0.00192261, -0.00234985, -0.00158691, ..., -0.00839233, -0.00900269, -0.00698853], dtype=float32), 'sampling_rate': 16000 }, 'speaker_id': 'M_07', 'gender': 'male', 'duration': 7.510000228881836, 'normalized_text': 'pues está la libertá de las posiciones de a ver quién es pasivo quién es activo blablablá muchas cosas no pero' } ``` ### Data Fields * `audio_id` (string) - id of audio segment * `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally). * `speaker_id` (string) - id of speaker * `gender` (string) - gender of speaker (male or female) * `duration` (float32) - duration of the audio file in seconds. * `normalized_text` (string) - normalized audio segment transcription ### Data Splits The corpus counts just with the test split which has a total of 3558 speech files from 10 male speakers and 10 female speakers with a total duration of 8 hours and 8 minutes. ## Dataset Creation ### Curation Rationale The CIEMPIESS TEST (CT) Corpus has the following characteristics: * The CT has a total of 3558 audio files of 10 male speakers and 10 female speakers. It has a total duration of 8 hours and 8 minutes. * The total number of audio files that come from male speakers is 1694 with a total duration of 4 hours and 3 minutes. The total number of audio files that come from female speakers is 1864 with a total duration of 4 hours and 4 minutes. So CT is perfectly balanced in gender. * All of the speakers in the CT come from Mexico, except for the speaker M_09 that comes from El Salvador. * Every audio file in the CT has a duration between 5 and 10 seconds approximately. * Data in CT is classified by gender and also by speaker, so one can easily select audios from a particular set of speakers to do experiments. * Audio files in the CT and the first [CIEMPIESS](https://catalog.ldc.upenn.edu/LDC2015S07) are all of the same type. In both, speakers talk about legal and lawyer issues. They also talk about things related to the [UNAM University](https://www.unam.mx/) and the ["Facultad de Derecho de la UNAM"](https://www.derecho.unam.mx/). * As in the first CIEMPIESS Corpus, transcriptions in the CT were made by humans. * Speakers in the CT are not present in any other CIEMPIESS dataset. * Audio files in the CT are distributed in a 16khz@16bit mono format. ### Source Data #### Initial Data Collection and Normalization The CIEMPIESS TEST is a Radio Corpus designed to test acoustic models of automatic speech recognition and it is made out of recordings of spontaneous conversations in Spanish between a radio moderator and his guests. Most of the speech in these conversations has the accent of Central Mexico. All the recordings that constitute the CIEMPIESS TEST come from ["RADIO-IUS"](http://www.derecho.unam.mx/cultura-juridica/radio.php), a radio station belonging to UNAM. Recordings were donated by Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the "Facultad de Derecho de la UNAM" with the condition that they have to be used for academic and research purposes only. ### Annotations #### Annotation process The annotation process is at follows: * 1. A whole podcast is manually segmented keeping just the portions containing good quality speech. * 2. A second pass os segmentation is performed; this time to separate speakers and put them in different folders. * 3. The resulting speech files between 5 and 10 seconds are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers but not with a particular training as transcribers. #### Who are the annotators? The CIEMPIESS TEST Corpus was created by the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) of the ["Facultad de Ingeniería"](https://www.ingenieria.unam.mx/) (FI) in the ["Universidad Nacional Autónoma de México"](https://www.unam.mx/) (UNAM) between 2016 and 2018 by Carlos Daniel Hernández Mena, head of the program. ### Personal and Sensitive Information The dataset could contain names revealing the identity of some speakers; on the other side, the recordings come from publicly available podcasts, so, there is not a real intent of the participants to be anonymized. Anyway, you agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is challenging because it contains spontaneous speech; so, it will be helpful for the ASR community to evaluate their acoustic models in Spanish with it. ### Discussion of Biases The dataset intents to be gender balanced. It is comprised of 10 male speakers and 10 female speakers. On the other hand the vocabulary is limited to legal issues. ### Other Known Limitations The transcriptions in this dataset were revised by Mónica Alejandra Ruiz López during 2022 and they are slightly different from the transcriptions found at [LDC](https://catalog.ldc.upenn.edu/LDC2019S07) or at the [CIEMPIESS-UNAM Project](http://www.ciempiess.org/) official website. We strongly recommend to use these updated transcriptions; we will soon update the transcriptions in the rest of the repositories. ### Dataset Curators The dataset was collected by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html), it was curated by Carlos Daniel Hernández Mena and its transcriptions were manually verified by Mónica Alejandra Ruiz López during 2022. ### Licensing Information [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information ``` @misc{carlosmenaciempiesstest2019, title={CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.}, ldc_catalog_no={LDC2019S07}, DOI={https://doi.org/10.35111/xdx5-n815}, author={Hernandez Mena, Carlos Daniel}, journal={Linguistic Data Consortium, Philadelphia}, year={2019}, url={https://catalog.ldc.upenn.edu/LDC2019S07}, } ``` ### Contributions The authors want to thank to Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support to the social service program: "Desarrollo de Tecnologías del Habla." We also thank to the social service students for all the hard work. We also thank to Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the "Facultad de Derecho de la UNAM" for donating all the recordings that constitute the CIEMPIESS TEST Corpus. Special thanks to Mónica Alejandra Ruiz López who performed a meticulous verification of the transcriptions of this dataset during 2022.
false
# Dataset Card for MKQA: Multilingual Knowledge Questions & Answers ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - [**Homepage:**](https://github.com/apple/ml-mkqa/) - [**Paper:**](https://arxiv.org/abs/2007.15207) ### Dataset Summary MKQA contains 10,000 queries sampled from the [Google Natural Questions dataset](https://github.com/google-research-datasets/natural-questions). For each query we collect new passage-independent answers. These queries and answers are then human translated into 25 Non-English languages. ### Supported Tasks and Leaderboards `question-answering` ### Languages | Language code | Language name | |---------------|---------------| | `ar` | `Arabic` | | `da` | `Danish` | | `de` | `German` | | `en` | `English` | | `es` | `Spanish` | | `fi` | `Finnish` | | `fr` | `French` | | `he` | `Hebrew` | | `hu` | `Hungarian` | | `it` | `Italian` | | `ja` | `Japanese` | | `ko` | `Korean` | | `km` | `Khmer` | | `ms` | `Malay` | | `nl` | `Dutch` | | `no` | `Norwegian` | | `pl` | `Polish` | | `pt` | `Portuguese` | | `ru` | `Russian` | | `sv` | `Swedish` | | `th` | `Thai` | | `tr` | `Turkish` | | `vi` | `Vietnamese` | | `zh_cn` | `Chinese (Simplified)` | | `zh_hk` | `Chinese (Hong kong)` | | `zh_tw` | `Chinese (Traditional)` | ## Dataset Structure ### Data Instances An example from the data set looks as follows: ``` { 'example_id': 563260143484355911, 'queries': { 'en': "who sings i hear you knocking but you can't come in", 'ru': "кто поет i hear you knocking but you can't come in", 'ja': '「 I hear you knocking」は誰が歌っていますか', 'zh_cn': "《i hear you knocking but you can't come in》是谁演唱的", ... }, 'query': "who sings i hear you knocking but you can't come in", 'answers': {'en': [{'type': 'entity', 'entity': 'Q545186', 'text': 'Dave Edmunds', 'aliases': []}], 'ru': [{'type': 'entity', 'entity': 'Q545186', 'text': 'Эдмундс, Дэйв', 'aliases': ['Эдмундс', 'Дэйв Эдмундс', 'Эдмундс Дэйв', 'Dave Edmunds']}], 'ja': [{'type': 'entity', 'entity': 'Q545186', 'text': 'デイヴ・エドモンズ', 'aliases': ['デーブ・エドモンズ', 'デイブ・エドモンズ']}], 'zh_cn': [{'type': 'entity', 'text': '戴维·埃德蒙兹 ', 'entity': 'Q545186'}], ... }, } ``` ### Data Fields Each example in the dataset contains the unique Natural Questions `example_id`, the original English `query`, and then `queries` and `answers` in 26 languages. Each answer is labelled with an answer type. The breakdown is: | Answer Type | Occurrence | |---------------|---------------| | `entity` | `4221` | | `long_answer` | `1815` | | `unanswerable` | `1427` | | `date` | `1174` | | `number` | `485` | | `number_with_unit` | `394` | | `short_phrase` | `346` | | `binary` | `138` | For each language, there can be more than one acceptable textual answer, in order to capture a variety of possible valid answers. Detailed explanation of fields taken from [here](https://github.com/apple/ml-mkqa/#dataset) when `entity` field is not available it is set to an empty string ''. when `aliases` field is not available it is set to an empty list []. ### Data Splits - Train: 10000 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [Google Natural Questions dataset](https://github.com/google-research-datasets/natural-questions) #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [CC BY-SA 3.0](https://github.com/apple/ml-mkqa#license) ### Citation Information ``` @misc{mkqa, title = {MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering}, author = {Shayne Longpre and Yi Lu and Joachim Daiber}, year = {2020}, URL = {https://arxiv.org/pdf/2007.15207.pdf} } ``` ### Contributions Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
false
# Dataset Card for NCSLGR ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.bu.edu/asllrp/ncslgr.html - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A small corpus of American Sign Language (ASL) video data from native signers, annotated with non-manual features. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - American Sign Language - English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - eaf: path to an ELAN annotation file - videos: sequence of strings to video paths - sentences: sequence of parallel sentences - gloss: American Sign Language gloss annotations - text: English text ### Data Splits None ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @misc{dataset:databases2007volumes, title={Volumes 2--7}, author={Databases, NCSLGR}, year={2007}, publisher={American Sign Language Linguistic Research Project (Distributed on CD-ROM~…} } ``` ### Contributions Thanks to [@AmitMY](https://github.com/AmitMY) for adding this dataset.
false
# Dataset Card for Taskmaster-1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Taskmaster-1](https://research.google/tools/datasets/taskmaster-1/) - **Repository:** [GitHub](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019) - **Paper:** [Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset](https://arxiv.org/abs/1909.05358) - **Leaderboard:** N/A - **Point of Contact:** [Taskmaster Googlegroup](taskmaster-datasets@googlegroups.com) ### Dataset Summary Taskmaster-1 is a goal-oriented conversational dataset. It includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task while the second is "self-dialog" in which crowdsourced workers write the entire dialog themselves. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English language. ## Dataset Structure ### Data Instances A typical example looks like this ``` { "conversation_id":"dlg-336c8165-068e-4b4b-803d-18ef0676f668", "instruction_id":"restaurant-table-2", "utterances":[ { "index":0, "segments":[ ], "speaker":"USER", "text":"Hi, I'm looking for a place that sells spicy wet hotdogs, can you think of any?" }, { "index":1, "segments":[ { "annotations":[ { "name":"restaurant_reservation.name.restaurant.reject" } ], "end_index":37, "start_index":16, "text":"Spicy Wet Hotdogs LLC" } ], "speaker":"ASSISTANT", "text":"You might enjoy Spicy Wet Hotdogs LLC." }, { "index":2, "segments":[ ], "speaker":"USER", "text":"That sounds really good, can you make me a reservation?" }, { "index":3, "segments":[ ], "speaker":"ASSISTANT", "text":"Certainly, when would you like a reservation?" }, { "index":4, "segments":[ { "annotations":[ { "name":"restaurant_reservation.num.guests" }, { "name":"restaurant_reservation.num.guests" } ], "end_index":20, "start_index":18, "text":"50" } ], "speaker":"USER", "text":"I have a party of 50 who want a really sloppy dog on Saturday at noon." } ] } ``` ### Data Fields Each conversation in the data file has the following structure: - `conversation_id`: A universally unique identifier with the prefix 'dlg-'. The ID has no meaning. - `utterances`: A list of utterances that make up the conversation. - `instruction_id`: A reference to the file(s) containing the user (and, if applicable, agent) instructions for this conversation. Each utterance has the following fields: - `index`: A 0-based index indicating the order of the utterances in the conversation. - `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance. - `text`: The raw text of the utterance. In case of self dialogs (one_person_dialogs), this is written by the crowdsourced worker. In case of the WOz dialogs, 'ASSISTANT' turns are written and 'USER' turns are transcribed from the spoken recordings of crowdsourced workers. - `segments`: A list of various text spans with semantic annotations. Each segment has the following fields: - `start_index`: The position of the start of the annotation in the utterance text. - `end_index`: The position of the end of the annotation in the utterance text. - `text`: The raw text that has been annotated. - `annotations`: A list of annotation details for this segment. Each annotation has a single field: - `name`: The annotation name. ### Data Splits - one_person_dialogs The data in `one_person_dialogs` config is split into `train`, `dev` and `test` splits. | | train | validation | test | |--------------|-------:|------------:|------:| | N. Instances | 6168 | 770 | 770 | - woz_dialogs The data in `woz_dialogs` config has no default splits. | | train | |--------------|-------:| | N. Instances | 5507 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under `Creative Commons Attribution 4.0 License` ### Citation Information [More Information Needed] ``` @inproceedings{48484, title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset}, author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik}, year = {2019} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
false
# Self-instruct-starcoder ## Table of Contents - [Summary](#summary) - [Our approach](#our-approach) - [Dataset generation](#dataset-generation) - [Dataset quality](#dataset-quality) - [Post-processing](#post-processing) - [Self-consistency](#self-consistency) - [Uniqueness](#uniqueness) - [Compile](#compile) - [Dataset structure](#dataset-structure) - [Space](#space) ## Summary Self-instruct-starcoder is a dataset that was generated by prompting starcoder to generate new instructions based on some human-written seed instructions. The underlying process is explained in the paper [self-instruct](https://arxiv.org/abs/2212.10560). This algorithm gave birth to famous machine generated datasets such as [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Code Alpaca](https://github.com/sahil280114/codealpaca) which are two datasets obtained by prompting OpenAI `text-davinci-003` engine. ## Our approach While our method is similar to self-instruct and stanford alpaca, we included some relevant modifications to the pipeline to account for what we wanted. - Rather than using `text-davinci-003`, we chose to prompt [StarCoder](https://arxiv.org/abs/2305.06161) which is a 10x smaller LLM developed for code use cases. However, it is possible to use any decoder based LLM on the hub. - We changed our seed tasks in order to have the model generate code related tasks. We completed the seed tasks from code alpaca with 20 additional algorithm instructions. - We switched from the generation format `"instruction":` - `"input":` - `"output":` to the format `"instruction":` - `"output":` by concatenating each instruction and its input under the keyword `instruction`. We did so because the previous prompting format tended to make the model generate test cases as input and their solution as output, which is not what we wanted. - Finally, we incorporated the possibility to change the trigger word in the prompt. We thus replaced the `"instruction" :` keyword by `"Here is the correct solution to the problem ":` which resulted into much better generated instructions. ## Dataset generation The generation of the dataset was time consuming and we chose our parameters to limit the computational burden of our method. - Number of examples in context : 4 - 2 seed instructions - 2 machine generated instructions - Number of instructions to generate : 5000 - Stop words used in the generation : ["\n20", "20.", "20 ."] - Similarity threshold for rouge score : 0.7 ## Dataset quality StarCoder, while being a great model is not as capable as `text-davinci-003`. In the generation, the model quickly reach sort of a ceiling in terms of creativity. There are many instructions that are similar to each other, but it should not bother since they are not phrased the same. ## Post-processing Post-processing is an important part of the pipeline since it improves the quality of the dataset despite the fact that it implies getting rid of some examples. First we need to identify what we want to avoid : - A generated solution which does not answer to the corresponding instruction - An instruction that is too similar to another one. ### Self-consistency We imagined a process that we named **self-consistency**. The idea is to reverse-prompt the model to see if it can generate a sound instruction that corresponds to the solution (output) it is prompted with. This is a particularly difficult few-shot task, and unfortunately StarCoder does not perform incredibly well on it. With a few-shot parameters of `4` (all being seed tasks), the model is able to recover 1135 instructions out of 5003, which amount for 22.6% of the raw dataset. Fortunately, the inability for starcoder to generate instructions for some solutions does not mean we should get rid of them. For the solutions (outputs) with generated instructions, we can compare these with the ground truth. For that we can use [Sentence-BERT](https://arxiv.org/abs/1908.10084) because the comparison should focus the meaning rather than the word to word similarity ratio. We have about 771 instructions (~68%) with a similarity score >= 0.5 with their ground truth. These can be seen as high quality examples, they form the `curated` set. <p align="center"> <img src="https://huggingface.co/datasets/codeparrot/self-instruct-starcoder/resolve/main/output.png" alt="drawing" width="300", height="300"/> </p> ### Uniqueness Another approach that can be used to clean the raw dataset is to focus on distinct instructions. For a given instruction, we go through all the instructions generated before it to see if there is one with a similarity score >= 0.5. If it is the case, we remove that instruction. This process removes about 94% of the raw dataset, the remaining instructions form the `unique` set. ### Compile We also decided to build a set which contains solely the example featuring a code written in python 3 which does not code a compilation error. ## Dataset structure ```python from datasets import load_dataset dataset = load_dataset("codeparrot/self-instruct-starcoder") DatasetDict({ compile: Dataset({ features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'], num_rows: 3549 }) curated: Dataset({ features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'], num_rows: 771 }) raw: Dataset({ features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'], num_rows: 5003 }) unique: Dataset({ features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'], num_rows: 308 }) })) ``` |Field|Type|Description| |---|---|---| |instruction|string|Instruction| |output|string|Answer to the instruction| |most_similar|string|Dictionnary containing the 10 most similar instructions generated before the current instruction along with the similarity scores| |avg_similarity_score|float64| Average similarity score| ## Additional resources - [Space(self-instruct-starcoder)](https://huggingface.co/spaces/codeparrot/self-instruct-starcoder) - [Github Repository](https://github.com/ArmelRandy/Self-instruct) ## Citation ``` @misc{title={Self-Instruct-StarCoder}, author={Zebaze, Armel Randy}, doi={https://doi.org/10.57967/hf/0790}, } ```
false
# Dataset Card for "compguesswhat" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://compguesswhat.github.io/](https://compguesswhat.github.io/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 112.05 MB - **Size of the generated dataset:** 271.11 MB - **Total amount of disk used:** 383.16 MB ### Dataset Summary CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: https://compguesswhat.github.io ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### compguesswhat-original - **Size of downloaded dataset files:** 107.21 MB - **Size of the generated dataset:** 174.37 MB - **Total amount of disk used:** 281.57 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "id": 2424, "image": "{\"coco_url\": \"http://mscoco.org/images/270512\", \"file_name\": \"COCO_train2014_000000270512.jpg\", \"flickr_url\": \"http://farm6.stat...", "objects": "{\"area\": [1723.5133056640625, 4838.5361328125, 287.44476318359375, 44918.7109375, 3688.09375, 522.1935424804688], \"bbox\": [[5.61...", "qas": { "answer": ["Yes", "No", "No", "Yes"], "id": [4983, 4996, 5006, 5017], "question": ["Is it in the foreground?", "Does it have wings?", "Is it a person?", "Is it a vehicle?"] }, "status": "success", "target_id": 1197044, "timestamp": "2016-07-08 15:07:38" } ``` #### compguesswhat-zero_shot - **Size of downloaded dataset files:** 4.84 MB - **Size of the generated dataset:** 96.74 MB - **Total amount of disk used:** 101.59 MB An example of 'nd_valid' looks as follows. ``` This example was too long and was cropped: { "id": 0, "image": { "coco_url": "https://s3.amazonaws.com/nocaps/val/004e21eb2e686f40.jpg", "date_captured": "2018-11-06 11:04:33", "file_name": "004e21eb2e686f40.jpg", "height": 1024, "id": 6, "license": 0, "open_images_id": "004e21eb2e686f40", "width": 768 }, "objects": "{\"IsOccluded\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"IsTruncated\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"area\": [3...", "status": "incomplete", "target_id": "004e21eb2e686f40_30" } ``` ### Data Fields The data fields are the same among all splits. #### compguesswhat-original - `id`: a `int32` feature. - `target_id`: a `int32` feature. - `timestamp`: a `string` feature. - `status`: a `string` feature. - `id`: a `int32` feature. - `file_name`: a `string` feature. - `flickr_url`: a `string` feature. - `coco_url`: a `string` feature. - `height`: a `int32` feature. - `width`: a `int32` feature. - `width`: a `int32` feature. - `height`: a `int32` feature. - `url`: a `string` feature. - `coco_id`: a `int32` feature. - `flickr_id`: a `string` feature. - `image_id`: a `string` feature. - `qas`: a dictionary feature containing: - `question`: a `string` feature. - `answer`: a `string` feature. - `id`: a `int32` feature. - `objects`: a dictionary feature containing: - `id`: a `int32` feature. - `bbox`: a `list` of `float32` features. - `category`: a `string` feature. - `area`: a `float32` feature. - `category_id`: a `int32` feature. - `segment`: a dictionary feature containing: - `feature`: a `float32` feature. #### compguesswhat-zero_shot - `id`: a `int32` feature. - `target_id`: a `string` feature. - `status`: a `string` feature. - `id`: a `int32` feature. - `file_name`: a `string` feature. - `coco_url`: a `string` feature. - `height`: a `int32` feature. - `width`: a `int32` feature. - `license`: a `int32` feature. - `open_images_id`: a `string` feature. - `date_captured`: a `string` feature. - `objects`: a dictionary feature containing: - `id`: a `string` feature. - `bbox`: a `list` of `float32` features. - `category`: a `string` feature. - `area`: a `float32` feature. - `category_id`: a `int32` feature. - `IsOccluded`: a `int32` feature. - `IsTruncated`: a `int32` feature. - `segment`: a dictionary feature containing: - `MaskPath`: a `string` feature. - `LabelName`: a `string` feature. - `BoxID`: a `string` feature. - `BoxXMin`: a `string` feature. - `BoxXMax`: a `string` feature. - `BoxYMin`: a `string` feature. - `BoxYMax`: a `string` feature. - `PredictedIoU`: a `string` feature. - `Clicks`: a `string` feature. ### Data Splits #### compguesswhat-original | |train|validation|test| |----------------------|----:|---------:|---:| |compguesswhat-original|46341| 9738|9621| #### compguesswhat-zero_shot | |nd_valid|od_valid|nd_test|od_test| |-----------------------|-------:|-------:|------:|------:| |compguesswhat-zero_shot| 5343| 5372| 13836| 13300| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{suglia2020compguesswhat, title={CompGuessWhat?!: a Multi-task Evaluation Framework for Grounded Language Learning}, author={Suglia, Alessandro, Konstas, Ioannis, Vanzo, Andrea, Bastianelli, Emanuele, Desmond Elliott, Stella Frank and Oliver Lemon}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@aleSuglia](https://github.com/aleSuglia), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
false
# Dataset Card for Pubmed Causal ## Dataset Description - **Paper:** https://aclanthology.org/D19-1473/ ### Dataset Summary This is the dataset used in the paper: Detecting Causal Language Use in Science Findings. ### Citation Information ``` @inproceedings{yu-etal-2019-detecting, title = "Detecting Causal Language Use in Science Findings", author = "Yu, Bei and Li, Yingya and Wang, Jun", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D19-1473", doi = "10.18653/v1/D19-1473", pages = "4664--4674", } ```
true
# Dataset Card for ItaCoLA ## Table of Contents - [Dataset Card for ItaCoLA](#dataset-card-for-itacola) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Acceptability Classification](#acceptability-classification) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Scores Configuration](#scores-configuration) - [Phenomena Configuration](#phenomena-configuration) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** [Github](https://github.com/dhfbk/ItaCoLA-dataset) - **Paper:** [Arxiv](http://ceur-ws.org/Vol-2765/paper169.pdf) - **Point of Contact:** [Daniela Trotta](dtrotta@unisa.it) ### Dataset Summary The Italian Corpus of Linguistic Acceptability includes almost 10k sentences taken from linguistic literature with a binary annotation made by the original authors themselves. The work is inspired by the English [Corpus of Linguistic Acceptability](https://nyu-mll.github.io/CoLA/). **Disclaimer**: *The ItaCoLA corpus is hosted on Github by the [Digital Humanities group at FBK](https://dh.fbk.eu/)*. It was introduced in the article [Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus](https://arxiv.org/abs/2109.12053) by [Daniela Trotta](https://dh.fbk.eu/author/daniela/), [Raffaele Guarasci](https://www.icar.cnr.it/persone/guarasci/), [Elisa Leonardelli](https://dh.fbk.eu/author/elisa/), [Sara Tonelli](https://dh.fbk.eu/author/sara/) ### Supported Tasks and Leaderboards #### Acceptability Classification The following table is taken from Table 4 of the original paper, where an LSTM and a BERT model pretrained on the Italian languages are fine-tuned on the `train` split of the corpus and evaluated respectively on the `test` split (*In-domain*, `in`) and on the acceptability portion of the [AcCompl-it] corpus (*Out-of-domain*, `out`). Models are evaluated with accuracy (*Acc.*) and Matthews Correlation Coefficient (*MCC*) in both settings. Results are averaged over 10 runs with ±stdev. error bounds. | | `in`, Acc.| `in`, MCC| `out`, Acc.|`out`, MCC| |---------:|-----------:|----------:|-----------:|---------:| |`LSTM` | 0.794 | 0.278 ± 0.029 | 0.605 | 0.147 ± 0.066 | |`ITA-BERT`| 0.904 | 0.603 ± 0.022 | 0.683 | 0.198 ± 0.036 | ### Languages The language data in ItaCoLA is in Italian (BCP-47 `it`) ## Dataset Structure ### Data Instances #### Scores Configuration The `scores` configuration contains sentences with acceptability judgments. An example from the `train` split of the `scores` config (default) is provided below. ```json { "unique_id": 1, "source": "Graffi_1994", "acceptability": 1, "sentence": "Quest'uomo mi ha colpito." } ``` The text is provided as-is, without further preprocessing or tokenization. The fields are the following: - `unique_id`: Unique identifier for the sentence across configurations. - `source`: Original source for the sentence. - `acceptability`: Binary score, 1 = acceptable, 0 = not acceptable. - `sentence`: The evaluated sentence. #### Phenomena Configuration The `phenomena` configuration contains a sample of sentences from `scores` that has been manually annotated to denote the presence of 9 linguistic phenomena. An example from the `train` split is provided below: ```json { "unique_id": 1, "source": "Graffi_1994", "acceptability": 1, "sentence": "Quest'uomo mi ha colpito.", "cleft_construction": 0, "copular_construction": 0, "subject_verb_agreement": 1, "wh_islands_violations": 0, "simple": 0, "question": 0, "auxiliary": 1, "bind": 0, "indefinite_pronouns": 0 } ``` For each one of the new fields, the value of the binary score denotes the presence (1) or the absence (0) of the respective phenomenon. Refer to the original paper for a detailed description of each phenomenon. ### Data Splits | config| train| test| |----------:|-----:|----:| |`scores` | 7801 | 975 | |`phenomena`| 2088 | - | ### Dataset Creation Please refer to the original article [Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus](https://arxiv.org/abs/2109.12053) for additional information on dataset creation. ## Additional Information ### Dataset Curators The authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com). ### Licensing Information No licensing information available. ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @inproceedings{trotta-etal-2021-monolingual-cross, title = "Monolingual and Cross-Lingual Acceptability Judgments with the {I}talian {C}o{LA} corpus", author = "Trotta, Daniela and Guarasci, Raffaele and Leonardelli, Elisa and Tonelli, Sara", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.250", doi = "10.18653/v1/2021.findings-emnlp.250", pages = "2929--2940" } ```
true
# Dataset Card for Persian News Summary (pn_summary) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/hooshvare/pn-summary/ - **Paper:** https://arxiv.org/abs/2012.11204 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [Mehrdad Farahani](mailto:m3hrdadfphi@gmail.com) ### Dataset Summary A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification. It is imperative to consider that the newlines were replaced with the `[n]` symbol. Please interpret them into normal newlines (for ex. `t.replace("[n]", "\n")`) and then use them for your purposes. ### Supported Tasks and Leaderboards The dataset is prepared for Abstractive/Extractive summarization tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification. ### Languages The dataset covers Persian mostly and somewhere a combination with English. ## Dataset Structure ### Data Instances A record consists of 8 features: ```python record = ['id','title', 'article', 'summary', 'category', 'categories', 'network', 'link'] ``` In the following, you can see an example of `pn_summmary`. ```json { "article": "به گزارش شانا، علی کاردر امروز (۲۷ دی ماه) در مراسم تودیع محسن قمصری، مدیر سابق امور بین الملل شرکت ملی نفت ایران و معارفه سعید خوشرو، مدیر جدید امور بین الملل این شرکت، گفت: مدیریت امور بین\u200eالملل به عنوان یکی از تاثیرگذارترین مدیریت\u200cهای شرکت ملی نفت ایران در دوران تحریم\u200cهای ظالمانه غرب علیه کشورمان بسیار هوشمندانه عمل کرد و ما توانستیم به خوبی از عهده تحریم\u200cها برآییم. [n] وی افزود: مجموعه امور بین الملل در همه دوران\u200cها با سختی\u200cها و مشکلات بسیاری مواجه بوده است، به ویژه در دوره اخیر به دلیل مسائل پیرامون تحریم وظیفه سنگینی بر عهده داشت که با تدبیر مدیریت خوب این مجموعه سربلند از آن بیرون آمد. [n] کاردر با قدردانی از زحمات محسن قمصری، به سلامت مدیریت امور بین الملل این شرکت اشاره کرد و افزود: محوریت کار مدیریت اموربین الملل سلامت مالی بوده است. [n] وی بر ضرورت نهادینه سازی جوانگرایی در مدیریت شرکت ملی نفت ایران تاکید کرد و گفت: مدیریت امور بین الملل در پرورش نیروهای زبده و کارآزموده آنچنان قوی عملکرده است که برای انتخاب مدیر جدید مشکلی وجود نداشت. [n] کاردر، حرفه\u200eای\u200eگری و کار استاندارد را از ویژگی\u200cهای مدیران این مدیریت برشمرد و گفت: نگاه جامع، خلاقیت و نوآوری و بکارگیری نیروهای جوان باید همچنان مد نظر مدیریت جدید امور بین الملل شرکت ملی نفت ایران باشد.", "categories": "نفت", "category": 5, "id": "738e296491f8b24c5aa63e9829fd249fb4428a66", "link": "https://www.shana.ir/news/275284/%D9%85%D8%AF%DB%8C%D8%B1%DB%8C%D8%AA-%D9%81%D8%B1%D9%88%D8%B4-%D9%86%D9%81%D8%AA-%D8%AF%D8%B1-%D8%AF%D9%88%D8%B1%D8%A7%D9%86-%D8%AA%D8%AD%D8%B1%DB%8C%D9%85-%D9%87%D9%88%D8%B4%D9%85%D9%86%D8%AF%D8%A7%D9%86%D9%87-%D8%B9%D9%85%D9%84-%DA%A9%D8%B1%D8%AF", "network": 2, "summary": "مدیرعامل شرکت ملی نفت، عملکرد مدیریت امور بین\u200eالملل این شرکت را در دوران تحریم بسیار هوشمندانه خواند و گفت: امور بین الملل در دوران پس از تحریم\u200eها نیز می\u200cتواند نقش بزرگی در تسریع روند توسعه داشته باشد.", "title": "مدیریت فروش نفت در دوران تحریم هوشمندانه عمل کرد" } ``` ### Data Fields - `id (string)`: ID of the news. - `title (string)`: The title of the news. - `article (string)`: The article of the news. - `summary (string)`: The summary of the news. - `category (int)`: The category of news in English (index of categories), including `Economy`, `Roads-Urban`, `Banking-Insurance`, `Agriculture`, `International`, `Oil-Energy`, `Industry`, `Transportation`, `Science-Technology`, `Local`, `Sports`, `Politics`, `Art-Culture`, `Society`, `Health`, `Research`, `Education-University`, `Tourism`. - `categories (string)`: The category and sub-category of the news in Persian. - `network (int)`: The news agency name (index of news agencies), including `Tahlilbazaar`, `Imna`, `Shana`, `Mehr`, `Irna`, `Khabaronline`. - `link (string)`: The link of the news. The category in English includes 18 different article categories from economy to tourism. ```bash Economy, Roads-Urban, Banking-Insurance, Agriculture, International, Oil-Energy, Industry, Transportation, Science-Technology, Local, Sports, Politics, Art-Culture, Society, Health, Research, Education-University, Tourism ``` ### Data Splits Training (82,022 records, 8 features), validation (5,592 records, 8 features), and test split (5,593 records and 8 features). ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The dataset comprises numerous articles of various categories that have been crawled from six news agency websites (Tahlilbazaar, Imna, Shana, Mehr, Irna, and Khabaronline). ### Annotations #### Annotation process Each record (article) includes the long original text as well as a human-generated summary. The total number of cleaned articles is 93,207 (from 200,000 crawled articles). #### Who are the annotators? The dataset was organized by [Mehrdad Farahani](https://github.com/m3hrdadfi), [Mohammad Gharachorloo](https://github.com/baarsaam) and [Mohammad Manthouri](https://github.com/mmanthouri) for this paper [Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization](https://arxiv.org/abs/2012.11204) ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was curated by [Mehrdad Farahani](https://github.com/m3hrdadfi), [Mohammad Gharachorloo](https://github.com/baarsaam) and [Mohammad Manthouri](https://github.com/mmanthouri). ### Licensing Information This dataset is licensed under MIT License. ### Citation Information ```bibtex @article{pnSummary, title={Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization}, author={Mehrdad Farahani, Mohammad Gharachorloo, Mohammad Manthouri}, year={2020}, eprint={2012.11204}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@m3hrdadfi](https://github.com/m3hrdadfi) for adding this dataset.
false
# Dataset Card for IK-NLP-22 Speech and Language Processing ## Table of Contents - [Dataset Card for IK-NLP-22 Speech and Language Processing](#dataset-card-for-ik-nlp-22-speech-and-language-processing) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Projects](#projects) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Paragraphs Configuration](#paragraphs-configuration) - [Questions Configuration](#questions-configuration) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Source:** [Stanford](https://web.stanford.edu/~jurafsky/slp3/) - **Point of Contact:** [Gabriele Sarti](mmailto:ik-nlp-course@rug.nl) ### Dataset Summary This dataset contains chapters extracted from the Speech and Language Processing book (3ed draft of January 2022) by Jurafsky and Martin via a semi-automatic procedure (see below for additional details). Moreover, a small set of conceptual questions associated with each chapter is provided alongside possible answers. Only the content of chapters 2 to 11 of the book draft are provided, since these are the ones relevant to the contents of the 2022 edition of the Natural Language Processing course at the Information Science Master's Degree (IK) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti). *The Speech and Language Processing book was made freely available by the authors [Dan Jurafsky](http://web.stanford.edu/people/jurafsky/) and [James H. Martin](http://www.cs.colorado.edu/~martin/) on the [Stanford University website](https://web.stanford.edu/~jurafsky/slp3/). The present dataset was created for educational purposes, and is based on the draft of the 3rd edition of the book accessed on December 29th, 2021. All rights of the present contents are attributed to the original authors.* ### Projects See the course page for a description of possible research directions. ### Languages The language data of Speech and Language Processing is in English (BCP-47 `en`) ## Dataset Structure ### Data Instances The dataset contains two configurations: `paragraphs` (default), containing the full set of parsed paragraphs associated to the respective chapter and sections, and `questions`, containing a small subset of example questions matched with the relevant paragraph, and with the answer span extracted. #### Paragraphs Configuration The `paragraphs` configuration contains all the paragraphs of the selected book chapters, each associated with the respective chapter, section and subsection. An example from the `train` split of the `paragraphs` config is provided below. The example belongs to section 2.3 but not to a subsection, so the `n_subsection` and `subsection` fields are empty strings. ```json { "n_chapter": "2", "chapter": "Regular Expressions", "n_section": "2.3", "section": "Corpora", "n_subsection": "", "subsection": "", "text": "It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)" } ``` The text is provided as-is, without further preprocessing or tokenization. #### Questions Configuration The `questions` configuration contains a small subset of questions, the top retrieved paragraph relevant to the question and the answer spans. An example from the `test` split of the `questions` config is provided below. ```json { "chapter": "Regular Expressions", "section": "Regular Expressions", "subsection": "Basic Regular Expressions", "question": "What is the meaning of the Kleene star in Regex?", "paragraph": "This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like \"some number of as\" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced \"cleany star\"). The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\". So /a*/ means \"any string of zero or more as\". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means \"zero or more a's or b's\" (not \"zero or more right square braces\"). This will match strings like aaaa or ababab or bbbb.", "answer": "The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\"" } ``` ### Data Splits | config| train| test| |------------:|-----:|----:| |`paragraphs` | 1697 | - | |`questions` | - | 59 | ### Dataset Creation The contents of the Speech and Language Processing book PDF were extracted using the [PDF to S2ORC JSON Converter](https://github.com/allenai/s2orc-doc2json) by AllenAI. The texts extracted by the converter were then manually cleaned to remove end-of-chapter exercises and other irrelevant content (e.g. tables, TikZ figures, etc.). Some issues in the parsed content were preserved in the final version to maintain a naturalistic setting for the associated projects, promoting the use of data filtering heuristics for students. The question-answer pairs were created manually by Gabriele Sarti. ## Additional Information ### Dataset Curators For problems on this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl). ### Licensing Information Please refer to the authors' websites for licensing information. ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @book{slp3ed-iknlp2022, author = {Jurafsky, Daniel and Martin, James}, year = {2021}, month = {12}, pages = {1--235, 1--19}, title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition}, volume = {3} } ```
false
# Dataset Card for Numeric Fused Heads ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [The Numeric Fused-Head demo](https://nlp.biu.ac.il/~lazary/fh/) - **Repository:** [Github Repo](https://github.com/yanaiela/num_fh) - **Paper:** [Where’s My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution](https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00280) - **Leaderboard:** [NLP Progress](http://nlpprogress.com/english/missing_elements.html) - **Point of Contact:** [Yanai Elazar](https://yanaiela.github.io), [Yoav Goldberg](https://www.cs.bgu.ac.il/~yoavg/uni/) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards - Numeric Fused Head Identification - Numeric Fused Head Resolution ### Languages English ## Dataset Structure ### Data Instances ## Identification ``` { "tokens": ["It", "’s", "a", "curious", "thing", ",", "the", "death", "of", "a", "loved", "one", "."] "start_index": 11 "end_index": 12 "label": 1 } ``` ## Resolution ``` { "tokens": ["I", "'m", "eighty", "tomorrow", ".", "Are", "you", "sure", "?"], "line_indices": [0, 0, 0, 0, 0, 1, 1, 1, 1], "head": ["AGE"], "speakers": ["John Doe", "John Doe", "John Doe", "John Doe", "John Doe", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs"], "anchors_indices": [2] } ``` ### Data Fields ## Identification - `tokens` - List of token strings as tokenized with [Spacy](spacy.io). - `start_index` - Start index of the anchor. - `end_index` - End index of the anchor. - `label` - "pos" or "neg" depending on whether this example contains a numeric fused head. ## Resolution - `tokens` - List of token strings as tokenized with [Spacy](spacy.io) - `line_indices` - List of indices indicating line number (one for each token) - `head` - Reference to the missing head. If the head exists elsewhere in the sentence this is given as a token index. - `speakers` - List of speaker names (one for each token) - `anchors_indices` - Index to indicate which token is the anchor (the visible number) ### Data Splits Train, Test, Dev [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information MIT License ### Citation Information ``` @article{doi:10.1162/tacl\_a\_00280, author = {Elazar, Yanai and Goldberg, Yoav}, title = {Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution}, journal = {Transactions of the Association for Computational Linguistics}, volume = {7}, number = {}, pages = {519-535}, year = {2019}, doi = {10.1162/tacl\_a\_00280}, } ``` ### Contributions Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
false
# Dataset Card for GEM/sportsett_basketball ## Dataset Description - **Homepage:** https://github.com/nlgcat/sport_sett_basketball - **Repository:** https://github.com/nlgcat/sport_sett_basketball - **Paper:** https://aclanthology.org/2020.intellang-1.4/ - **Leaderboard:** N/A - **Point of Contact:** Craig Thomson ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/sportsett_basketball). ### Dataset Summary The sportsett dataset is an English data-to-text dataset in the basketball domain. The inputs are statistics summarizing an NBA game and the outputs are high-quality descriptions of the game in natural language. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/sportsett_basketball') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/sportsett_basketball). #### website [Github](https://github.com/nlgcat/sport_sett_basketball) #### paper [ACL Anthology](https://aclanthology.org/2020.intellang-1.4/) #### authors Craig Thomson, Ashish Upadhyay ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Github](https://github.com/nlgcat/sport_sett_basketball) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Github](https://github.com/nlgcat/sport_sett_basketball) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [ACL Anthology](https://aclanthology.org/2020.intellang-1.4/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{thomson-etal-2020-sportsett, title = "{S}port{S}ett:Basketball - A robust and maintainable data-set for Natural Language Generation", author = "Thomson, Craig and Reiter, Ehud and Sripada, Somayajulu", booktitle = "Proceedings of the Workshop on Intelligent Information Processing and Natural Language Generation", month = sep, year = "2020", address = "Santiago de Compostela, Spain", publisher = "Association for Computational Lingustics", url = "https://aclanthology.org/2020.intellang-1.4", pages = "32--40", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Craig Thomson #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> c.thomson@abdn.ac.uk #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Dialects <!-- info: What dialects are covered? Are there multiple dialects per language? --> <!-- scope: periscope --> American English One dialect, one language. #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> American sports writers #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> mit: MIT License #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> Maintain a robust and scalable Data-to-Text generation resource with structured data and textual summaries #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Data-to-Text #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> A model trained on this dataset should summarise the statistical and other information from a basketball game. This will be focused on a single game, although facts from prior games, or aggregate statistics over many games can and should be used for comparison where appropriate. There no single common narrative, although summaries usually start with who player, when, where, and the score. They then provide high level commentary on what the difference in the game was (why the winner won). breakdowns of statistics for prominent players follow, winning team first. Finally, the upcoming schedule for both teams is usually included. There are, however, other types of fact that can be included, and other narrative structures. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> University of Aberdeen, Robert Gordon University #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Craig Thomson, Ashish Upadhyay #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> EPSRC #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Craig Thomson, Ashish Upadhyay ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> Each instance in the dataset has five fields. 1. "sportsett_id": This is a unique id as used in the original SportSett database. It starts with '1' with the first instance in the train-set and ends with '6150' with the last instance in test-set. 2. "gem_id": This is a unique id created as per GEM's requirement which follows the `GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}` pattern. 3. "game": This field contains a dictionary with information about current game. It has information such as date on which the game was played alongwith the stadium, city, state where it was played. 4. "teams": This filed is a dictionary of multiple nested dictionaries. On the highest level, it has two keys: 'home' and 'vis', which provide the stats for home team and visiting team of the game. Both are dictionaries with same structure. Each dictionary will contain team's information such as name of the team, their total wins/losses in current season, their conference standing, the SportSett ids for their current and previous games. Apart from these general information, they also have the box- and line- scores for the team in the game. Box score is the stats of players from the team at the end of the game, while line score along with the whole game stats is divided into quarters and halves as well as the extra-time (if happened in the game). After these scores, there is another field of next-game, which gives general information about team's next game such as the place and opponent's name of the next game. 5. "summaries": This is a list of summaries for each game. Some games will have more than one summary, in that case, the list will have more than one entries. Each summary in the list is a string which can be tokenised by a space, following the practices in RotoWire-FG dataset ([Wang, 2019](https://www.aclweb.org/anthology/W19-8639)). #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The structure mostly follows the original structure defined in RotoWire dataset ([Wiseman et. al. 2017](https://aclanthology.org/D17-1239/)) with some modifications (such as game and next-game keys) address the problem of information gap between input and output data ([Thomson et. al. 2020](https://aclanthology.org/2020.inlg-1.6/)). #### How were labels chosen? <!-- info: How were the labels chosen? --> <!-- scope: microscope --> Similar to RotoWire dataset ([Wiseman et. al. 2017](https://aclanthology.org/D17-1239/)) #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { "sportsett_id": "1", "gem_id": "GEM-sportsett_basketball-train-0", "game": { "day": "1", "month": "November", "year": "2014", "dayname": "Saturday", "season": "2014", "stadium": "Wells Fargo Center", "city": "Philadelphia", "state": "Pennsylvania", "attendance": "19753", "capacity": "20478", "game_id": "1" }, "teams": { "home": { "name": "76ers", "place": "Philadelphia", "conference": "Eastern Conference", "division": "Atlantic", "wins": "0", "losses": "3", "conference_standing": 15, "game_number": "3", "previous_game_id": "42", "next_game_id": "2", "line_score": { "game": { "FG3A": "23", "FG3M": "7", "FG3_PCT": "30", "FGA": "67", "FGM": "35", "FG_PCT": "52", "FTA": "26", "FTM": "19", "FT_PCT": "73", "DREB": "33", "OREB": "4", "TREB": "37", "BLK": "10", "AST": "28", "STL": "9", "TOV": "24", "PF": "21", "PTS": "96", "MIN": "4" }, "H1": { "FG3A": "82", "FG3M": "30", "FG3_PCT": "37", "FGA": "2115", "FGM": "138", "FG_PCT": "7", "FTA": "212", "FTM": "18", "FT_PCT": "8", "DREB": "810", "OREB": "21", "TREB": "831", "BLK": "51", "AST": "107", "STL": "21", "TOV": "64", "PTS": "3024", "MIN": "6060" }, "H2": { "FG3A": "85", "FG3M": "40", "FG3_PCT": "47", "FGA": "1615", "FGM": "104", "FG_PCT": "6", "FTA": "66", "FTM": "55", "FT_PCT": "83", "DREB": "96", "OREB": "10", "TREB": "106", "BLK": "22", "AST": "92", "STL": "24", "TOV": "68", "PTS": "2913", "MIN": "6060" }, "Q1": { "FG3A": "8", "FG3M": "3", "FG3_PCT": "38", "FGA": "21", "FGM": "13", "FG_PCT": "62", "FTA": "2", "FTM": "1", "FT_PCT": "50", "DREB": "8", "OREB": "2", "TREB": "10", "BLK": "5", "AST": "10", "STL": "2", "TOV": "6", "PTS": "30", "MIN": "60" }, "Q2": { "FG3A": "2", "FG3M": "0", "FG3_PCT": "0", "FGA": "15", "FGM": "8", "FG_PCT": "53", "FTA": "12", "FTM": "8", "FT_PCT": "67", "DREB": "10", "OREB": "1", "TREB": "11", "BLK": "1", "AST": "7", "STL": "1", "TOV": "4", "PTS": "24", "MIN": "60" }, "Q3": { "FG3A": "8", "FG3M": "4", "FG3_PCT": "50", "FGA": "16", "FGM": "10", "FG_PCT": "62", "FTA": "6", "FTM": "5", "FT_PCT": "83", "DREB": "9", "OREB": "1", "TREB": "10", "BLK": "2", "AST": "9", "STL": "2", "TOV": "6", "PTS": "29", "MIN": "60" }, "Q4": { "FG3A": "5", "FG3M": "0", "FG3_PCT": "0", "FGA": "15", "FGM": "4", "FG_PCT": "27", "FTA": "6", "FTM": "5", "FT_PCT": "83", "DREB": "6", "OREB": "0", "TREB": "6", "BLK": "2", "AST": "2", "STL": "4", "TOV": "8", "PTS": "13", "MIN": "60" }, "OT": { "FG3A": "0", "FG3M": "0", "FG3_PCT": "0", "FGA": "0", "FGM": "0", "FG_PCT": "0", "FTA": "0", "FTM": "0", "FT_PCT": "0", "DREB": "0", "OREB": "0", "TREB": "0", "BLK": "0", "AST": "0", "STL": "0", "TOV": "0", "PTS": "0", "MIN": "0" } }, "box_score": [ { "first_name": "Tony", "last_name": "Wroten", "name": "Tony Wroten", "starter": "True", "MIN": "33", "FGM": "6", "FGA": "11", "FG_PCT": "55", "FG3M": "1", "FG3A": "4", "FG3_PCT": "25", "FTM": "8", "FTA": "11", "FT_PCT": "73", "OREB": "0", "DREB": "3", "TREB": "3", "AST": "10", "STL": "1", "BLK": "1", "TOV": "4", "PF": "1", "PTS": "21", "+/-": "-11", "DOUBLE": "double" }, { "first_name": "Hollis", "last_name": "Thompson", "name": "Hollis Thompson", "starter": "True", "MIN": "32", "FGM": "4", "FGA": "8", "FG_PCT": "50", "FG3M": "2", "FG3A": "5", "FG3_PCT": "40", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "1", "TREB": "1", "AST": "2", "STL": "0", "BLK": "3", "TOV": "2", "PF": "2", "PTS": "10", "+/-": "-17", "DOUBLE": "none" }, { "first_name": "Henry", "last_name": "Sims", "name": "Henry Sims", "starter": "True", "MIN": "27", "FGM": "4", "FGA": "9", "FG_PCT": "44", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "1", "FTA": "2", "FT_PCT": "50", "OREB": "1", "DREB": "3", "TREB": "4", "AST": "2", "STL": "0", "BLK": "1", "TOV": "0", "PF": "1", "PTS": "9", "+/-": "-10", "DOUBLE": "none" }, { "first_name": "Nerlens", "last_name": "Noel", "name": "Nerlens Noel", "starter": "True", "MIN": "25", "FGM": "1", "FGA": "4", "FG_PCT": "25", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "5", "TREB": "5", "AST": "3", "STL": "1", "BLK": "1", "TOV": "3", "PF": "1", "PTS": "2", "+/-": "-19", "DOUBLE": "none" }, { "first_name": "Luc", "last_name": "Mbah a Moute", "name": "Luc Mbah a Moute", "starter": "True", "MIN": "19", "FGM": "4", "FGA": "10", "FG_PCT": "40", "FG3M": "0", "FG3A": "2", "FG3_PCT": "0", "FTM": "1", "FTA": "2", "FT_PCT": "50", "OREB": "3", "DREB": "4", "TREB": "7", "AST": "3", "STL": "1", "BLK": "0", "TOV": "6", "PF": "3", "PTS": "9", "+/-": "-12", "DOUBLE": "none" }, { "first_name": "Brandon", "last_name": "Davies", "name": "Brandon Davies", "starter": "False", "MIN": "23", "FGM": "7", "FGA": "9", "FG_PCT": "78", "FG3M": "1", "FG3A": "2", "FG3_PCT": "50", "FTM": "3", "FTA": "4", "FT_PCT": "75", "OREB": "0", "DREB": "3", "TREB": "3", "AST": "0", "STL": "3", "BLK": "0", "TOV": "3", "PF": "3", "PTS": "18", "+/-": "-1", "DOUBLE": "none" }, { "first_name": "Chris", "last_name": "Johnson", "name": "Chris Johnson", "starter": "False", "MIN": "21", "FGM": "2", "FGA": "4", "FG_PCT": "50", "FG3M": "1", "FG3A": "3", "FG3_PCT": "33", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "2", "TREB": "2", "AST": "0", "STL": "3", "BLK": "0", "TOV": "2", "PF": "5", "PTS": "5", "+/-": "3", "DOUBLE": "none" }, { "first_name": "K.J.", "last_name": "McDaniels", "name": "K.J. McDaniels", "starter": "False", "MIN": "20", "FGM": "2", "FGA": "4", "FG_PCT": "50", "FG3M": "1", "FG3A": "3", "FG3_PCT": "33", "FTM": "3", "FTA": "4", "FT_PCT": "75", "OREB": "0", "DREB": "1", "TREB": "1", "AST": "2", "STL": "0", "BLK": "3", "TOV": "2", "PF": "3", "PTS": "8", "+/-": "-10", "DOUBLE": "none" }, { "first_name": "Malcolm", "last_name": "Thomas", "name": "Malcolm Thomas", "starter": "False", "MIN": "19", "FGM": "4", "FGA": "4", "FG_PCT": "100", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "9", "TREB": "9", "AST": "0", "STL": "0", "BLK": "0", "TOV": "0", "PF": "2", "PTS": "8", "+/-": "-6", "DOUBLE": "none" }, { "first_name": "Alexey", "last_name": "Shved", "name": "Alexey Shved", "starter": "False", "MIN": "14", "FGM": "1", "FGA": "4", "FG_PCT": "25", "FG3M": "1", "FG3A": "4", "FG3_PCT": "25", "FTM": "3", "FTA": "3", "FT_PCT": "100", "OREB": "0", "DREB": "1", "TREB": "1", "AST": "6", "STL": "0", "BLK": "0", "TOV": "2", "PF": "0", "PTS": "6", "+/-": "-7", "DOUBLE": "none" }, { "first_name": "JaKarr", "last_name": "Sampson", "name": "JaKarr Sampson", "starter": "False", "MIN": "2", "FGM": "0", "FGA": "0", "FG_PCT": "0", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "1", "TREB": "1", "AST": "0", "STL": "0", "BLK": "1", "TOV": "0", "PF": "0", "PTS": "0", "+/-": "0", "DOUBLE": "none" }, { "first_name": "Michael", "last_name": "Carter-Williams", "name": "Michael Carter-Williams", "starter": "False", "MIN": "0", "FGM": "0", "FGA": "0", "FG_PCT": "0", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "0", "TREB": "0", "AST": "0", "STL": "0", "BLK": "0", "TOV": "0", "PF": "0", "PTS": "0", "+/-": "0", "DOUBLE": "none" } ], "next_game": { "day": "3", "month": "November", "year": "2014", "dayname": "Monday", "stadium": "Wells Fargo Center", "city": "Philadelphia", "opponent_name": "Rockets", "opponent_place": "Houston", "is_home": "True" } }, "vis": { "name": "Heat", "place": "Miami", "conference": "Eastern Conference", "division": "Southeast", "wins": "2", "losses": "0", "conference_standing": 1, "game_number": "2", "previous_game_id": "329", "next_game_id": "330", "line_score": { "game": { "FG3A": "24", "FG3M": "12", "FG3_PCT": "50", "FGA": "83", "FGM": "41", "FG_PCT": "49", "FTA": "29", "FTM": "20", "FT_PCT": "69", "DREB": "26", "OREB": "9", "TREB": "35", "BLK": "0", "AST": "33", "STL": "16", "TOV": "16", "PF": "20", "PTS": "114", "MIN": "4" }, "H1": { "FG3A": "69", "FG3M": "44", "FG3_PCT": "64", "FGA": "2321", "FGM": "1110", "FG_PCT": "48", "FTA": "106", "FTM": "64", "FT_PCT": "60", "DREB": "35", "OREB": "23", "TREB": "58", "BLK": "00", "AST": "88", "STL": "53", "TOV": "34", "PTS": "3228", "MIN": "6060" }, "H2": { "FG3A": "45", "FG3M": "22", "FG3_PCT": "49", "FGA": "1920", "FGM": "1010", "FG_PCT": "53", "FTA": "85", "FTM": "55", "FT_PCT": "65", "DREB": "612", "OREB": "22", "TREB": "634", "BLK": "00", "AST": "98", "STL": "35", "TOV": "36", "PTS": "2727", "MIN": "6060" }, "Q1": { "FG3A": "6", "FG3M": "4", "FG3_PCT": "67", "FGA": "23", "FGM": "11", "FG_PCT": "48", "FTA": "10", "FTM": "6", "FT_PCT": "60", "DREB": "3", "OREB": "2", "TREB": "5", "BLK": "0", "AST": "8", "STL": "5", "TOV": "3", "PTS": "32", "MIN": "60" }, "Q2": { "FG3A": "9", "FG3M": "4", "FG3_PCT": "44", "FGA": "21", "FGM": "10", "FG_PCT": "48", "FTA": "6", "FTM": "4", "FT_PCT": "67", "DREB": "5", "OREB": "3", "TREB": "8", "BLK": "0", "AST": "8", "STL": "3", "TOV": "4", "PTS": "28", "MIN": "60" }, "Q3": { "FG3A": "4", "FG3M": "2", "FG3_PCT": "50", "FGA": "19", "FGM": "10", "FG_PCT": "53", "FTA": "8", "FTM": "5", "FT_PCT": "62", "DREB": "6", "OREB": "2", "TREB": "8", "BLK": "0", "AST": "9", "STL": "3", "TOV": "3", "PTS": "27", "MIN": "60" }, "Q4": { "FG3A": "5", "FG3M": "2", "FG3_PCT": "40", "FGA": "20", "FGM": "10", "FG_PCT": "50", "FTA": "5", "FTM": "5", "FT_PCT": "100", "DREB": "12", "OREB": "2", "TREB": "14", "BLK": "0", "AST": "8", "STL": "5", "TOV": "6", "PTS": "27", "MIN": "60" }, "OT": { "FG3A": "0", "FG3M": "0", "FG3_PCT": "0", "FGA": "0", "FGM": "0", "FG_PCT": "0", "FTA": "0", "FTM": "0", "FT_PCT": "0", "DREB": "0", "OREB": "0", "TREB": "0", "BLK": "0", "AST": "0", "STL": "0", "TOV": "0", "PTS": "0", "MIN": "0" } }, "box_score": [ { "first_name": "Chris", "last_name": "Bosh", "name": "Chris Bosh", "starter": "True", "MIN": "33", "FGM": "9", "FGA": "17", "FG_PCT": "53", "FG3M": "2", "FG3A": "5", "FG3_PCT": "40", "FTM": "10", "FTA": "11", "FT_PCT": "91", "OREB": "3", "DREB": "5", "TREB": "8", "AST": "4", "STL": "2", "BLK": "0", "TOV": "3", "PF": "2", "PTS": "30", "+/-": "10", "DOUBLE": "none" }, { "first_name": "Dwyane", "last_name": "Wade", "name": "Dwyane Wade", "starter": "True", "MIN": "32", "FGM": "4", "FGA": "18", "FG_PCT": "22", "FG3M": "0", "FG3A": "1", "FG3_PCT": "0", "FTM": "1", "FTA": "3", "FT_PCT": "33", "OREB": "1", "DREB": "2", "TREB": "3", "AST": "10", "STL": "3", "BLK": "0", "TOV": "6", "PF": "1", "PTS": "9", "+/-": "13", "DOUBLE": "none" }, { "first_name": "Luol", "last_name": "Deng", "name": "Luol Deng", "starter": "True", "MIN": "29", "FGM": "7", "FGA": "11", "FG_PCT": "64", "FG3M": "1", "FG3A": "3", "FG3_PCT": "33", "FTM": "0", "FTA": "1", "FT_PCT": "0", "OREB": "2", "DREB": "2", "TREB": "4", "AST": "2", "STL": "2", "BLK": "0", "TOV": "1", "PF": "0", "PTS": "15", "+/-": "4", "DOUBLE": "none" }, { "first_name": "Shawne", "last_name": "Williams", "name": "Shawne Williams", "starter": "True", "MIN": "29", "FGM": "5", "FGA": "9", "FG_PCT": "56", "FG3M": "3", "FG3A": "5", "FG3_PCT": "60", "FTM": "2", "FTA": "2", "FT_PCT": "100", "OREB": "0", "DREB": "4", "TREB": "4", "AST": "4", "STL": "1", "BLK": "0", "TOV": "1", "PF": "4", "PTS": "15", "+/-": "16", "DOUBLE": "none" }, { "first_name": "Norris", "last_name": "Cole", "name": "Norris Cole", "starter": "True", "MIN": "27", "FGM": "4", "FGA": "7", "FG_PCT": "57", "FG3M": "2", "FG3A": "4", "FG3_PCT": "50", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "1", "TREB": "1", "AST": "4", "STL": "2", "BLK": "0", "TOV": "0", "PF": "1", "PTS": "10", "+/-": "6", "DOUBLE": "none" }, { "first_name": "Mario", "last_name": "Chalmers", "name": "Mario Chalmers", "starter": "False", "MIN": "25", "FGM": "6", "FGA": "9", "FG_PCT": "67", "FG3M": "2", "FG3A": "2", "FG3_PCT": "100", "FTM": "6", "FTA": "10", "FT_PCT": "60", "OREB": "0", "DREB": "2", "TREB": "2", "AST": "4", "STL": "4", "BLK": "0", "TOV": "0", "PF": "1", "PTS": "20", "+/-": "18", "DOUBLE": "none" }, { "first_name": "Shabazz", "last_name": "Napier", "name": "Shabazz Napier", "starter": "False", "MIN": "20", "FGM": "2", "FGA": "3", "FG_PCT": "67", "FG3M": "1", "FG3A": "2", "FG3_PCT": "50", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "3", "TREB": "3", "AST": "4", "STL": "2", "BLK": "0", "TOV": "1", "PF": "4", "PTS": "5", "+/-": "11", "DOUBLE": "none" }, { "first_name": "Chris", "last_name": "Andersen", "name": "Chris Andersen", "starter": "False", "MIN": "17", "FGM": "0", "FGA": "2", "FG_PCT": "0", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "1", "DREB": "2", "TREB": "3", "AST": "0", "STL": "0", "BLK": "0", "TOV": "0", "PF": "2", "PTS": "0", "+/-": "6", "DOUBLE": "none" }, { "first_name": "Josh", "last_name": "McRoberts", "name": "Josh McRoberts", "starter": "False", "MIN": "11", "FGM": "1", "FGA": "3", "FG_PCT": "33", "FG3M": "0", "FG3A": "1", "FG3_PCT": "0", "FTM": "1", "FTA": "2", "FT_PCT": "50", "OREB": "0", "DREB": "3", "TREB": "3", "AST": "0", "STL": "0", "BLK": "0", "TOV": "2", "PF": "3", "PTS": "3", "+/-": "1", "DOUBLE": "none" }, { "first_name": "James", "last_name": "Ennis", "name": "James Ennis", "starter": "False", "MIN": "7", "FGM": "2", "FGA": "3", "FG_PCT": "67", "FG3M": "1", "FG3A": "1", "FG3_PCT": "100", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "1", "DREB": "1", "TREB": "2", "AST": "1", "STL": "0", "BLK": "0", "TOV": "0", "PF": "1", "PTS": "5", "+/-": "2", "DOUBLE": "none" }, { "first_name": "Justin", "last_name": "Hamilton", "name": "Justin Hamilton", "starter": "False", "MIN": "5", "FGM": "1", "FGA": "1", "FG_PCT": "100", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "1", "DREB": "1", "TREB": "2", "AST": "0", "STL": "0", "BLK": "0", "TOV": "1", "PF": "0", "PTS": "2", "+/-": "3", "DOUBLE": "none" }, { "first_name": "Andre", "last_name": "Dawkins", "name": "Andre Dawkins", "starter": "False", "MIN": "1", "FGM": "0", "FGA": "0", "FG_PCT": "0", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "0", "TREB": "0", "AST": "0", "STL": "0", "BLK": "0", "TOV": "1", "PF": "1", "PTS": "0", "+/-": "0", "DOUBLE": "none" }, { "first_name": "Shannon", "last_name": "Brown", "name": "Shannon Brown", "starter": "False", "MIN": "0", "FGM": "0", "FGA": "0", "FG_PCT": "0", "FG3M": "0", "FG3A": "0", "FG3_PCT": "0", "FTM": "0", "FTA": "0", "FT_PCT": "0", "OREB": "0", "DREB": "0", "TREB": "0", "AST": "0", "STL": "0", "BLK": "0", "TOV": "0", "PF": "0", "PTS": "0", "+/-": "0", "DOUBLE": "none" } ], "next_game": { "day": "2", "month": "November", "year": "2014", "dayname": "Sunday", "stadium": "American Airlines Arena", "city": "Miami", "opponent_name": "Raptors", "opponent_place": "Toronto", "is_home": "True" } } }, "summaries": [ "The Miami Heat ( 20 ) defeated the Philadelphia 76ers ( 0 - 3 ) 114 - 96 on Saturday . Chris Bosh scored a game - high 30 points to go with eight rebounds in 33 minutes . Josh McRoberts made his Heat debut after missing the entire preseason recovering from toe surgery . McRoberts came off the bench and played 11 minutes . Shawne Williams was once again the starter at power forward in McRoberts ' stead . Williams finished with 15 points and three three - pointers in 29 minutes . Mario Chalmers scored 18 points in 25 minutes off the bench . Luc Richard Mbah a Moute replaced Chris Johnson in the starting lineup for the Sixers on Saturday . Hollis Thompson shifted down to the starting shooting guard job to make room for Mbah a Moute . Mbah a Moute finished with nine points and seven rebounds in 19 minutes . K.J . McDaniels , who suffered a minor hip flexor injury in Friday 's game , was available and played 21 minutes off the bench , finishing with eight points and three blocks . Michael Carter-Williams is expected to be out until Nov. 13 , but Tony Wroten continues to put up impressive numbers in Carter-Williams ' absence . Wroten finished with a double - double of 21 points and 10 assists in 33 minutes . The Heat will complete a back - to - back set at home Sunday against the Tornoto Raptors . The Sixers ' next game is at home Monday against the Houston Rockets ." ] } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> - Train: NBA seasons - 2014, 2015, & 2016; total instances - 3690 - Validation: NBA seasons - 2017; total instances - 1230 - Test: NBA seasons - 2018; total instances - 1230 #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> The splits were created as per different NBA seasons. All the games in regular season (no play-offs) are added in the dataset ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> This dataset contains a data analytics problem in the classic sense ([Reiter, 2007](https://aclanthology.org/W07-2315)). That is, there is a large amount of data from which insights need to be selected. Further, the insights should be both from simple shallow queries (such as dirext transcriptions of the properties of a subject, i.e., a player and their statistics), as well as aggregated (how a player has done over time). There is far more on the data side than is required to be realised, and indeed, could be practically realised. This depth of data analytics problem does not exist in other datasets. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> no #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Many, if not all aspects of data-to-text systems can be measured with this dataset. It has complex data analytics, meaninful document planning (10-15 sentence documents with a narrative structure), as well as microplanning and realisation requirements. Finding models to handle this volume of data, as well as methods for meaninfully evaluate generations is a very open question. ### GEM-Specific Curation #### Modificatied for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> no #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> For dataset discussion see [Thomson et al, 2020](https://aclanthology.org/2020.intellang-1.4/) For evaluation see: - [Thomson & Reiter 2020, Thomson & Reiter (2021)](https://aclanthology.org/2021.inlg-1.23) - [Kasner et al (2021)](https://aclanthology.org/2021.inlg-1.25) For a system using the relational database form of SportSett, see: - [Thomson et al (2020)](https://aclanthology.org/2020.inlg-1.6/) For recent systems using the Rotowire dataset, see: - [Puduppully & Lapata (2021)](https://github.com/ratishsp/data2text-macro-plan-py) - [Rebuffel et all (2020)](https://github.com/KaijuML/data-to-text-hierarchical) ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Many, if not all aspects of data-to-text systems can be measured with this dataset. It has complex data analytics, meaninful document planning (10-15 sentence documents with a narrative structure), as well as microplanning and realisation requirements. Finding models to handle this volume of data, as well as methods for meaninfully evaluate generations is a very open question. #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `BLEU` #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> BLEU is the only off-the-shelf metric commonly used. Works have also used custom metrics like RG ([Wiseman et al, 2017](https://aclanthology.org/D17-1239)), and a recent shared task explored other metrics and their corrolation with human evaluation ([Thomson & Reiter, 2021](https://aclanthology.org/2021.inlg-1.23)). #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> Most results from prior works use the original Rotowire dataset, which has train/validation/test contamination. For results of BLEU and RG on the relational database format of SportSett, as a guide, see [Thomson et al, 2020](https://aclanthology.org/2020.inlg-1.6). #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? --> <!-- scope: microscope --> The results on this dataset are largely unexplored, as is the selection of suitable metrics that correlate with human judgment. See Thomson et al, 2021 (https://aclanthology.org/2021.inlg-1.23) for an overview, and Kasner et al (2021) for the best performing metric at the time of writing (https://aclanthology.org/2021.inlg-1.25). ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The references texts were taken from the existing dataset RotoWire-FG ([Wang, 2019](https://www.aclweb.org/anthology/W19-8639)), which is in turn based on Rotowire ([Wiseman et al, 2017](https://aclanthology.org/D17-1239)). The rationale behind this dataset was to re-structure the data such that aggregate statistics over multiple games, as well as upcoming game schedules could be included, moving the dataset from snapshots of single games, to a format where almost everything that could be present in the reference texts could be found in the data. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> Create a summary of a basketball, with insightful facts about the game, teams, and players, both within the game, withing periods during the game, and over the course of seasons/careers where appropriate. This is a data-to-text problem in the classic sense ([Reiter, 2007](https://aclanthology.org/W07-2315)) in that it has a difficult data analystics state, in addition to ordering and transcription of selected facts. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> yes #### Source Details <!-- info: List the sources (one per line) --> <!-- scope: periscope --> RotoWire-FG (https://www.rotowire.com). Wikipedia (https://en.wikipedia.org/wiki/Main_Page) Basketball Reference (https://www.basketball-reference.com) ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Multiple websites` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> None #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> Summaries of basketball games (in the NBA). #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> not validated #### Data Preprocessing <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) --> <!-- scope: microscope --> It retains the original tokenization scheme employed by Wang 2019 #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> manually #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> Games from the 2014 through 2018 seasons were selected. Within these seasons games are not filtered, all are present, but this was an arbitrary solution from the original RotoWirte-FG dataset. ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> The dataset consits of a pre-existing dataset, as well as publically available facts. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> unlikely #### Categories of PII <!-- info: What categories of PII are present or suspected in the data? --> <!-- scope: periscope --> `generic PII` #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> yes #### Links and Summaries of Analysis Work <!-- info: Provide links to and summaries of works analyzing these biases. --> <!-- scope: microscope --> Unaware of any work, but, this is a dataset considting solely of summaries of mens professional basketball games. It does not cover different levels of the sport, or different genders, and all pronouns are likely to be male unless a specific player is referred to by other pronouns in the training text. This makes it difficult to train systems where gender can be specified as an attribute, although it is an interesting, open problem that could be investigated using the dataset. #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> No, it is very specifically American English from the sports journalism domain. ## Considerations for Using the Data ### PII Risks and Liability #### Potential PII Risk <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. --> <!-- scope: microscope --> All information relating to persons is of public record. ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `public domain` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `public domain` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> SportSett resolved the major overlap problems of RotoWire, although some overlap is unavoidable. For example, whilst it is not possible to find career totals and other historic information for all players (the data only goes back to 2014), it is possible to do so for some players. It is unavoidable that some data which is aggregated, exists in its base form in previous partitions. The season-based partition scheme heavily constrains this however. #### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> Factual accuray continues to be a problem, systems may incorrectly represent the facts of the game. #### Discouraged Use Cases <!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public. --> <!-- scope: microscope --> Using the RG metric to maximise the number of true facts in a generate summary is not nececeraly
false
# Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. ### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
false
# Dataset Card for the RegIR datasets ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://archive.org/details/eacl2021_regir_datasets - **Repository:** https://archive.org/details/eacl2021_regir_datasets - **Paper:** https://arxiv.org/abs/2101.10726 - **Leaderboard:** N/A - **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr) ### Dataset Summary The European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years). Here, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground truth assumption that a UK law is relevant to the EU directives it transposes and vice versa. ### Supported Tasks and Leaderboards The dataset supports: **EU2UK** (`eu2uk`): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*). **UK2EU** (`uk2eu`): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are being transposed by the UK regulations (*Q*). ### Languages All documents are written in English. ## Dataset Structure ### Data Instances ```json { "document_id": "31977L0794", "publication_year": "1977", "text": "Commission Directive 77/794/EEC ... of agricultural levies and customs duties", "relevant_documents": ["UKPGA19800048", "UKPGA19770036"] } ``` ### Data Fields The following data fields are provided for query documents (`train`, `dev`, `test`): `document_id`: (**str**) The ID of the document.\ `publication_year`: (**str**) The publication year of the document.\ `text`: (**str**) The text of the document.\ `relevant_documents`: (**List[str]**) The list of relevant documents, as represented by their `document_id`. The following data fields are provided for corpus documents (`corpus`): `document_id`: (**str**) The ID of the document.\ `publication_year`: (**str**) The publication year of the document.\ `text`: (**str**) The text of the document.\ ### Data Splits #### EU2UK dataset | Split | No of Queries | Avg. relevant documents | | ------------------- | ------------------------------------ | --- | | Train | 1,400 | 1.79 | |Development | 300 | 2.09 | |Test | 300 | 1.74 | Document Pool (Corpus): 52,515 UK regulations #### UK2EU dataset | Split | No of Queries | Avg. relevant documents | | ------------------- | ------------------------------------ | --- | | Train | 1,500 | 1.90 | |Development | 300 | 1.46 | |Test | 300 | 1.29 | Document Pool (Corpus): 3,930 EU directives ## Dataset Creation ### Curation Rationale The dataset was curated by Chalkidis et al. (2021).\ The transposition pairs are publicly available by the Publications Office of EU (https://publications.europa.eu/en). ### Source Data #### Initial Data Collection and Normalization The original data are available at EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.\ The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).\ For more information on the dataset curation, read Chalkidis et al. (2021). #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process * The original data are available at EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format. * The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql). #### Who are the annotators? Publications Office of EU (https://publications.europa.eu/en) ### Personal and Sensitive Information The dataset does not include personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Chalkidis et al. (2021) ### Licensing Information **EU Data** © European Union, 1998-2021 The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes. The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence​​ . This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \ Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html **UK Data** You are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions. You are free to: - copy, publish, distribute and transmit the Information; - adapt the Information; - exploit the Information commercially and non-commercially for example, by combining it with other Information, or by including it in your own product or application. You must (where you do any of the above): acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/. ### Citation Information *Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.* *Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations* *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021* ``` @inproceedings{chalkidis-etal-2021-regir, title = "Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations", author = "Chalkidis, Ilias and Fergadiotis, Manos and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2101.10726", } ``` ### Contributions Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
true
# Dataset Card for MultiBooked ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://hdl.handle.net/10230/33928 - **Repository:** https://github.com/jerbarnes/multibooked - **Paper:** https://arxiv.org/abs/1803.08614 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification. The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in Kaf/Naf format, which is an xml-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the guidelines set out in the OpeNER project. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Each sub-dataset is monolingual in the languages: - ca: Catalan - eu: Basque ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `text`: layer of the original text. - `wid`: list of word IDs for each word within the example. - `sent`: list of sentence IDs for each sentence within the example. - `para`: list of paragraph IDs for each paragraph within the example. - `word`: list of words. - `terms`: layer of the terms resulting from the analysis of the original text (lemmatization, morphological, PoS tagging) - `tid`: list of term IDs for each term within the example. - `lemma`: list of lemmas. - `morphofeat`: list of morphological features. - `pos`: list of PoS tags. - `target`: list of sublists of the corresponding word IDs (normally, the sublists contain only one element, in a one-to-one correspondence between words and terms). - `opinions`: layer of the opinions in the text. - `oid`: list of opinion IDs - `opinion_holder_target`: list of sublists of the corresponding term IDs that span the opinion holder. - `opinion_target_target`: list of sublists of the corresponding term IDs that span the opinion target. - `opinion_expression_polarity`: list of the opinion expression polarities. The polarity can take one of the values: `StrongNegative`, `Negative`, `Positive`, or `StrongPositive`. - `opinion_expression_target`: list of sublists of the corresponding term IDs that span the opinion expression. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Dataset is under the [CC-BY 3.0](https://creativecommons.org/licenses/by/3.0/) license. ### Citation Information ``` @inproceedings{Barnes2018multibooked, author={Barnes, Jeremy and Lambert, Patrik and Badia, Toni}, title={MultiBooked: A corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)}, year = {2018}, month = {May}, date = {7-12}, address = {Miyazaki, Japan}, publisher = {European Language Resources Association (ELRA)}, language = {english} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
true
# Dataset Card for WMT20 - MultiLingual Quality Estimation (MLQE) Task2 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WMT20 Quality Estimation Shared Task](http://www.statmt.org/wmt20/quality-estimation-task.html) - **Repository**: [Github repository](https://github.com/deep-spin/deep-spin.github.io/tree/master/docs/data/wmt2020_qe) - **Paper:** *Not available* ### Dataset Summary From the homepage: *This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.* *Task 1 evaluates the application of QE for post-editing purposes. It consists of predicting:* - ***Word-level tags.*** *This is done both on source side (to detect which words caused errors) and target side (to detect mistranslated or missing words).* - ***Target.*** *Each token is tagged as either `OK` or `BAD`. Additionally, each gap between two words is tagged as `BAD` if one or more missing words should have been there, and `OK` otherwise. Note that number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence.* - ***Source.*** *Tokens are tagged as `OK` if they were correctly translated, and `BAD` otherwise. Gaps are not tagged.* - ***Sentence-level HTER scores.*** *HTER (Human Translation Error Rate) is the ratio between the number of edits (insertions/deletions/replacements) needed and the reference translation length.* ### Supported Tasks and Leaderboards From the homepage: *For sentence-level QE, submissions are evaluated in terms of the Pearson's correlation metric for the sentence-level HTER prediction. For word-level QE, they will be evaluated in terms of MCC ([Matthews correlation coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)). These are the [official evaluation scripts](https://github.com/sheffieldnlp/qe-eval-scripts).* ### Languages There are two language pairs in this dataset: - English - German (`en` - `de`) - German - Chinese (`en` - `zh`) ## Dataset Structure ### Data Instances An example looks like this: ``` { 'translation': { 'en': 'favorite fish include cod , salmon , winter flounder , haddock , striped bass , pollock , hake , bluefish , and , in southern New England , Tautog .', 'de': 'zu den Lieblingsfischen gehören Kabeljau , Lachs , Winterflounder , Schellfisch , gestreifter Bass , Pollock , Seehecht , Rotbarsch und in Südengland Tautog .', } 'src_tags': [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1], 'mt_tags': [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1], 'pe': 'zu den Lieblingsfischen zählen Kabeljau , Lachs , Winterflunder , Schellfisch , Wolfsbarsch , Pollock , Seehecht , Bluefish und im Süden Neuenglands Tautog .', 'hter': 0.3199999928474426, 'alignments': [[2, 0], [2, 1], [2, 3], [3, 2], [3, 4], [4, 5], [5, 6], [6, 5], [7, 6], [8, 6], [9, 7], [10, 8], [10, 10], [11, 9], [12, 12], [13, 13], [14, 11], [15, 12], [15, 15], [16, 14], [17, 17], [19, 16], [20, 16], [21, 20], [22, 18], [23, 19], [23, 21], [24, 22], [25, 21], [26, 22], [27, 22], [28, 23], [29, 24]], } ``` ### Data Fields - `translation`: Dictionary with pairs (source,target). - src_lg: sequence of text in source language. - tgt_lg: sequence of text in target language. - `src_tags`: source word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test). - `mt_tags`: target word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test). - `pe`: post-edited version of NMT output. `""` if N/A (only for test). - `hter`: human translation error rate. `-10_000` if N/A (only for test). - `alignments`: Word aligments. List of pairs of integers. ### Data Splits There are 2 configurations in this dataset (one for each available language pair). Each configuration is composed of 7K examples for training, 1K for validation and 1K for (blind) test. ## Dataset Creation ### Curation Rationale The original text is extracted from Wikipedia. From the homepage: *Word-level labels have been obtained by using the alignments provided by the [TER](http://www.cs.umd.edu/~snover/tercom/) tool (settings: tokenised, case insensitive, exact matching only, disabling shifts by using the `-d 0` option) between machine translations and their post-edited versions. Shifts (word order errors) were not annotated as such (but rather as deletions + insertions) to avoid introducing noise in the annotation.* *HTER values are obtained deterministically from word-level tags. However, when computing HTER, we allow shifts in TER.* *The baseline system is a neural predictor-estimator approach implemented in [OpenKiwi](https://github.com/Unbabel/OpenKiwi) ([Kepler at al., 2019](https://arxiv.org/abs/1902.08646)), where the predictor model will be trained on the parallel data used to train the NMT model.* ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` Not available. ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
true
# Offensive language dataset of Croatian comments FRENK 1.0 Croatian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page** https://nl.ijs.si/frenk/ ## Description of the original dataset >The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. > >The data in each language (Croatian (hr), English (en), Slovenian (sl), and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian LGBT: 4494 training comments, 1142 comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the Croatian data was used. Training segment has been split into beginning 90% (published here as training split) and end 10% (published here as dev split). Test segment has been preserved in its original form. ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-hr","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-hr","multiclass"). ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to lgbt or migrants hate-speech domains * `label`: label of the text instance, see above. ## Data instance ``` {'text': 'Potpisujem komentar g ankice pavicic', 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
false
# Dataset Card for ogbg-molhiv ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://ogb.stanford.edu/docs/graphprop/#ogbg-mol)** - **[Repository](https://github.com/snap-stanford/ogb):**: - **Paper:**: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) - **Leaderboard:**: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv) ### Dataset Summary The `ogbg-molhiv` dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark. ### Supported Tasks and Leaderboards `ogbg-molhiv` should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC. The associated leaderboards are here: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv). ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader ogbg_molhiv = load_dataset("graphs-datasets/ogbg-molhiv") # For the train set (replace by valid or test as needed) ogbg_molhiv_pg_list = [Data(graph) for graph in ogbg_molhiv["train"]] ogbg_molhiv_pg = DataLoader(ogbg_molhiv_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | small | | #graphs | 41,127 | | average #nodes | 25.5 | | average #edges | 27.5 | | average node degree | 2.2 | | average cluster coefficient | 0.002 | | MaxSCC ratio | 0.993 | | graph diameter | 12.0 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be found back using ```python from ogb.graphproppred import PygGraphPropPredDataset dataset = PygGraphPropPredDataset(name = 'ogbg-molhiv') split_idx = dataset.get_idx_split() train = dataset[split_idx['train']] # valid, test ``` ## Additional Information ### Licensing Information The dataset has been released under MIT license. ### Citation Information ``` @inproceedings{hu-etal-2020-open, author = {Weihua Hu and Matthias Fey and Marinka Zitnik and Yuxiao Dong and Hongyu Ren and Bowen Liu and Michele Catasta and Jure Leskovec}, editor = {Hugo Larochelle and Marc Aurelio Ranzato and Raia Hadsell and Maria{-}Florina Balcan and Hsuan{-}Tien Lin}, title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs}, booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, year = {2020}, url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html}, } ``` ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
true
# Health Advice ## Dataset Description - **Paper:** https://experts.syr.edu/en/publications/detecting-causal-language-use-in-science-findings ### Dataset Summary This is the dataset use in the paper: Detecting Causal Language Use in Science Findings. It was cleaned and formated to fit into the alpaca template. ### Citation Information ``` @inproceedings{yu-etal-2019-detecting, title = "Detecting Causal Language Use in Science Findings", author = "Yu, Bei and Li, Yingya and Wang, Jun", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D19-1473", doi = "10.18653/v1/D19-1473", pages = "4664--4674", } ```
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://zil.ipipan.waw.pl/Scwad/CDSCorpus - **Repository:** - **Paper:** @inproceedings{wroblewska2017polish, title={Polish evaluation dataset for compositional distributional semantics models}, author={Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna}, booktitle={Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={784--792}, year={2017} } - **Leaderboard:** https://klejbenchmark.com/leaderboard/ - **Point of Contact:** alina@ipipan.waw.pl ### Dataset Summary Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - pair_ID: id of sentences pairs - sentence_A: first sentence - sentence_B: second sentence for cdsc-e domain: - entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT' for cdsc-r domain: - relatedness_score: float representing a reletedness ### Data Splits Data is splitted in train/dev/test split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
false
# Dataset Card for KorNER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/kmounlp/NER) - **Repository:** [Github](https://github.com/kmounlp/NER) - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Each row consists of the following fields: - `text`: The full text, as is - `annot_text`: Annotated text including POS-tagged information - `tokens`: An ordered list of tokens from the full text - `pos_tags`: Part-of-speech tags for each token - `ner_tags`: Named entity recognition tags for each token Note that by design, the length of `tokens`, `pos_tags`, and `ner_tags` will always be identical. `pos_tags` corresponds to the list below: ``` ['SO', 'SS', 'VV', 'XR', 'VCP', 'JC', 'VCN', 'JKB', 'MM', 'SP', 'XSN', 'SL', 'NNP', 'NP', 'EP', 'JKQ', 'IC', 'XSA', 'EC', 'EF', 'SE', 'XPN', 'ETN', 'SH', 'XSV', 'MAG', 'SW', 'ETM', 'JKO', 'NNB', 'MAJ', 'NNG', 'JKV', 'JKC', 'VA', 'NR', 'JKG', 'VX', 'SF', 'JX', 'JKS', 'SN'] ``` `ner_tags` correspond to the following: ``` ["I", "O", "B_OG", "B_TI", "B_LC", "B_DT", "B_PS"] ``` The prefix `B` denotes the first item of a phrase, and an `I` denotes any non-initial word. In addition, `OG` represens an organization; `TI`, time; `DT`, date, and `PS`, person. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
true
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://www.inf.ufrgs.br/~rppelle/hatedetector/ - **Repository:** https://github.com/rogersdepelle/OffComBR - **Paper:** https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary OffComBR: an annotated dataset containing for hate speech detection in Portuguese composed of news comments on the Brazilian Web. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
false
# Dataset Card for Stack Exchange ## Table of Contents - [Dataset Card for Stack Exchange](#dataset-card-for-the_pile_stack_exchange) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [|split|num examples|](#splitnum-examples) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/EleutherAI/stackexchange-dataset) - **Repository:** [Needs More Information] - **Paper:** [arXiv](https://arxiv.org/abs/2101.00027) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset is part of EleutherAI/The Pile dataset and is a dataset for Language Models from processing stackexchange data dump, which is an anonymized dump of all user-contributed content on the Stack Exchange network. |download_size|34.28 Gib| |dataset_size|10.3 Gib| ### Supported Tasks and Leaderboards The dataset is used for Language Modeling. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances ``` {'domain': 'chemistry', 'text':"\nQ: \n \nReviving old questions or asking a new one? \n \nI'm relatively new to the Chemistry SE community, and sometimes when I go to ask a question, I notice that the same (or similar) question has \nalready been asked. However, the previous question doesn't have a good answer (or is unanswered). In this case, is it better to ask the questi\non again in a new post (which might be marked as duplicate) or comment on the old post (which might be several years old)? In other words, wha\nt are the customs of this site in regards to reviving old questions/discussions?\n\nA:\n\nAs Martin commented, it really depends on the type of question. In any case, you always have the following possibilities:\n\nAsk a new question\nEdit the question to bump it to the first page\nAdd a bounty\nBring it to the attention of people in chat\n\nConsider the following cases:\n\nI have exactly the same question as asked and unanswered before!\n\nIf you ask a new question which turns out to be the same question, it may be closed as a dupe (depending on whether users remember the old que\nstion). Not the ideal option.\nIf you can find something substantial to edit and bump the question, do so. Maybe add a comment that you would really love an answer.\nIf you can spare some rep for a bounty (50 is usually enough), do so.\nYou can always bring it to the attention of people in chat.\n",} ``` ### Data Fields - `domain`: Stack Exchange domain of the sample - `text`: Text content containing both the question and the answer ### Data Splits |split|num examples| -------------------------------- |train|5096117| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ``` ### Contributions Thanks to [sdtblck](https://github.com/sdtblck) for creating the dataset. Thanks to [richarddwang](https://github.com/richarddwang) for adding the dataset.
true
# Dataset Card for MOROCO ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/butnaruandrei/MOROCO) - **Repository:** [Github](https://github.com/butnaruandrei/MOROCO) - **Paper:** [Arxiv](https://arxiv.org/abs/1901.06543) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [email](raducu.ionescu@gmail.com) ### Dataset Summary Introducing MOROCO - The **Mo**ldavian and **Ro**manian Dialectal **Co**rpus. The MOROCO data set contains Moldavian and Romanian samples of text collected from the news domain. The samples belong to one of the following six topics: (0) culture, (1) finance, (2) politics, (3) science, (4) sports, (5) tech. The corpus features a total of 33,564 samples labelled with one of the fore mentioned six categories. We are also including a train/validation/test split with 21,719/5,921/5,924 samples in each subset. ### Supported Tasks and Leaderboards [LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/) ### Languages The text dataset is in Romanian (`ro`) ## Dataset Structure ### Data Instances Below we have an example of sample from MOROCO: ``` {'id': , '48482', 'category': 2, 'sample': '“$NE$ cum am spus, nu este un sfârşit de drum . Vom continua lupta cu toate instrumentele şi cu toate mijloacele legale, parlamentare şi civice pe care le avem la dispoziţie . Evident că vom contesta la $NE$ această lege, au anunţat şi colegii de la $NE$ o astfel de contestaţie . Practic trebuie utilizat orice instrument pe care îl identificăm pentru a bloca intrarea în vigoare a acestei legi . Bineînţeles, şi preşedintele are punctul său de vedere . ( . . . ) $NE$ legi sunt împănate de motive de neconstituţionalitate . Colegii mei de la departamentul juridic lucrează în prezent pentru a definitiva textul contestaţiei”, a declarat $NE$ $NE$ citat de news . ro . Senatul a adoptat, marţi, în calitate de for decizional, $NE$ privind statutul judecătorilor şi procurorilor, cu 80 de voturi ”pentru” şi niciun vot ”împotrivă”, în condiţiile în care niciun partid din opoziţie nu a fost prezent în sală .', } ``` where 48482 is the sample ID, followed by the category ground truth label, and then the text representing the actual content to be classified by topic. Note: The category label has integer values ranging from 0 to 5. ### Data Fields - `id`: string, the unique indentifier of a sample - `category_label`: integer in the range [0, 5]; the category assigned to a sample. - `sample`: a string, news report to be classified / used in classification. ### Data Splits The train/validation/test split contains 21,719/5,921/5,924 samples tagged with the category assigned to each sample in the dataset. ## Dataset Creation ### Curation Rationale The samples are preprocessed in order to eliminate named entities. This is required to prevent classifiers from taking the decision based on features that are not related to the topics. For example, named entities that refer to politicians or football players names can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543). ### Source Data #### Data Collection and Normalization For the data collection, five of the most popular news websites in Romania and the Republic of Moldova were targetted. Given that the data set was obtained through a web scraping technique, all the HTML tags needed to be removed, as well as replace consecutive white spaces with a single space. As part of the pre-processing, we remove named entities, such as country names, cities, public figures, etc. The named entities have been replaced with $NE$. The necessity to remove them, comes also from the scope of this dataset: categorization by topic. Thus, the authors decided to remove named entities in order to prevent classifiers from taking the decision based on features that are not truly indicative of the topics. #### Who are the source language producers? The original text comes from news websites from Romania and the Republic of Moldova. ### Annotations #### Annotation process As mentioned above, MOROCO is composed of text samples from the top five most popular news websites in Romania and the Republic of Moldova, respectively. Since there are topic tags in the news websites targetd, the text samples can be automatically labeled with the corresponding category. #### Who are the annotators? N/A ### Personal and Sensitive Information The textual data collected for MOROCO consists in news reports freely available on the Internet and of public interest. To the best of authors' knowledge, there is no personal or sensitive information that needed to be considered in the said textual inputs collected. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures. In the past three years there was a growing interest for studying Romanian from a Computational Linguistics perspective. However, we are far from having enough datasets and resources in this particular language. ### Discussion of Biases The data included in MOROCO spans a well defined time frame of a few years. Part of the topics that were of interest then in the news landscape, might not show up nowadays or a few years from now in news websites. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Published and managed by Radu Tudor Ionescu and Andrei Butnaru. ### Licensing Information CC BY-SA 4.0 License ### Citation Information ``` @inproceedings{ Butnaru-ACL-2019, author = {Andrei M. Butnaru and Radu Tudor Ionescu}, title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}", booktitle = {Proceedings of ACL}, year = {2019}, pages={688--698}, } ``` ### Contributions Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset.
true
# Dataset Card for `prachathai67k` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/prachathai-67k - **Repository:** https://github.com/PyThaiNLP/prachathai-67k - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb). This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**: * `การเมือง` - politics * `สิทธิมนุษยชน` - human_rights * `คุณภาพชีวิต` - quality_of_life * `ต่างประเทศ` - international * `สังคม` - social * `สิ่งแวดล้อม` - environment * `เศรษฐกิจ` - economics * `วัฒนธรรม` - culture * `แรงงาน` - labor * `ความมั่นคง` - national_security * `ไอซีที` - ict * `การศึกษา` - education ### Supported Tasks and Leaderboards multi-label text classification, language modeling ### Languages Thai ## Dataset Structure ### Data Instances {'body_text': '17 พ.ย. 2558 Blognone [1] รายงานว่า กลุ่มแฮคเกอร์ Anonymous ประกาศสงครามไซเบอร์กับกลุ่มหัวรุนแรงหลังจากกลุ่ม IS ออกมาประกาศว่าเป็นผู้อยู่เบื้องหลังการโจมตีกรุงปารีสในคืนวันศุกร์ที่ผ่านมา\n\n\nภาพในคลิปใน YouTube โฆษกของกลุ่มแฮคเกอร์สวมหน้ากากที่เป็นสัญลักษณ์ของกลุ่มได้ออกมาอ่านแถลงเป็นภาษาฝรั่งเศส มีใจความว่า จากการโจมตีของกลุ่ม IS ในกรุงปารีส กลุ่ม Anonymous ทั่วโลกจะตามล่ากลุ่ม IS เหมือนที่เคยทำตอนที่มีการโจมตีสำนักพิมพ์ Charlie Hebdo และครั้งนี้จะเป็นปฏิบัติการโจมตีครั้งใหญ่ที่สุดของกลุ่ม Anonymous เลย นอกจากนี้กลุ่ม Anonymous ยังแสดงความเสียใจต่อครอบครัวผู้สูญเสียในเหตุการณ์ครั้งนี้\nกลุ่ม Anonymous เคยประกาศสงครามกับกลุ่ม IS หลังจากการโจมตีสำนักพิมพ์ Charlie Hebdo ที่ฝรั่งเศสเมื่อต้นปีที่ผ่านมา ซึ่งครั้งนั้นกลุ่ม Anonymous อ้างว่าได้ระงับบัญชีผู้ใช้งานที่เกี่ยวข้องกับ IS ไปหลายพันบัญชี (อ่านรายละเอียดเพิ่มเติม จากBlognone ที่\xa0\xa0กลุ่มแฮคเกอร์ Anonymous ประกาศสงครามไซเบอร์ขอกวาดล้างพวก ISIS [2])', 'culture': 0, 'date': '2015-11-17 18:14', 'economics': 0, 'education': 0, 'environment': 0, 'human_rights': 0, 'ict': 1, 'international': 1, 'labor': 0, 'national_security': 0, 'politics': 0, 'quality_of_life': 0, 'social': 0, 'title': 'แฮคเกอร์ Anonymous ลั่นทำสงครามไซเบอร์ครั้งใหญ่สุดกับกลุ่ม IS', 'url': 'https://prachatai.com/print/62490'} {'body_text': 'แถลงการณ์\n\n\xa0\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์\n\n\xa0\n\nมหาวิทยาลัยธรรมศาสตร์ก่อตั้งขึ้นภายใต้แนวคิดการให้การศึกษากับประชาชนเพื่อสนับสนุนการปกครองระบอบประชาธิปไตย อีกทั้งยังเป็นสถาบันหนึ่งที่อยู่เคียงข้างประชาชนมาโดยตลอด\n\n\xa0\n\nสถานการณ์สังคมไทยปัจจุบันได้เกิดความขัดแย้งทางการเมือง ทางแนวคิด จนลุกลามเป็นวิกฤตการณ์อันหาทางออกได้ยากยิ่ง องค์กรนักศึกษามหาวิทยาลัยธรรมศาสตร์ขอร้องเรียนและเสนอแนะต่อทุกฝ่าย โดยยึดหลักแนวทางตามรัฐธรรมนูญแห่งราชอาณาจักรไทย พ.ศ. ๒๕๕๐ อันเป็นกฎหมายสูงสุดในการจัดการปกครองรัฐ ที่มีผลบังคับใช้อยู่ในปัจจุบันซึ่งผ่านการประชามติจากปวงชนชาวไทยเมื่อวันที่ ๑๙ สิงหาคม พ.ศ. ๒๕๕๐ แล้วดังต่อนี้\n\n\xa0\n\n๑.การชุมชมโดยสงบและปราศจากอาวุธย่อมได้รับการคุ้มครองตามรัฐธรรมนูญ แต่หากการชุมนุมและเคลื่อนไหวของกลุ่มใดๆ มีการละเมิดสิทธิและเสรีภาพของผู้อื่นหรือก่อให้เกิดความเสียหายต่อชีวิตและทรัพย์สินของบุคคลและส่วนรวมนั้น ไม่สามารถกระทำได้ การใช้ความรุนแรง การกระทำอุกอาจต่างๆ ทั้งต่อบุคคลและทรัพย์สิน การยั่วยุ ปลุกระดมเพื่อหวังผลในการปะทะต่อสู้ จึงควรได้รับการกล่าวโทษ\n\n\xa0\n\nดังนั้นทั้งกลุ่มพันธมิตรประชาชนเพื่อประชาธิปไตย (พธม.) และกลุ่มแนวร่วมประชาธิปไตยไม่เอาเผด็จการแห่งชาติ (นปช.) จึงควรยอมรับกระบวนการตามกฎหมาย และหากถูกกล่าวหาไม่ว่ากรณีใดๆ ก็ควรพิสูจน์ความบริสุทธิ์โดยใช้กระบวนการยุติธรรม และหากจะยังชุมนุมต่อไปก็ยังคงทำได้ภายใต้บทบัญญัติแห่งกฎหมาย\n\n\xa0\n\nองค์กรนักศึกษามหาวิทยาลัยธรรมศาสตร์ จึงร้องขอให้หน่วยงานต่างๆ ที่เกี่ยวข้องดำเนินการตามกระบวนการทางกฎหมายกับการกระทำที่ผิดบทบัญญัติแห่งกฎหมายที่ทุกฝ่ายได้กระทำไป\n\n\xa0\n\n๒.นายสมัคร สุนทรเวช นายกรัฐมนตรี ไม่มีความเหมาะสมในการบริหารราชการแผ่นดินขาดหลักธรรมาภิบาล แต่ทั้งนี้นายสมัคร สุนทรเวช ยังคงยืนยันและกล่าวอ้างความชอบธรรมตามระบอบประชาธิปไตยภายใต้รัฐธรรมนูญ โดยไม่คำนึงถึงกระแสเรียกร้องใดๆ อันส่งผลให้ความขัดแย้งทางสังคมยิ่งบานปลายจนกลายเป็นวิกฤตการณ์เช่นปัจจุบัน ซึ่งก่อให้เกิดความเสียหายต่อประเทศแนวโน้มจะคลี่คลาย\n\n\xa0\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์ จึงเห็นว่า ควรใช้สิทธิตามรัฐธรรมนูญแห่งราชอาณาจักรไทย พุทธศักราช ๒๕๕๐ มาตรา ๑๖๔ โดยการเข้าชื่อเพื่อร้องต่อประธานวุฒิสภาเพื่อให้มีมติตามมาตรา ๒๗๔ ให้ถอดถอนนายสมัคร สุนทรเวช ออกจากตำแหน่งนายกรัฐมนตรีตามมาตรา ๒๗๐ ณ ลานโพ มหาวิทยาลัยธรรมศาสตร์ ท่าพระจันทร์ อาคารเรียนรวมสังคมศาสตร์ อาคารปิยชาติ และตึกกิจกรรมนักศึกษา มหาวิทยาลัยธรรมศาสตร์ ศูนย์รังสิต\n\n\xa0\n\n\xa0\n\nด้วยความสมานฉันท์\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์', 'culture': 0, 'date': '2008-09-06 03:36', 'economics': 0, 'education': 0, 'environment': 0, 'human_rights': 0, 'ict': 0, 'international': 0, 'labor': 0, 'national_security': 0, 'politics': 1, 'quality_of_life': 0, 'social': 0, 'title': 'แถลงการณ์ อมธ.แนะใช้สิทธิ ตาม รธน.เข้าชื่อร้องต่อประธานวุฒิสภาถอดถอน "สมัคร" จากตำแหน่งนายกฯ', 'url': 'https://prachatai.com/print/18038'} ### Data Fields - `url`: url of the article - `date`: date the article was published - `title`: title of the article - `body_text`: body text of the article - `politics`: 1 if sample has this tag else 0 - `human_rights`: 1 if sample has this tag else 0 - `quality_of_life`: 1 if sample has this tag else 0 - `international`: 1 if sample has this tag else 0 - `social`: 1 if sample has this tag else 0 - `environment`: 1 if sample has this tag else 0 - `economics`: 1 if sample has this tag else 0 - `culture`: 1 if sample has this tag else 0 - `labor`: 1 if sample has this tag else 0 - `national_security`: 1 if sample has this tag else 0 - `ict`: 1 if sample has this tag else 0 - `education`: 1 if sample has this tag else 0 ### Data Splits | | train | valid | test | |-------------------|-------|--------|------| | # articles | 54379 | 6721 | 6789 | | politics | 31401 | 3852 | 3842 | | human_rights | 12061 | 1458 | 1511 | | quality_of_life | 9037 | 1144 | 1127 | | international | 6432 | 828 | 834 | | social | 6321 | 782 | 789 | | environment | 6157 | 764 | 772 | | economics | 3994 | 487 | 519 | | culture | 3279 | 388 | 398 | | labor | 2905 | 375 | 350 | | national_security | 2865 | 339 | 338 | | ict | 2326 | 285 | 292 | | education | 2093 | 248 | 255 | ## Dataset Creation ### Curation Rationale The data was scraped from the news site [Prachathai](prachathai.com) from August 24, 2004 to November 15, 2018. The initial intention was to use the dataset as a benchmark for Thai text classification. Due to the size of the dataset, it can also be used for language modeling. ### Source Data #### Initial Data Collection and Normalization 67,889 articles wtih 51,797 tags were scraped from the news site [Prachathai](prachathai.com) from August 24, 2004 to November 15, 2018. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. #### Who are the source language producers? Prachathai.com ### Annotations #### Annotation process Tags are annotated for the news website Prachathai.com #### Who are the annotators? We assume that the reporters who wrote the articles or other Prachathai staff gave each article its tags. ### Personal and Sensitive Information We do not expect any personal and sensitive information to be present since all data are public news articles. ## Considerations for Using the Data ### Social Impact of Dataset - classification benchmark for multi-label Thai text classification ### Discussion of Biases Prachathai.com is a left-leaning, human-right-focused news site, and thus unusual news labels such as human rights and quality of life. The news articles are expected to be left-leaning in contents. ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators PyThaiNLP ### Licensing Information CC-BY-NC ### Citation Information @misc{prachathai67k, author = {cstorm125, lukkiddd }, title = {prachathai67k}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished={\\url{https://github.com/PyThaiNLP/prachathai-67k}}, } ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
false
# Dataset Card for [Danish Gigaword (no Twitter)] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://gigaword.dk - **Paper:** http://www.derczynski.com/papers/dagw.pdf ### Dataset Summary The Danish Gigaword Corpus contains text spanning several domains and forms. This version does *not* include the sections containing Tweets. ### Supported Tasks and Leaderboards Pre-training of language models. ### Language Danish ## Dataset Structure The dataset contains text from 23 different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](http://www.derczynski.com/papers/dagw.pdf) for more information. ### Data Instances Each entry in the dataset consists of a single text with associated metadata ### Data Fields An entry in the dataset consists of the following fields: - `text`(`str`): The content of the document. - `source` (`str`): The source of the document (see [Source Data](#source-data)). - `doc_id` (`str`): An unique identifer for each document. - `LICENSE` (`str`): The license of the document. The licenses vary according to the source. - `uri` (`str`): The uri of the document. Not available for all sources. - `data_built`(`str`): Date the document was built. Not avaialable for all sources. ### Data Splits The entire corpus is provided in the `train` split. ## Dataset Creation ### Source Data Below follows a brief overview of the sources in the corpus along with their individual license. | Source | License | | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | adl | Creative Commons Legal Code 1.0 Universal | | botxt | Creative Commons Legal Code 1.0 Universal | | cc | Creative Commons Legal Code 1.0 Universal | | danavis | Creative Commons Legal Code 1.0 Universal | | dannet | [dannet license](https://cst.ku.dk/projekter/dannet/license.txt) | | depbank | Attribution-ShareAlike 4.0 International | | ep | Creative Commons Legal Code 1.0 Universal | | ft | Creative Commons Legal Code 1.0 Universal | | gutenberg | [gutenberg license](https://www.gutenberg.org/policy/license.html) | | hest | Creative Commons Legal Code 1.0 Universal | | jvj | Attribution-ShareAlike 4.0 International | | naat | Creative Commons Legal Code 1.0 Universal | | opensub | The data set comes with the same license as the original sources. Please, check the information about the source that is given on http://opus.nlpl.eu/OpenSubtitles-v2018.php | | relig | Creative Commons Legal Code 1.0 Universal | | retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." | | retspraksis | Creative Commons Legal Code 1.0 Universal | | skat | Creative Commons Legal Code 1.0 Universal | | spont | Creative Commons Legal Code 1.0 Universal | | synne | Creative Commons Legal Code 1.0 Universal | | tv2r | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International | | wiki | Creative Commons Legal Code 1.0 Universal | | wikibooks | Creative Commons Legal Code 1.0 Universal | | wikisource | Creative Commons Legal Code 1.0 Universal | These sources corresponds to the following top-level domains in the dataset: ```python # mapping from domain to top-level domain domain_mapping_dict = { "retsinformationdk": "Legal", "skat": "Legal", "retspraksis": "Legal", "hest": "Social Media", "cc": "Web", "adl": "Wiki & Books", "botxt": "Other", "danavis": "News", "dannet": "dannet", "depbank": "Other", "ep": "Conversation", "ft": "Conversation", "gutenberg": "Wiki & Books", "jvj": "Wiki & Books", "naat": "Conversation", "opensub": "Conversation", "relig": "Wiki & Books", "spont": "Conversation", "synne": "Other", "tv2r": "News", "wiki": "Wiki & Books", "wikibooks": "Wiki & Books", "wikisource": "Wiki & Books", "twfv19": "Social Media", # not present in this version of the dataset } ``` And the following mapping translates between the short form and the long form of the source name ```python # mapping from domain to its long name format longname_mapping_dict = { "retsinformationdk": "retsinformation.dk (Danish legal information)", "skat": "Skat (Danish tax authority)", "retspraksis": "retspraksis (Danish legal information)", "hest": "Hestenettet (Danish debate forum)", "cc": "Common Crawl", "adl": " Archive for Danish Literature", "botxt": "Bornholmsk (Danish dialect)", "danavis": "Danish daily newspapers", "dannet": "DanNet (Danish WordNet)", "depbank": "Danish Dependency Treebank", "ep": "European Parliament", "ft": "Folketinget (Danish Parliament)", "gutenberg": "Gutenberg", "jvj": "Johannes V. Jensen (Danish poet)", "naat": "NAAT", "opensub": "Open Subtitles", "relig": "Religious texts", "spont": "Spontaneous speech", "synne": "Synderjysk (Danish dialect)", "tv2r": "TV 2 Radio (Danish news)", "wiki": "Wikipedia", "wikibooks": "Wikibooks", "wikisource": "Wikisource", "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)", # not present in this version of the dataset } ``` ## Additional Information ### Licensing Information If you use the data, you MUST acknowledge it. The license is CC-BY 4.0, Creative Commons with Attribution. ### Citation Information Sample attributions: In a press release: > Modellen er præ-trænet på et datasæt fra The Danish Gigaword Project (https://gigaword.dk), der er udviklet af forskere fra IT-Universitetet i København > The model is pre-trained using the Danish Gigaword Corpus (https://gigaword.dk), developed at the IT University of Copenhagen In academic writing: ``` Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021). @inproceedings{dagw, title = {{The Danish Gigaword Corpus}}, author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab}, year = 2021, booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics}, publisher = {NEALT} } ``` In a software product, tool, or service: > Danish Gigaword Corpus: license - homepage > Denne service er lavet med data fra The Danish Gigaword Corpus ### Contributions Dataset created by Derczynski et al. (2021) Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021). Thanks to [@HLasse](https://github.com/HLasse) for adding this dataset to the Hugging Face Hub.
false
# Dataset Card for GEM/common_gen ## Dataset Description - **Homepage:** https://inklab.usc.edu/CommonGen/ - **Repository:** https://github.com/INK-USC/CommonGen - **Paper:** https://aclanthology.org/2020.findings-emnlp.165 - **Leaderboard:** https://inklab.usc.edu/CommonGen/leaderboard.html - **Point of Contact:** Bill Yuchen Lin ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/common_gen). ### Dataset Summary CommonGen is an English text generation task to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. The dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total. Note that the CommonGen test set is private and requires submission to the external leaderboard. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/common_gen') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/common_gen). #### website [link](https://inklab.usc.edu/CommonGen/) #### paper [Link](https://aclanthology.org/2020.findings-emnlp.165) #### authors Bill Yuchen Lin (USC), Wangchunshu Zhou (USC), Ming Shen (USC), Pei Zhou (USC), Chandra Bhagavatula (AllenAI), Yejin Choi (AllenAI + UW), Xiang Ren (USC) ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [link](https://inklab.usc.edu/CommonGen/) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Link](https://github.com/INK-USC/CommonGen) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [Link](https://aclanthology.org/2020.findings-emnlp.165) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{lin-etal-2020-commongen, title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning", author = "Lin, Bill Yuchen and Zhou, Wangchunshu and Shen, Ming and Zhou, Pei and Bhagavatula, Chandra and Choi, Yejin and Ren, Xiang", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165", pages = "1823--1840", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Bill Yuchen Lin #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> yuchen.lin@usc.edu #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> yes #### Leaderboard Link <!-- info: Provide a link to the leaderboard. --> <!-- scope: periscope --> [Link](https://inklab.usc.edu/CommonGen/leaderboard.html) #### Leaderboard Details <!-- info: Briefly describe how the leaderboard evaluates models. --> <!-- scope: microscope --> The model outputs are evaluated against the crowdsourced references, and ranked by SPICE score. The leaderboard also reports BLEU-4 and CIDEr scores. ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Dialects <!-- info: What dialects are covered? Are there multiple dialects per language? --> <!-- scope: periscope --> No information is provided on regional restrictions and we thus assume that the covered dialects are those spoken by raters on Mechanical Turk. #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> The concepts were extracted from multiple English image captioning datasets and the data was collected via Amazon Mechanical Turk. No information on regional restrictions is provided. #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> mit: MIT License #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Reasoning #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> The speaker is required to produce a *coherent* sentence which mentions all of the source concepts, and which describes a *likely* situation that could be captured in a picture or video. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic`, `independent` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> The dataset was curated by a joint team of researchers from the University of Southern California and Allen Institute for Artificial Intelligence. #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Bill Yuchen Lin (USC), Wangchunshu Zhou (USC), Ming Shen (USC), Pei Zhou (USC), Chandra Bhagavatula (AllenAI), Yejin Choi (AllenAI + UW), Xiang Ren (USC) #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> The research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), the DARPA MCS program, and NSF SMA 18-29268. #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Yacine Jernite created the initial data card. It was later extended by Simon Mille. Sebastian Gehrmann migrated it to the GEMv2 format. ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> A data instance has the following fields: - `concepts`: a `list` of `string` values denoting the concept the system should write about. Has 3 to 5 items, constitutes the `input` of the task. - `target`: a sentence `string` mentioning all of the above mentioned `concepts`. Constitutes the desired `output` of the task. #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` [ { "concepts": ['ski', 'mountain', 'skier'], "target": 'Skier skis down the mountain', }, { "concepts": ['ski', 'mountain', 'skier'], "target": 'Three skiers are skiing on a snowy mountain.', }, ] ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> Each example in the dataset consists of a set of 3 to 5 concepts denoted by a single noun, verb, or adjective (the input), and a sentence using these concepts (the output). The dataset provides several such sentences for each such concept. | | Train | Dev | Test | |---------------------------|--------|-------|-------| | **Total concept-sets** | 32,651 | 993 | 1,497 | | **Total sentences** | 67,389 | 4,018 | 6,042 | |**Average sentence length**| 10.54 | 11.55 | 13.34 | #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> The dev and test set were created by sampling sets of concepts of size 4 or 5 (and as many of size 3 for the dev set) present in the source captioning datasets and having crowd-workers write reference sentences using these concepts. Conversely, the training set has more concept sets of size 3 than of size 4 and 5, and uses the original captions from the source datasets as references. The authors also ensured that the training, dev and test set have different combinations of unique concepts to ensure compositionality (details in [Table 1](https://arxiv.org/pdf/1911.03705v3.pdf)). ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> CommonGen is a medium sized corpus with a unique reasoning challenge and interesting evaluation possibilities. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> no #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Commonsense reasoning ### GEM-Specific Curation #### Modificatied for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to he original dataset? --> <!-- scope: periscope --> `other` #### Modification Details <!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification --> <!-- scope: microscope --> 4 challenge sets for CommenGen were added to the GEM evaluation suite. #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> yes #### Split Information <!-- info: Describe how the new splits were created --> <!-- scope: periscope --> 1. Data Shift We created subsets of the training and development sets of ~500 randomly selected inputs each. 2. Transformations We applied input scrambling on a subset of 500 randomly selected test instances; the order of the concepts was randomly reassigned. 3. Subpopulations We created a subpopulation based on input length, taking into account the number of concepts the input test structures. By comparing inputs of different lengths, we can see to what extent systems are able to handle different input sizes | Concept number | Frequency English | |----------------|-------------------| | 4 | 747 | | 5 | 750 | #### Split Motivation <!-- info: What aspects of the model's generation capacities were the splits created to test? --> <!-- scope: periscope --> Generalization and Robustness ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> - Two variants of [BART](https://arxiv.org/abs/1910.13461), [Knowledge Graph augemnted-BART](https://arxiv.org/abs/2009.12677) and [Enhanced Knowledge Injection Model for Commonsense Generation](https://arxiv.org/abs/2012.00366), hold the top two spots on the leaderboard, followed by a fine-tuned [T5 model](https://arxiv.org/abs/1910.10683). - The following script shows how to download and load the data, fine-tune, and evaluate a model using the ROUGE, BLEU, and METEOR metrics: [GEM sample script](https://github.com/GEM-benchmark/GEM-baseline-models/blob/main/examples/GEM-common_gen.ipynb). ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Commonsense Reasoning #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `Other: Other Metrics`, `BLEU`, `ROUGE`, `METEOR` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> - SPICE: An evaluation metric for image captioning that is defined over scene graphs - CIDEr: An n-gram overlap metric based on cosine similarity between the TF-IDF weighted ngram counts #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> The main metrics are captioning metrics since the original concept lists were extracted from captioning datasets. A human subject study with five graduate students was conducted and they were asked to rank the "commonsense plausibility" of two models at a time. #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> The currently best performing model KFCNet (https://aclanthology.org/2021.findings-emnlp.249/) uses the same automatic evaluation but does not conduct any human evaluation. #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? --> <!-- scope: microscope --> The most relevant results can be seen on the [leaderboard](https://inklab.usc.edu/CommonGen/leaderboard.html) ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The dataset creators selected sets of concepts that appeared in image and video captions (as identified by a POS tagger) to ensure that a likely real-world scenario including the set could be imagined and constructed. Section 3.1 of the [paper](https://arxiv.org/pdf/1911.03705v3.pdf) describes a sampling scheme which encourages diversity of sets while selecting common concepts. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The speaker is required to produce a *coherent* sentence which mentions all of the source concepts, and which describes a *likely* situation that could be captured in a picture or video. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> yes #### Source Details <!-- info: List the sources (one per line) --> <!-- scope: periscope --> - [Flickr30k](https://www.mitpressjournals.org/doi/abs/10.1162/tacl_a_00166) - [MSCOCO](https://link.springer.com/chapter/10.1007/978-3-319-10602-1_48) - [Conceptual Captions](https://www.aclweb.org/anthology/P18-1238/) - Video captioning datasets: - [LSMDC](https://link.springer.com/article/10.1007/s11263-016-0987-1) - [ActivityNet](https://openaccess.thecvf.com/content_iccv_2017/html/Krishna_Dense-Captioning_Events_in_ICCV_2017_paper.html) - [VaTeX](https://openaccess.thecvf.com/content_ICCV_2019/html/Wang_VaTeX_A_Large-Scale_High-Quality_Multilingual_Dataset_for_Video-and-Language_Research_ICCV_2019_paper.html) ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Crowdsourced` #### Where was it crowdsourced? <!-- info: If crowdsourced, where from? --> <!-- scope: periscope --> `Amazon Mechanical Turk` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> The training data consists of concept sets and captions for the source datasets. The concept sets are the sets of labels of the images or videos, selected with a heuristic to maximize diversity while ensuring that they represent likely scenarios. The dev and test set sentences were created by Amazon Mechanical Turk crowd workers. The workers were shown an example generation and a set of 4 or 5 concept names along with their part-of-speech and asked to write: 1. One sentence mentioning all of the concepts 2. A rationale explaining how the sentence connects the concept A screenshot of the interface is provided in Figure 7 of the [Appendix](https://arxiv.org/pdf/1911.03705v3.pdf). #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> Information was not provided. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by data curator #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> algorithmically #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> During the data collection, workers who provided rationales that were too short, failed to have good coverage of the input in their sentences, or workers whose output had a high perplexity under a GPT-2 model were disqualified from the pool and replaced with newcomers. ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> The data was sourced from Mechanical Turk which means that raters were aware that their annotations may be publicly released for research purposes. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII #### Justification for no PII <!-- info: Provide a justification for selecting `no PII` above. --> <!-- scope: periscope --> The concepts are restricted to verbs, adjectives, and common nouns, and no personal information is given in the captions. ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> The dataset is created using data from image captioning systems and might inherit some of the social biases represented therein (see e.g. [Tang et al. 2020](https://arxiv.org/abs/2006.08315)). Another related concern is the exposure bias introduced by the initial selection of pictures and video, which are likely to over-represent situations that are common in the US at the expense of other parts of the world (Flickr, for example, is a US-based company founded in Canada). For more discussion of the potential impacts of exposure bias, see e.g. [The Social Impact of Natural Language Processing](https://www.aclweb.org/anthology/P16-2096.pdf). ## Considerations for Using the Data ### PII Risks and Liability #### Potential PII Risk <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. --> <!-- scope: microscope --> The concepts are restricted to verbs, adjectives, and common nouns, and no personal information is given in the captions. ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `open license - commercial use allowed` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `open license - commercial use allowed` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> The dataset is in English, a language with an abundance of existing resources. The use of GPT-2 to validate development ant test sentences [might be cause for similar concern](https://www.aclweb.org/anthology/D19-1339.pdf), but we do note that the authors only use the model to discount very high perplexity sequences which is less likely to surface those biases. The language in the development and test set is crowdsourced, which means that it was written by workers whose main goal was speed. This is likely to impact the quality and variety of the targets. The population of crowdsource workers is also not identically distributed as the the base population of the locations the workers come from, which may lead to different representation of situations or underlying expectations of what these situations are. #### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> Due to the overrepresentation of US-situations, the system may not work for users across the world. Moreover, only limited information on the dataset quality are provided and the system may fail as a result of unknown issues. #### Discouraged Use Cases <!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public. --> <!-- scope: microscope --> Any system needs to be evaluated on a broader set of unseen concepts then provided in the dataset. Since the references for the test set are private, it is not known how well findings generalize beyond the collection methodology.
false
# Dataset Card for WikiMovies ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WikiMovies Homepage](https://research.fb.com/downloads/babi/) - **Repository:** - **Paper:** [Key-Value Memory Networks for Directly Reading Documents](https://arxiv.org/pdf/1606.03126.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The WikiMovies dataset consists of roughly 100k (templated) questions over 75k entitiesbased on questions with answers in the open movie database (OMDb). It is the QA part of the Movie Dialog dataset. ### Supported Tasks and Leaderboards - Question Answering ### Languages The text in the dataset is written in English. ## Dataset Structure ### Data Instances The raw data consists of question answer pairs separated by a tab. Here are 3 examples: ```buildoutcfg 1 what does Grégoire Colin appear in? Before the Rain 1 Joe Thomas appears in which movies? The Inbetweeners Movie, The Inbetweeners 2 1 what films did Michelle Trachtenberg star in? Inspector Gadget, Black Christmas, Ice Princess, Harriet the Spy, The Scribbler ``` It is unclear what the `1` is for at the beginning of each line, but it has been removed in the `Dataset` object. ### Data Fields Here is an example of the raw data ingested by `Datasets`: ```buildoutcfg { 'answer': 'Before the Rain', 'question': 'what does Grégoire Colin appear in?' } ``` `answer`: a string containing the answer to a corresponding question. `question`: a string containing the relevant question. ### Data Splits The data is split into train, test, and dev sets. The split sizes are as follows: | wiki-entities_qa_* | n examples| | ----- | ---- | | train.txt | 96185 | | dev.txt | 10000 | | test.txt | 9952 | ## Dataset Creation ### Curation Rationale WikiMovies was built with the following goals in mind: (i) machine learning techniques should have ample training examples for learning; and (ii) one can analyze easily the performance of different representations of knowledge and break down the results by question type. The datasetcan be downloaded fromhttp://fb.ai/babi ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{miller2016keyvalue, title={Key-Value Memory Networks for Directly Reading Documents}, author={Alexander Miller and Adam Fisch and Jesse Dodge and Amir-Hossein Karimi and Antoine Bordes and Jason Weston}, year={2016}, eprint={1606.03126}, archivePrefix={arXiv}, primaryClass={cs.CL} ``` ### Contributions Thanks to [@aclifton314](https://github.com/aclifton314) for adding this dataset.
false
# Dataset Card for cocktails_recipe ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains a list of cocktails and how to do them. ### Languages The language is english. ## Dataset Structure ### Data Fields - Title: name of the cocktail - Glass: type of glass to use - Garnish: garnish to use for the glass - Recipe: how to do the cocktail - Ingredients: ingredients required - Raw Ingredients: ingredients mapped to their raw ingredients to remove the brand ### Data Splits Currently, there is no splits. ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The dataset was created by scraping the Diffords cocktail website. ### Personal and Sensitive Information It should not contain any personal or sensitive information. ### Contributions Thanks to [@github-erwanlc](https://github.com/erwanlc) for adding this dataset.
false
# Dataset Description - **Project Page:** https://instruction-tuning-with-gpt-4.github.io - **Repo:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM - **Paper:** https://arxiv.org/abs/2304.03277 # Dataset Card for "alpaca-gpt4-data" All of the work is done by [this team](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM). # Usage and License Notices The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. # Chinese Dataset [Found here](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data-zh) # Citation ``` @article{peng2023gpt4llm, title={Instruction Tuning with GPT-4}, author={Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao}, journal={arXiv preprint arXiv:2304.03277}, year={2023} } ```
true
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset - **Repository:** https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset - **Paper:** https://www.aclweb.org/anthology/W19-3510/ - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Portuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate'). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
false
# Dataset Card for ilpost ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary IlPost dataset, containing news articles taken from IlPost. There are two features: - source: Input news article. - target: Summary of the article. ### Supported Tasks and Leaderboards - `abstractive-summarization`, `summarization` ### Languages The text in the dataset is in Italian ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
false
# Dataset Card for Project Gutenber - Multilanguage eBooks A collection of non-english language eBooks (7907, about 75-80% of all the ES, DE, FR, NL, IT, PT, HU books available on the site) from the Project Gutenberg site with metadata removed. Originally colected for https://github.com/LAION-AI/Open-Assistant | LANG | EBOOKS | |----|----| | ES | 717 | | DE | 1735 | | FR | 2863 | | NL | 904 | | IT | 692 | | PT | 501 | | HU | 495 | The METADATA column contains catalogue meta information on each book as a serialized JSON: | key | original column | |----|----| | language | - | | text_id | Text# unique book identifier on Prject Gutenberg as *int* | | title | Title of the book as *string* | | issued | Issued date as *string* | | authors | Authors as *string*, comma separated sometimes with dates | | subjects | Subjects as *string*, various formats | | locc | LoCC code as *string* | | bookshelves | Bookshelves as *string*, optional | ## Source data **How was the data generated?** - A crawler (see Open-Assistant repository) downloaded the raw HTML code for each eBook based on **Text#** id in the Gutenberg catalogue (if available) - The metadata and the body of text are not clearly separated so an additional parser attempts to split them, then remove transcriber's notes and e-book related information from the body of text (text clearly marked as copyrighted or malformed was skipped and not collected) - The body of cleaned TEXT as well as the catalogue METADATA is then saved as a parquet file, with all columns being strings **Copyright notice:** - Some of the books are copyrighted! The crawler ignored all books with an english copyright header by utilizing a regex expression, but make sure to check out the metadata for each book manually to ensure they are okay to use in your country! More information on copyright: https://www.gutenberg.org/help/copyright.html and https://www.gutenberg.org/policy/permission.html - Project Gutenberg has the following requests when using books without metadata: _Books obtianed from the Project Gutenberg site should have the following legal note next to them: "This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost" no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook."_
false
# KPWR-NER ## Description KPWR-NER is a part the Polish Corpus of Wrocław University of Technology (*Korpus Języka Polskiego Politechniki Wrocławskiej*). Its objective is named entity recognition for fine-grained categories of entities. It is the ‘n82’ version of the KPWr, which means that number of classes is restricted to 82 (originally 120). During corpus creation, texts were annotated by humans from various sources, covering many domains and genres. ## Tasks (input, output and metrics) Named entity recognition (NER) - tagging entities in text with their corresponding type. **Input** ('*tokens'* column): sequence of tokens **Output** ('*ner'* column): sequence of predicted tokens’ classes in BIO notation (82 possible classes, described in detail in the annotation guidelines) **Measurements**: F1-score (seqeval) **Example**: Input: `[‘Roboty’, ‘mają’, ‘kilkanaście’, ‘lat’, ‘i’, ‘pochodzą’, ‘z’, ‘USA’, ‘,’, ‘Wysokie’, ‘napięcie’, ‘jest’, ‘dużo’, ‘młodsze’, ‘,’, ‘powstało’, ‘w’, ‘Niemczech’, ‘.’]` Input (translated by DeepL): `Robots are more than a dozen years old and come from the US, High Voltage is much younger, having been developed in Germany.` Output: `[‘B-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’, ‘B-nam_pro_title’, ‘I-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’]` ## Data splits | Subset | Cardinality (sentences) | |--------|------------------------:| | train | 13959 | | dev | 0 | | test | 4323 | ## Class distribution (without "O" and "I-*") | Class | train | validation | test | |:----------------------------|--------:|-------------:|----------:| | B-nam_liv_person | 0.21910 | - | 0.21422 | | B-nam_loc_gpe_city | 0.10101 | - | 0.09865 | | B-nam_loc_gpe_country | 0.07467 | - | 0.08059 | | B-nam_org_institution | 0.05893 | - | 0.06005 | | B-nam_org_organization | 0.04448 | - | 0.05553 | | B-nam_org_group_team | 0.03492 | - | 0.03363 | | B-nam_adj_country | 0.03410 | - | 0.03747 | | B-nam_org_company | 0.02439 | - | 0.01716 | | B-nam_pro_media_periodic | 0.02250 | - | 0.01896 | | B-nam_fac_road | 0.01995 | - | 0.02144 | | B-nam_liv_god | 0.01934 | - | 0.00790 | | B-nam_org_nation | 0.01739 | - | 0.01828 | | B-nam_oth_tech | 0.01724 | - | 0.01377 | | B-nam_pro_media_web | 0.01709 | - | 0.00903 | | B-nam_fac_goe | 0.01596 | - | 0.01445 | | B-nam_eve_human | 0.01573 | - | 0.01761 | | B-nam_pro_title | 0.01558 | - | 0.00790 | | B-nam_pro_brand | 0.01543 | - | 0.01038 | | B-nam_org_political_party | 0.01264 | - | 0.01309 | | B-nam_loc_gpe_admin1 | 0.01219 | - | 0.01445 | | B-nam_eve_human_sport | 0.01174 | - | 0.01242 | | B-nam_pro_software | 0.01091 | - | 0.02190 | | B-nam_adj | 0.00963 | - | 0.01174 | | B-nam_loc_gpe_admin3 | 0.00888 | - | 0.01061 | | B-nam_pro_model_car | 0.00873 | - | 0.00587 | | B-nam_loc_hydronym_river | 0.00843 | - | 0.01151 | | B-nam_oth | 0.00775 | - | 0.00497 | | B-nam_pro_title_document | 0.00738 | - | 0.01986 | | B-nam_loc_astronomical | 0.00730 | - | - | | B-nam_oth_currency | 0.00723 | - | 0.01151 | | B-nam_adj_city | 0.00670 | - | 0.00948 | | B-nam_org_group_band | 0.00587 | - | 0.00429 | | B-nam_loc_gpe_admin2 | 0.00565 | - | 0.00813 | | B-nam_loc_gpe_district | 0.00504 | - | 0.00406 | | B-nam_loc_land_continent | 0.00459 | - | 0.00722 | | B-nam_loc_country_region | 0.00459 | - | 0.00090 | | B-nam_loc_land_mountain | 0.00414 | - | 0.00203 | | B-nam_pro_title_book | 0.00384 | - | 0.00248 | | B-nam_loc_historical_region | 0.00376 | - | 0.00497 | | B-nam_loc | 0.00361 | - | 0.00090 | | B-nam_eve | 0.00361 | - | 0.00181 | | B-nam_org_group | 0.00331 | - | 0.00406 | | B-nam_loc_land_island | 0.00331 | - | 0.00248 | | B-nam_pro_media_tv | 0.00316 | - | 0.00158 | | B-nam_liv_habitant | 0.00316 | - | 0.00158 | | B-nam_eve_human_cultural | 0.00316 | - | 0.00497 | | B-nam_pro_title_tv | 0.00309 | - | 0.00542 | | B-nam_oth_license | 0.00286 | - | 0.00248 | | B-nam_num_house | 0.00256 | - | 0.00248 | | B-nam_pro_title_treaty | 0.00248 | - | 0.00045 | | B-nam_fac_system | 0.00248 | - | 0.00587 | | B-nam_loc_gpe_subdivision | 0.00241 | - | 0.00587 | | B-nam_loc_land_region | 0.00226 | - | 0.00248 | | B-nam_pro_title_album | 0.00218 | - | 0.00158 | | B-nam_adj_person | 0.00203 | - | 0.00406 | | B-nam_fac_square | 0.00196 | - | 0.00135 | | B-nam_pro_award | 0.00188 | - | 0.00519 | | B-nam_eve_human_holiday | 0.00188 | - | 0.00203 | | B-nam_pro_title_song | 0.00166 | - | 0.00158 | | B-nam_pro_media_radio | 0.00151 | - | 0.00068 | | B-nam_pro_vehicle | 0.00151 | - | 0.00090 | | B-nam_oth_position | 0.00143 | - | 0.00226 | | B-nam_liv_animal | 0.00143 | - | 0.00248 | | B-nam_pro | 0.00135 | - | 0.00045 | | B-nam_oth_www | 0.00120 | - | 0.00451 | | B-nam_num_phone | 0.00120 | - | 0.00045 | | B-nam_pro_title_article | 0.00113 | - | - | | B-nam_oth_data_format | 0.00113 | - | 0.00226 | | B-nam_fac_bridge | 0.00105 | - | 0.00090 | | B-nam_liv_character | 0.00098 | - | - | | B-nam_pro_software_game | 0.00090 | - | 0.00068 | | B-nam_loc_hydronym_lake | 0.00090 | - | 0.00045 | | B-nam_loc_gpe_conurbation | 0.00090 | - | - | | B-nam_pro_media | 0.00083 | - | 0.00181 | | B-nam_loc_land | 0.00075 | - | 0.00045 | | B-nam_loc_land_peak | 0.00075 | - | - | | B-nam_fac_park | 0.00068 | - | 0.00226 | | B-nam_org_organization_sub | 0.00060 | - | 0.00068 | | B-nam_loc_hydronym | 0.00060 | - | 0.00023 | | B-nam_loc_hydronym_sea | 0.00045 | - | 0.00068 | | B-nam_loc_hydronym_ocean | 0.00045 | - | 0.00023 | | B-nam_fac_goe_stop | 0.00038 | - | 0.00090 | ## Citation ``` @inproceedings{broda-etal-2012-kpwr, title = "{KPW}r: Towards a Free Corpus of {P}olish", author = "Broda, Bartosz and Marci{\'n}czuk, Micha{\l} and Maziarz, Marek and Radziszewski, Adam and Wardy{\'n}ski, Adam", booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)", month = may, year = "2012", address = "Istanbul, Turkey", publisher = "European Language Resources Association (ELRA)", url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/965_Paper.pdf", pages = "3218--3222", abstract = "This paper presents our efforts aimed at collecting and annotating a free Polish corpus. The corpus will serve for us as training and testing material for experiments with Machine Learning algorithms. As others may also benefit from the resource, we are going to release it under a Creative Commons licence, which is hoped to remove unnecessary usage restrictions, but also to facilitate reproduction of our experimental results. The corpus is being annotated with various types of linguistic entities: chunks and named entities, selected syntactic and semantic relations, word senses and anaphora. We report on the current state of the project as well as our ultimate goals.", } ``` ## License ``` Creative Commons Attribution 3.0 Unported Licence ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/kpwr-ner) [Source](https://clarin-pl.eu/index.php/kpwr-en/) [Paper](https://aclanthology.org/L12-1574/) [KPWr annotation guidelines](http://www.nlp.pwr.wroc.pl/narzedzia-i-zasoby/zasoby/kpwr-lemma/16-narzedzia-zasoby/79-wytyczne) [KPWr annotation guidelines - named entities](https://clarin-pl.eu/dspace/handle/11321/294) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/kpwr-ner") pprint(dataset['train'][0]) # {'lemmas': ['roborally', 'czy', 'wysoki', 'napięcie', '?'], # 'ner': [73, 160, 73, 151, 160], # 'orth': ['subst:sg:nom:n', # 'qub', # 'adj:sg:nom:n:pos', # 'subst:sg:nom:n', # 'interp'], # 'tokens': ['RoboRally', 'czy', 'Wysokie', 'napięcie', '?']} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/kpwr-ner") references = dataset["test"]["ner"] # generate random predictions predictions = [ [ random.randrange(dataset["train"].features["ner"].feature.num_classes) for _ in range(len(labels)) ] for labels in references ] # transform to original names of labels references_named = [ [dataset["train"].features["ner"].feature.names[label] for label in labels] for labels in references ] predictions_named = [ [dataset["train"].features["ner"].feature.names[label] for label in labels] for labels in predictions ] # utilise seqeval to evaluate seqeval = load_metric("seqeval") seqeval_score = seqeval.compute( predictions=predictions_named, references=references_named, scheme="IOB2" ) pprint(seqeval_score, depth=1) # {'nam_adj': {...}, # 'nam_adj_city': {...}, # 'nam_adj_country': {...}, # 'nam_adj_person': {...}, # 'nam_eve': {...}, # 'nam_eve_human': {...}, # 'nam_eve_human_cultural': {...}, # 'nam_eve_human_holiday': {...}, # 'nam_eve_human_sport': {...}, # 'nam_fac_bridge': {...}, # 'nam_fac_goe': {...}, # 'nam_fac_goe_stop': {...}, # 'nam_fac_park': {...}, # 'nam_fac_road': {...}, # 'nam_fac_square': {...}, # 'nam_fac_system': {...}, # 'nam_liv_animal': {...}, # 'nam_liv_character': {...}, # 'nam_liv_god': {...}, # 'nam_liv_habitant': {...}, # 'nam_liv_person': {...}, # 'nam_loc': {...}, # 'nam_loc_astronomical': {...}, # 'nam_loc_country_region': {...}, # 'nam_loc_gpe_admin1': {...}, # 'nam_loc_gpe_admin2': {...}, # 'nam_loc_gpe_admin3': {...}, # 'nam_loc_gpe_city': {...}, # 'nam_loc_gpe_conurbation': {...}, # 'nam_loc_gpe_country': {...}, # 'nam_loc_gpe_district': {...}, # 'nam_loc_gpe_subdivision': {...}, # 'nam_loc_historical_region': {...}, # 'nam_loc_hydronym': {...}, # 'nam_loc_hydronym_lake': {...}, # 'nam_loc_hydronym_ocean': {...}, # 'nam_loc_hydronym_river': {...}, # 'nam_loc_hydronym_sea': {...}, # 'nam_loc_land': {...}, # 'nam_loc_land_continent': {...}, # 'nam_loc_land_island': {...}, # 'nam_loc_land_mountain': {...}, # 'nam_loc_land_peak': {...}, # 'nam_loc_land_region': {...}, # 'nam_num_house': {...}, # 'nam_num_phone': {...}, # 'nam_org_company': {...}, # 'nam_org_group': {...}, # 'nam_org_group_band': {...}, # 'nam_org_group_team': {...}, # 'nam_org_institution': {...}, # 'nam_org_nation': {...}, # 'nam_org_organization': {...}, # 'nam_org_organization_sub': {...}, # 'nam_org_political_party': {...}, # 'nam_oth': {...}, # 'nam_oth_currency': {...}, # 'nam_oth_data_format': {...}, # 'nam_oth_license': {...}, # 'nam_oth_position': {...}, # 'nam_oth_tech': {...}, # 'nam_oth_www': {...}, # 'nam_pro': {...}, # 'nam_pro_award': {...}, # 'nam_pro_brand': {...}, # 'nam_pro_media': {...}, # 'nam_pro_media_periodic': {...}, # 'nam_pro_media_radio': {...}, # 'nam_pro_media_tv': {...}, # 'nam_pro_media_web': {...}, # 'nam_pro_model_car': {...}, # 'nam_pro_software': {...}, # 'nam_pro_software_game': {...}, # 'nam_pro_title': {...}, # 'nam_pro_title_album': {...}, # 'nam_pro_title_article': {...}, # 'nam_pro_title_book': {...}, # 'nam_pro_title_document': {...}, # 'nam_pro_title_song': {...}, # 'nam_pro_title_treaty': {...}, # 'nam_pro_title_tv': {...}, # 'nam_pro_vehicle': {...}, # 'overall_accuracy': 0.006156203762418094, # 'overall_f1': 0.0009844258777797407, # 'overall_precision': 0.0005213624939842789, # 'overall_recall': 0.008803611738148984} ```
false
# Multiturn Chat 0.8M ## 内容 包含约80万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的*用户*与*助手*的多轮对话。 注意:此数据集是由ChatGPT产生的,未经过严格校验,内容可能包含错误。使用过程中请注意这一点。 **instruction**中包含多轮对话的上文内容,以*Human:*和*Assistant:*区分,**output**中包含当前*助手*角色的回答。 ## 样例 ``` { "instruction": "Human:请根据题材「健身房」,写一篇800字的文章。\nAssistant:好的,以下是由我生成的800字「健身房」题材文章:\n在现代社会,人们越来越珍视健康和身材。健身房就是一个人们可以锻炼身体的场所,也是一个社交的场所。在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\n对于喜欢力量训练的人来说,选择健身房的一个原因是因为那里的器械更加完备,可以满足训练的需求。当然,不仅仅是体力上的训练,健身房同样也是锻炼毅力和耐力的地方。因为每一次锻炼的过程都需要坚持和勇气。\n而对于那些想要减肥的人来说,健身房也是非常重要的。有氧运动比如慢跑,跳操,游泳等都是非常有效的减肥方式。而且因为健身房的压力比较大,所以很容易使人们保持锻炼的热情和动力。\n一些人则会选择瑜伽馆作为健身的场所,因为它们通常比较安静,能够让人们放松。瑜伽有纾解压力,提高柔韧性,强化肌肉的好处,而且更适合那些不喜欢高强度训练的人。\n总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。无论您是力量训练爱好者,减肥追求者还是想要放松的人,您都可以在这里找到适合自己的锻炼方式。\nHuman:请从上文摘取三个句子进行打分。\nAssistant:", "input": "", "output": "好的。以下是我摘取的三个句子:\n1. \"在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\"\n2. \"而对于那些想要减肥的人来说,健身房也是非常重要的。\"\n3. \"总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。\"\n网页端评分界面还未准备好,不能在这里为你进行打分,请见谅。" } ``` ### 字段 ``` instruction: 指令 input: 输入(本数据集均为空) output: 输出 ``` ## 局限性和使用限制 我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。 本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。 *** # Multiturn Chat 0.8M ## Contents Includes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*. Note: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset. **instruction** contains history dialog context, distinguishable by *Human:* and *Assistant:*, **output** contains the current reply by *assistant*. ## Sample ``` { "instruction": "Human:请根据题材「健身房」,写一篇800字的文章。\nAssistant:好的,以下是由我生成的800字「健身房」题材文章:\n在现代社会,人们越来越珍视健康和身材。健身房就是一个人们可以锻炼身体的场所,也是一个社交的场所。在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\n对于喜欢力量训练的人来说,选择健身房的一个原因是因为那里的器械更加完备,可以满足训练的需求。当然,不仅仅是体力上的训练,健身房同样也是锻炼毅力和耐力的地方。因为每一次锻炼的过程都需要坚持和勇气。\n而对于那些想要减肥的人来说,健身房也是非常重要的。有氧运动比如慢跑,跳操,游泳等都是非常有效的减肥方式。而且因为健身房的压力比较大,所以很容易使人们保持锻炼的热情和动力。\n一些人则会选择瑜伽馆作为健身的场所,因为它们通常比较安静,能够让人们放松。瑜伽有纾解压力,提高柔韧性,强化肌肉的好处,而且更适合那些不喜欢高强度训练的人。\n总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。无论您是力量训练爱好者,减肥追求者还是想要放松的人,您都可以在这里找到适合自己的锻炼方式。\nHuman:请从上文摘取三个句子进行打分。\nAssistant:", "input": "", "output": "好的。以下是我摘取的三个句子:\n1. \"在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\"\n2. \"而对于那些想要减肥的人来说,健身房也是非常重要的。\"\n3. \"总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。\"\n网页端评分界面还未准备好,不能在这里为你进行打分,请见谅。" } ``` ### Schema ``` instruction: 指令 input: 输入(本数据集均为空) output: 输出 ``` ## Limitation and Usage Limits We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed. Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed. This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
false
# Dataset Card for "tner/mit_restaurant" ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Dataset:** MIT restaurant - **Domain:** Restaurant - **Number of Entity:** 8 ### Dataset Summary MIT Restaurant NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project. - Entity Types: `Rating`, `Amenity`, `Location`, `Restaurant_Name`, `Price`, `Hours`, `Dish`, `Cuisine`. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tags': [0, 0, 0, 0, 0, 0, 0, 0, 5, 3, 4, 0], 'tokens': ['can', 'you', 'find', 'the', 'phone', 'number', 'for', 'the', 'closest', 'family', 'style', 'restaurant'] } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/mit_restaurant/raw/main/dataset/label.json). ```python { "O": 0, "B-Rating": 1, "I-Rating": 2, "B-Amenity": 3, "I-Amenity": 4, "B-Location": 5, "I-Location": 6, "B-Restaurant_Name": 7, "I-Restaurant_Name": 8, "B-Price": 9, "B-Hours": 10, "I-Hours": 11, "B-Dish": 12, "I-Dish": 13, "B-Cuisine": 14, "I-Price": 15, "I-Cuisine": 16 } ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |mit_restaurant |6900 | 760| 1521|
false
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC # SQAC (Spanish Question-Answering Corpus): An extractive QA dataset for the Spanish language ## BibTeX citation ```bibtex @article{DBLP:journals/corr/abs-2107-07253, author = {Asier Guti{\'{e}}rrez{-}Fandi{\~{n}}o and Jordi Armengol{-}Estap{\'{e}} and Marc P{\`{a}}mies and Joan Llop{-}Palao and Joaqu{\'{\i}}n Silveira{-}Ocampo and Casimiro Pio Carrino and Aitor Gonzalez{-}Agirre and Carme Armentano{-}Oller and Carlos Rodr{\'{\i}}guez Penagos and Marta Villegas}, title = {Spanish Language Models}, journal = {CoRR}, volume = {abs/2107.07253}, year = {2021}, url = {https://arxiv.org/abs/2107.07253}, archivePrefix = {arXiv}, eprint = {2107.07253}, timestamp = {Wed, 21 Jul 2021 15:55:35 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-07253.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` See the pre-print version of our paper for further details: https://arxiv.org/abs/2107.07253 <!-- ## Digital Object Identifier (DOI) and access to dataset files --> ## Introduction This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment. The sources of the contexts are: * Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode). * News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/). * Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix from diferent newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode). This dataset can be used to build extractive-QA. ### Supported Tasks and Leaderboards Extractive-QA ### Languages ES - Spanish ### Directory structure * README.md * dev.json * test.json * train.json * sqac.py ## Dataset Structure ### Data Instances JSON files ### Data Fields Follows (Rajpurkar, Pranav et al., 2016) for squad v1 datasets. (see below for full reference). We added a field "source" with the source of the context. ### Example <pre> { "data": [ { "paragraphs": [ { "context": "Al cogote, y fumando como una cafetera. Ah!, no era él, éramos todos nosotros. Luego llegó Billie Holiday. Bajo el epígrafe Arte, la noche temática, pasaron la vida de la única cantante del universo que no es su voz, sino su alma lo que se escucha cuando interpreta. Gata golpeada por el mundo, pateada, violada, enganchada a todos los paraísos artificiales del planeta, jamás encontró el Edén. El Edén lo encontramos nosotros cuando, al concluir la sesión de la tele, pusimos en la doméstica cadena de sonido el mítico Last Recording, su última grabación (marzo de 1959), con la orquesta de Ray Ellis y el piano de Hank Jones. Se estaba muriendo Lady Day, y no obstante, mientras moría, su alma cantaba, Baby, won't you please come home. O sea, niño, criatura, amor, vuelve, a casa por favor.", "qas": [ { "question": "¿Quién se incorporó a la reunión más adelante?", "id": "c5429572-64b8-4c5d-9553-826f867b07be", "answers": [ { "answer_start": 91, "text": "Billie Holiday" } ] }, ... ] } ], "title": "P_129_20010702_&_P_154_20010102_&_P_108_20000301_c_&_P_108_20000601_d", "source": "ancora" }, ... ] } </pre> ### Data Splits - train - development - test ## Content analysis ### Number of articles, paragraphs and questions * Number of articles: 3,834 * Number of contexts: 6,247 * Number of questions: 18,817 * Questions/context: 3.01 * Number of sentences: 48,026 * Sentences/context: 7.70 ### Number of tokens * Total tokens in context: 1,561,616 * Tokens/context 250.30 * Total tokens in questions: 203,235 * Tokens in questions/questions: 10.80 * Tokens in questions/tokens in context: 0.13 * Total tokens in answers: 90,307 * Tokens in answers/answers: 4.80 * Tokens in answers/tokens in context: 0.06 ### Lexical variation 46.38 of the words in the Question can be found in the Context. ### Question type | Question | Count | % | |----------|-------:|---:| | qué | 6,381 | 33.91 % | | quién/es | 2,952 | 15.69 % | | cuál/es | 2,034 | 10.81 % | | cómo | 1,949 | 10.36 % | | dónde | 1,856 | 9.86 % | | cuándo | 1,639 | 8.71 % | | cuánto | 1,311 | 6.97 % | | cuántos | 495 |2.63 % | | adónde | 100 | 0.53 % | | cuánta | 49 | 0.26 % | | no question mark | 43 | 0.23 % | | cuántas | 19 | 0.10 % | ## Dataset Creation ### Methodology 6,247 contexts were randomly chosen from the three corpus described below. We commisioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 [Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250). In total, 18,817 pairs of a question and an extracted fragment that contains the answer were created. ### Curation Rationale For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines. We also created another QA dataset with Wikipedia to ensure thematic and stylistic variety. ### Source Data - Spanish Wikipedia: https://es.wikipedia.org - Spanish Wikinews: https://es.wikinews.org/ - AnCora corpus: http://clic.ub.edu/corpus/en #### Initial Data Collection and Normalization The source data are scraped articles from the Spanish Wikipedia site, Wikinews site and from AnCora corpus. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 [Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250). #### Who are the annotators? Native language speakers. ### Dataset Curators Carlos Rodríguez and Carme Armentano, from BSC-CNS. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Contact Carlos Rodríguez-Penagos or Carme Armentano-Oller (bsc-temu@bsc.es) ## Funding This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## License <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Attribution-ShareAlike 4.0 International License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
false
# SQAC (Spanish Question-Answering Corpus) ## Dataset Description SQAC is an extractive QA dataset for the Spanish language. - **Paper:** [MarIA: Spanish Language Models](https://upcommons.upc.edu/bitstream/handle/2117/367156/6405-5863-1-PB%20%281%29.pdf?sequence=1) - **Point of Contact:** carlos.rodriguez1@bsc.es - **Leaderboard:** [EvalEs] (https://plantl-gob-es.github.io/spanish-benchmark/) ### Dataset Summary Contains 6,247 contexts and 18,817 questions with their respective answers, 1 to 5 for each fragment. The sources of the contexts are: * Encyclopedic articles from the [Spanish Wikipedia](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode). * News articles from [Wikinews](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/). * Newswire and literature text from the [AnCora corpus](http://clic.ub.edu/corpus/en), used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode). ### Supported Tasks Extractive-QA ### Languages - Spanish (es) ### Directory Structure - README.md - SQAC.py - dev.json - test.json - train.json ## Dataset Structure ### Data Instances <pre> { 'id': '6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628', 'title': 'Historia de Japón', 'context': 'La historia de Japón (日本の歴史 o 日本史, Nihon no rekishi / Nihonshi?) es la sucesión de hechos acontecidos dentro del archipiélago japonés. Algunos de estos hechos aparecen aislados e influenciados por la naturaleza geográfica de Japón como nación insular, en tanto que otra serie de hechos, obedece a influencias foráneas como en el caso del Imperio chino, el cual definió su idioma, su escritura y, también, su cultura política. Asimismo, otra de las influencias foráneas fue la de origen occidental, lo que convirtió al país en una nación industrial, ejerciendo con ello una esfera de influencia y una expansión territorial sobre el área del Pacífico. No obstante, dicho expansionismo se detuvo tras la Segunda Guerra Mundial y el país se posicionó en un esquema de nación industrial con vínculos a su tradición cultural.', 'question': '¿Qué influencia convirtió Japón en una nación industrial?', 'answers': { 'text': ['la de origen occidental'], 'answer_start': [473] } } </pre> ### Data Fields <pre> { id: str title: str context: str question: str answers: { answer_start: [int] text: [str] } } </pre> ### Data Splits | Split | Size | | ------------- | ------------- | | `train` | 15,036 | | `dev` | 1,864 | | `test` | 1.910 | ## Content analysis ### Number of articles, paragraphs and questions * Number of articles: 3,834 * Number of contexts: 6,247 * Number of questions: 18,817 * Number of sentences: 48,026 * Questions/Context ratio: 3.01 * Sentences/Context ratio: 7.70 ### Number of tokens * Total tokens in context: 1,561,616 * Average tokens/context: 250 * Total tokens in questions: 203,235 * Average tokens/question: 10.80 * Total tokens in answers: 90,307 * Average tokens/answer: 4.80 ### Lexical variation 46.38% of the words in the Question can be found in the Context. ### Question type | Question | Count | % | |----------|-------:|---:| | qué | 6,381 | 33.91 % | | quién/es | 2,952 | 15.69 % | | cuál/es | 2,034 | 10.81 % | | cómo | 1,949 | 10.36 % | | dónde | 1,856 | 9.86 % | | cuándo | 1,639 | 8.71 % | | cuánto | 1,311 | 6.97 % | | cuántos | 495 |2.63 % | | adónde | 100 | 0.53 % | | cuánta | 49 | 0.26 % | | no question mark | 43 | 0.23 % | | cuántas | 19 | 0.10 % | ## Dataset Creation ### Curation Rationale For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines from SQUAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250). ### Source Data #### Initial Data Collection and Normalization The source data are scraped articles from Wikinews, the Spanish Wikipedia and the AnCora corpus. - [Spanish Wikipedia](https://es.wikipedia.org) - [Spanish Wikinews](https://es.wikinews.org/) - [AnCora corpus](http://clic.ub.edu/corpus/en) #### Who are the source language producers? Contributors to the aforementioned sites. ### Annotations #### Annotation process We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250). #### Who are the annotators? Native language speakers. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset This corpus contributes to the development of language models in Spanish. ### Discussion of Biases No postprocessing steps were applied to mitigate potential social biases. ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es). For further information, send an email to (plantl-gob-es@bsc.es). This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx). ### Licensing information This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License. Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Citation Information ``` @article{maria, author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas}, title = {MarIA: Spanish Language Models}, journal = {Procesamiento del Lenguaje Natural}, volume = {68}, number = {0}, year = {2022}, issn = {1989-7553}, url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405}, pages = {39--60} } ``` ### Contributions [N/A]
true
# Dataset Card for SofcMaterialsArticles ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources) - **Repository:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources) - **Paper:** [The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain](https://arxiv.org/abs/2006.03039) - **Leaderboard:** - **Point of Contact:** [Annemarie Friedrich](annemarie.friedrich@de.bosch.com) ### Dataset Summary > The SOFC-Exp corpus contains 45 scientific publications about solid oxide fuel cells (SOFCs), published between 2013 and 2019 as open-access articles all with a CC-BY license. The dataset was manually annotated by domain experts with the following information: > > * Mentions of relevant experiments have been marked using a graph structure corresponding to instances of an Experiment frame (similar to the ones used in FrameNet.) We assume that an Experiment frame is introduced to the discourse by mentions of words such as report, test or measure (also called the frame-evoking elements). The nodes corresponding to the respective tokens are the heads of the graphs representing the Experiment frame. > * The Experiment frame related to SOFC-Experiments defines a set of 16 possible participant slots. Participants are annotated as dependents of links between the frame-evoking element and the participant node. > * In addition, we provide coarse-grained entity/concept types for all frame participants, i.e, MATERIAL, VALUE or DEVICE. Note that this annotation has not been performed on the full texts but only on sentences containing information about relevant experiments, and a few sentences in addition. In the paper, we run experiments for both tasks only on the set of sentences marked as experiment-describing in the gold standard, which is admittedly a slightly simplified setting. Entity types are only partially annotated on other sentences. Slot filling could of course also be evaluated in a fully automatic setting with automatic experiment sentence detection as a first step. ### Supported Tasks and Leaderboards - `topic-classification`: The dataset can be used to train a model for topic-classification, to identify sentences that mention SOFC-related experiments. - `named-entity-recognition`: The dataset can be used to train a named entity recognition model to detect `MATERIAL`, `VALUE`, `DEVICE`, and `EXPERIMENT` entities. - `slot-filling`: The slot-filling task is approached as fine-grained entity-typing-in-context, assuming that each sentence represents a single experiment frame. Sequence tagging architectures are utilized for tagging the tokens of each experiment-describing sentence with the set of slot types. The paper experiments with BiLSTM architectures with `BERT`- and `SciBERT`- generated token embeddings, as well as with `BERT` and `SciBERT` directly for the modeling task. A simple CRF architecture is used as a baseline for sequence-tagging tasks. Implementations of the transformer-based architectures can be found in the `huggingface/transformers` library: [BERT](https://huggingface.co/bert-base-uncased), [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) ### Languages This corpus is in English. ## Dataset Structure ### Data Instances As each example is a full text of an academic paper, plus annotations, a json formatted example is space-prohibitive for this README. ### Data Fields - `text`: The full text of the paper - `sentence_offsets`: Start and end character offsets for each sentence in the text. - `begin_char_offset`: a `int64` feature. - `end_char_offset`: a `int64` feature. - `sentences`: A sequence of the sentences in the text (using `sentence_offsets`) - `sentence_labels`: Sequence of binary labels for whether a sentence contains information of interest. - `token_offsets`: Sequence of sequences containing start and end character offsets for each token in each sentence in the text. - `offsets`: a dictionary feature containing: - `begin_char_offset`: a `int64` feature. - `end_char_offset`: a `int64` feature. - `tokens`: Sequence of sequences containing the tokens for each sentence in the text. - `feature`: a `string` feature. - `entity_labels`: a dictionary feature containing: - `feature`: a classification label, with possible values including `B-DEVICE`, `B-EXPERIMENT`, `B-MATERIAL`, `B-VALUE`, `I-DEVICE`. - `slot_labels`: a dictionary feature containing: - `feature`: a classification label, with possible values including `B-anode_material`, `B-cathode_material`, `B-conductivity`, `B-current_density`, `B-degradation_rate`. - `links`: a dictionary feature containing: - `relation_label`: a classification label, with possible values including `coreference`, `experiment_variation`, `same_experiment`, `thickness`. - `start_span_id`: a `int64` feature. - `end_span_id`: a `int64` feature. - `slots`: a dictionary feature containing: - `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `device`. - `slot_id`: a `int64` feature. - `spans`: a dictionary feature containing: - `span_id`: a `int64` feature. - `entity_label`: a classification label, with possible values including ``, `DEVICE`, `MATERIAL`, `VALUE`. - `sentence_id`: a `int64` feature. - `experiment_mention_type`: a classification label, with possible values including ``, `current_exp`, `future_work`, `general_info`, `previous_work`. - `begin_char_offset`: a `int64` feature. - `end_char_offset`: a `int64` feature. - `experiments`: a dictionary feature containing: - `experiment_id`: a `int64` feature. - `span_id`: a `int64` feature. - `slots`: a dictionary feature containing: - `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `conductivity`. - `slot_id`: a `int64` feature. Very detailed information for each of the fields can be found in the [corpus file formats section](https://github.com/boschresearch/sofc-exp_textmining_resources#corpus-file-formats) of the associated dataset repo ### Data Splits This dataset consists of three splits: | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Input Examples | 26 | 8 | 11 | The authors propose the experimental setting of using the training data in a 5-fold cross validation setting for development and tuning, and finally applying tte model(s) to the independent test set. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The corpus consists of 45 open-access scientific publications about SOFCs and related research, annotated by domain experts. ### Annotations #### Annotation process For manual annotation, the authors use the InCeption annotation tool (Klie et al., 2018). #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The manual annotations created for the SOFC-Exp corpus are licensed under a [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @misc{friedrich2020sofcexp, title={The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain}, author={Annemarie Friedrich and Heike Adel and Federico Tomazic and Johannes Hingerl and Renou Benteau and Anika Maruscyk and Lukas Lange}, year={2020}, eprint={2006.03039}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset.
false
# Dataset Card for FQuAD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://fquad.illuin.tech/](https://fquad.illuin.tech/) - **Paper:** [FQuAD: French Question Answering Dataset](https://arxiv.org/abs/2002.06071) - **Point of Contact:** [https://www.illuin.tech/contact/](https://www.illuin.tech/contact/) - **Size of downloaded dataset files:** 3.29 MB - **Size of the generated dataset:** 6.94 MB - **Total amount of disk used:** 10.23 MB ### Dataset Summary FQuAD: French Question Answering Dataset We introduce FQuAD, a native French Question Answering Dataset. FQuAD contains 25,000+ question and answer pairs. Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%. Developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles. Please, note this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions: 1. Use FQuAD only for internal research purposes. 2. Not make any copy except a safety one. 3. Not redistribute it (or part of it) in any way, even for free. 4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence. 5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD. 6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus. Request manually download of the data from: https://fquad.illuin.tech/ ### Supported Tasks and Leaderboards - `closed-domain-qa`, `text-retrieval`: This dataset is intended to be used for `closed-domain-qa`, but can also be used for information retrieval tasks. ### Languages This dataset is exclusively in French, with context data from Wikipedia and questions from French university students (`fr`). ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 3.29 MB - **Size of the generated dataset:** 6.94 MB - **Total amount of disk used:** 10.23 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers": { "answers_starts": [161, 46, 204], "texts": ["La Vierge aux rochers", "documents contemporains", "objets de spéculations"] }, "context": "\"Les deux tableaux sont certes décrits par des documents contemporains à leur création mais ceux-ci ne le font qu'indirectement ...", "questions": ["Que concerne principalement les documents ?", "Par quoi sont décrit les deux tableaux ?", "Quels types d'objets sont les deux tableaux aux yeux des chercheurs ?"] } ``` ### Data Fields The data fields are the same among all splits. #### default - `context`: a `string` feature. - `questions`: a `list` of `string` features. - `answers`: a dictionary feature containing: - `texts`: a `string` feature. - `answers_starts`: a `int32` feature. ### Data Splits The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split. Dataset Split | Number of Articles in Split | Number of paragraphs in split | Number of questions in split --------------|------------------------------|--------------------------|------------------------- Train | 117 | 4921 | 20731 Validation | 768 | 51.0% | 3188 Test | 10 | 532 | 2189 ## Dataset Creation ### Curation Rationale The FQuAD dataset was created by Illuin technology. It was developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles. ### Source Data The text used for the contexts are from the curated list of French High-Quality Wikipedia [articles](https://fr.wikipedia.org/wiki/Cat%C3%A9gorie:Article_de_qualit%C3%A9). ### Annotations Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering. Wikipedia articles were scraped and Illuin used an internally-developped tool to help annotators ask questions and indicate the answer spans. Annotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context. ### Personal and Sensitive Information No personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators. ## Considerations for Using the Data Users should consider this dataset is sampled from Wikipedia data which might not be representative of all QA use cases. ### Social Impact of Dataset The social biases of this dataset have not yet been investigated. ### Discussion of Biases The social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity. ### Other Known Limitations The limitations of the FQuAD dataset have not yet been investigated. ## Additional Information ### Dataset Curators Illuin Technology: [https://fquad.illuin.tech/](https://fquad.illuin.tech/) ### Licensing Information The FQuAD dataset is licensed under the [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/fr/) license. It allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact [the authors](https://www.illuin.tech/contact/) to discuss possible partnerships. ### Citation Information ``` @ARTICLE{2020arXiv200206071 author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé}, title = "{FQuAD: French Question Answering Dataset}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language}, year = "2020", month = "Feb", eid = {arXiv:2002.06071}, pages = {arXiv:2002.06071}, archivePrefix = {arXiv}, eprint = {2002.06071}, primaryClass = {cs.CL} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. Thanks to [@ManuelFay](https://github.com/manuelfay) for providing information on the dataset creation process.
true
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Jigsaw Comment Toxicity Classification Kaggle Competition](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments. This dataset consists of a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. ### Supported Tasks and Leaderboards The dataset support multi-label classification ### Languages The comments are in English ## Dataset Structure ### Data Instances A data point consists of a comment followed by multiple labels that can be associated with it. {'id': '02141412314', 'comment_text': 'Sample comment text', 'toxic': 0, 'severe_toxic': 0, 'obscene': 0, 'threat': 0, 'insult': 0, 'identity_hate': 1, } ### Data Fields - `id`: id of the comment - `comment_text`: the text of the comment - `toxic`: value of 0(non-toxic) or 1(toxic) classifying the comment - `severe_toxic`: value of 0(non-severe_toxic) or 1(severe_toxic) classifying the comment - `obscene`: value of 0(non-obscene) or 1(obscene) classifying the comment - `threat`: value of 0(non-threat) or 1(threat) classifying the comment - `insult`: value of 0(non-insult) or 1(insult) classifying the comment - `identity_hate`: value of 0(non-identity_hate) or 1(identity_hate) classifying the comment ### Data Splits The data is split into a training and testing set. ## Dataset Creation ### Curation Rationale The dataset was created to help in efforts to identify and curb instances of toxicity online. ### Source Data #### Initial Data Collection and Normalization The dataset is a collection of Wikipedia comments. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases If words that are associated with swearing, insults or profanity are present in a comment, it is likely that it will be classified as toxic, regardless of the tone or the intent of the author e.g. humorous/self-deprecating. This could present some biases towards already vulnerable minority groups. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The "Toxic Comment Classification" dataset is released under [CC0], with the underlying comment text being governed by Wikipedia\'s [CC-SA-3.0]. ### Citation Information No citation information. ### Contributions Thanks to [@Tigrex161](https://github.com/Tigrex161) for adding this dataset.
true
# Dataset Card for Machine Paraphrase Dataset (MPC) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/jpwahle/iconf22-paraphrase - **Paper:** https://link.springer.com/chapter/10.1007/978-3-030-96957-8_34 - **Total size:** 533 MB - **Train size:** 340 MB - **Test size:** 193 MB ### Dataset Summary The Machine Paraphrase Corpus (MPC) consists of ~200k examples of original, and paraphrases using two online paraphrasing tools. It uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses). The examples are **not** aligned, i.e., we sample different paragraphs for originals and paraphrased versions. ### How to use it You can load the dataset using the `load_dataset` function: ```python from datasets import load_dataset ds = load_dataset("jpwahle/machine-paraphrase-dataset") print(ds[0]) #OUTPUT: { 'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ', 'label': 1, 'dataset': 'wikipedia', 'method': 'spinbot' } ``` ### Supported Tasks and Leaderboards Paraphrase Identification ### Languages English ## Dataset Structure ### Data Instances ```json { 'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ', 'label': 1, 'dataset': 'wikipedia', 'method': 'spinbot' } ``` ### Data Fields | Feature | Description | | --- | --- | | `text` | The unique identifier of the paper. | | `label` | Whether it is a paraphrase (1) or the original (0). | | `dataset` | The source dataset (Wikipedia, arXiv, or theses). | | `method` | The method used (SpinBot, SpinnerChief, original). | ### Data Splits - train (Wikipedia x Spinbot) - test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief]) ## Dataset Creation ### Curation Rationale Providing a resource for testing against machine-paraprhased plagiarism. ### Source Data #### Initial Data Collection and Normalization - Paragraphs from `featured articles` from the English Wikipedia dump - Paragraphs from full-text pdfs of arXMLiv - Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD). #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [Jan Philip Wahle](https://jpwahle.com/) ### Licensing Information The Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms. ### Citation Information ```bib @inproceedings{10.1007/978-3-030-96957-8_34, title = {Identifying Machine-Paraphrased Plagiarism}, author = {Wahle, Jan Philip and Ruas, Terry and Folt{\'y}nek, Tom{\'a}{\v{s}} and Meuschke, Norman and Gipp, Bela}, year = 2022, booktitle = {Information for a Better World: Shaping the Global Future}, publisher = {Springer International Publishing}, address = {Cham}, pages = {393--413}, isbn = {978-3-030-96957-8}, editor = {Smits, Malte}, abstract = {Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine learning classifiers and state-of-the-art neural language models. We analyze preprints of research papers, graduation theses, and Wikipedia articles, which we paraphrased using different configurations of the tools SpinBot and SpinnerChief. The best performing technique, Longformer, achieved an average F1 score of 80.99{\%} (F1 = 99.68{\%} for SpinBot and F1 = 71.64{\%} for SpinnerChief cases), while human evaluators achieved F1 = 78.4{\%} for SpinBot and F1 = 65.6{\%} for SpinnerChief cases. We show that the automated classification alleviates shortcomings of widely-used text-matching systems, such as Turnitin and PlagScan.} } ``` ### Contributions Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset.
false
## Dataset Description - **Repository:** https://github.com/shuyanzhou/docprompting - **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf) ### Dataset Summary This is the re-split of [CoNaLa](https://conala-corpus.github.io/) dataset. For each code snippet in the dev and test set, at least one function is held out from the training set. This split aims at testing a code generation model's capacity in generating *unseen* functions We further make sure that examples from the same StackOverflow post (same `question_id` before `-`) are in the same split. ### Supported Tasks and Leaderboards This dataset is used to evaluate code generations. ### Languages English - Python code. ## Dataset Structure ```python dataset = load_dataset("neulab/docpromting-conala") DatasetDict({ train: Dataset({ features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'], num_rows: 2135 }) test: Dataset({ features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'], num_rows: 543 }) validation: Dataset({ features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'], num_rows: 201 }) }) }) code_docs = load_dataset("neulab/docprompting-conala", "docs") DatasetDict({ train: Dataset({ features: ['doc_id', 'doc_content'], num_rows: 34003 }) }) ``` ### Data Fields train/dev/test: - nl: The natural language intent - cmd: The reference code snippet - question_id: `x-y`where `x` is the StackOverflow post ID - oracle_man: The `doc_id` of the functions used in the reference code snippet. The corresponding contents are in `doc` split - canonical_cmd: The canonical version reference code snippet docs: - doc_id: the id of a doc - doc_content: the content of the doc ## Dataset Creation The dataset was crawled from Stack Overflow, automatically filtered, then curated by annotators. For more details, please refer to the original [paper](https://arxiv.org/pdf/1805.08949.pdf) ### Citation Information ``` @article{zhou2022doccoder, title={DocCoder: Generating Code by Retrieving and Reading Docs}, author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and JIang, Zhengbao and Neubig, Graham}, journal={arXiv preprint arXiv:2207.05987}, year={2022} } ```
false
# Dataset Card for "newsroom" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://lil.nlp.cornell.edu/newsroom/index.html](https://lil.nlp.cornell.edu/newsroom/index.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 5.30 GB - **Total amount of disk used:** 5.30 GB ### Dataset Summary NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. Dataset features includes: - text: Input news text. - summary: Summary for the news. And additional features: - title: news title. - url: url of the news. - date: date of the article. - density: extractive density. - coverage: extractive coverage. - compression: compression ratio. - density_bin: low, medium, high. - coverage_bin: extractive, abstractive. - compression_bin: low, medium, high. This dataset can be downloaded upon requests. Unzip all the contents "train.jsonl, dev.josnl, test.jsonl" to the `tfds` folder. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages English (`en`). ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 5.30 GB - **Total amount of disk used:** 5.30 GB An example of 'train' looks as follows. ``` { "compression": 33.880001068115234, "compression_bin": "medium", "coverage": 1.0, "coverage_bin": "high", "date": "200600000", "density": 11.720000267028809, "density_bin": "extractive", "summary": "some summary 1", "text": "some text 1", "title": "news title 1", "url": "url.html" } ``` ### Data Fields The data fields are the same among all splits. #### default - `text`: a `string` feature. - `summary`: a `string` feature. - `title`: a `string` feature. - `url`: a `string` feature. - `date`: a `string` feature. - `density_bin`: a `string` feature. - `coverage_bin`: a `string` feature. - `compression_bin`: a `string` feature. - `density`: a `float32` feature. - `coverage`: a `float32` feature. - `compression`: a `float32` feature. ### Data Splits | name |train |validation| test | |-------|-----:|---------:|-----:| |default|995041| 108837|108862| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information https://cornell.qualtrics.com/jfe/form/SV_6YA3HQ2p75XH4IR This Dataset Usage Agreement ("Agreement") is a legal agreement with the Cornell Newsroom Summaries Team ("Newsroom") for the Dataset made available to the individual or entity ("Researcher") exercising rights under this Agreement. "Dataset" includes all text, data, information, source code, and any related materials, documentation, files, media, updates or revisions. The Dataset is intended for non-commercial research and educational purposes only, and is made available free of charge without extending any license or other intellectual property rights. By downloading or using the Dataset, the Researcher acknowledges that they agree to the terms in this Agreement, and represent and warrant that they have authority to do so on behalf of any entity exercising rights under this Agreement. The Researcher accepts and agrees to be bound by the terms and conditions of this Agreement. If the Researcher does not agree to this Agreement, they may not download or use the Dataset. By sharing content with Newsroom, such as by submitting content to this site or by corresponding with Newsroom contributors, the Researcher grants Newsroom the right to use, reproduce, display, perform, adapt, modify, distribute, have distributed, and promote the content in any form, anywhere and for any purpose, such as for evaluating and comparing summarization systems. Nothing in this Agreement shall obligate Newsroom to provide any support for the Dataset. Any feedback, suggestions, ideas, comments, improvements given by the Researcher related to the Dataset is voluntarily given, and may be used by Newsroom without obligation or restriction of any kind. The Researcher accepts full responsibility for their use of the Dataset and shall defend indemnify, and hold harmless Newsroom, including their employees, trustees, officers, and agents, against any and all claims arising from the Researcher's use of the Dataset. The Researcher agrees to comply with all laws and regulations as they relate to access to and use of the Dataset and Service including U.S. export jurisdiction and other U.S. and international regulations. THE DATASET IS PROVIDED "AS IS." NEWSROOM DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WITHOUT LIMITATION OF THE ABOVE, NEWSROOM DISCLAIMS ANY WARRANTY THAT DATASET IS BUG OR ERROR-FREE, AND GRANTS NO WARRANTY REGARDING ITS USE OR THE RESULTS THEREFROM INCLUDING, WITHOUT LIMITATION, ITS CORRECTNESS, ACCURACY, OR RELIABILITY. THE DATASET IS NOT WARRANTIED TO FULFILL ANY PARTICULAR PURPOSES OR NEEDS. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT SHALL NEWSROOM BE LIABLE FOR ANY LOSS, DAMAGE OR INJURY, DIRECT AND INDIRECT, INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER FOR BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE) OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, INCLUDING BUT NOT LIMITED TO LOSS OF PROFITS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS SHALL APPLY NOTWITHSTANDING ANY FAILURE OF ESSENTIAL PURPOSE OF ANY LIMITED REMEDY. This Agreement is effective until terminated. Newsroom reserves the right to terminate the Researcher's access to the Dataset at any time. If the Researcher breaches this Agreement, the Researcher's rights to use the Dataset shall terminate automatically. The Researcher will immediately cease all use and distribution of the Dataset and destroy any copies or portions of the Dataset in their possession. This Agreement is governed by the laws of the State of New York, without regard to conflict of law principles. All terms and provisions of this Agreement shall, if possible, be construed in a manner which makes them valid, but in the event any term or provision of this Agreement is found by a court of competent jurisdiction to be illegal or unenforceable, the validity or enforceability of the remainder of this Agreement shall not be affected. This Agreement is the complete and exclusive agreement between the parties with respect to its subject matter and supersedes all prior or contemporaneous oral or written agreements or understandings relating to the subject matter. ### Citation Information ``` @inproceedings{N18-1065, author = {Grusky, Max and Naaman, Mor and Artzi, Yoav}, title = {NEWSROOM: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies}, booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year = {2018}, } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@yoavartzi](https://github.com/yoavartzi), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
false
# Dataset Card for Grail QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Grail QA](https://dki-lab.github.io/GrailQA/) - **Repository:** - **Paper:** [GrailQA paper (Gu et al. '20)](https://arxiv.org/abs/2011.07743) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary #### What is GrailQA? Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot. #### Why GrailQA? GrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English and Graph query ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `qid` (`str`) - `question` (`str`) - `answer` (`List`): Defaults to `[]` in test split. - `answer_type` (`str`) - `answer_argument` (`str`) - `entity_name` (`str`): Defauts to `""` if `answer_type` is not `Entity`. - `function` (`string`): Defaults to `""` in test split. - `num_node` (`int`): Defaults to `-1` in test split. - `num_edge` (`int`): Defaults to `-1` in test split. - `graph_query` (`Dict`) - `nodes` (`List`): Defaults to `[]` in test split. - `nid` (`int`) - `node_type` (`str`) - `id` (`str`) - `class` (`str`) - `friendly_name` (`str`) - `question_node` (`int`) - `function` (`str`) - `edges` (`List`): Defaults to `[]` in test split. - `start` (`int`) - `end` (`int`) - `relation` (`str`) - `friendly_name` (`str`) - `sqarql_query` (`str`): Defaults to `""` in test split. - `domains` (`List[str]`): Defaults to `[]` in test split. - `level` (`str`): Only available in validation split. Defaults to `""` in others. - `s_expression` (`str`): Defaults to `""` in test split. **Notes:** Only `qid` and `question` available in test split. ### Data Splits Dataset Split | Number of Instances in Split --------------|-------------------------------------------- Train | 44,337 Validation | 6,763 Test | 13,231 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
false
# Dataset Card for "GPT4All-Clean" The GPT4All-Clean dataset is a modified version of the original GPT4All dataset. It contains 374,269 examples, which are mostly converted to markdown format to improve consistency and compatibility with other datasets that use markdown formatting. The dataset is smaller than the original dataset, which has 437,604 examples, due to the removal of certain content. Specifically, all examples containing the phrase "As an AI language model" have been removed, as well as examples containing the string "html" to minimize potential confusion between real and non-real HTML code for the parser used to clean the examples. The intention behind these modifications is to enhance the dataset's overall quality, making it more suitable for use in research and applications.
false
# Dataset Card for Afrikaans Ner Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Afrikaans Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/299) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za) ### Dataset Summary The Afrikaans Ner Corpus is an Afrikaans dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African goverment domain and crawled from gov.za websites. It was created to support NER task for Afrikaans language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Afrikaans. ## Dataset Structure ### Data Instances A data point consists of sentences seperated by empty line and tab-seperated tokens and tags. {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Vertaling', 'van', 'die', 'inligting', 'in'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to new language - Afrikaans. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on South African government domain and was crawled from gov.za websites. [More Information Needed] #### Who are the source language producers? The data was produced by writers of South African government websites - gov.za [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{afrikaans_ner_corpus, author = { Gerhard van Huyssteen and Martin Puttkammer and E.B. Trollip and J.C. Liversage and Roald Eiselen}, title = {NCHLT Afrikaans Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/299}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
true
# Dataset Card for SLUE ## Table of Contents - [Dataset Card for SLUE](#dataset-card-for-slue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr) - [Named Entity Recognition (NER)](#named-entity-recognition-ner) - [Sentiment Analysis (SA)](#sentiment-analysis-sa) - [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [voxpopuli](#voxpopuli) - [voxceleb](#voxceleb) - [Data Fields](#data-fields) - [voxpopuli](#voxpopuli-1) - [voxceleb](#voxceleb-1) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset) - [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset) - [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit) - **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/) - **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf) - **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html) - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 9.59 MB - **Total amount of disk used:** 1.95 GB ### Dataset Summary We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to - Track research progress on multiple SLU tasks - Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks - Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use. For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details. ### Supported Tasks and Leaderboards #### Automatic Speech Recognition (ASR) Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER). #### Named Entity Recognition (NER) Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1. #### Sentiment Analysis (SA) Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.[More Information Needed] #### How-to-submit for your test set evaluation See here https://asappresearch.github.io/slue-toolkit/how-to-submit.html ### Languages The language data in SLUE is in English. ## Dataset Structure ### Data Instances #### voxpopuli - **Size of downloaded dataset files:** 398.45 MB - **Size of the generated dataset:** 5.81 MB - **Total amount of disk used:** 404.26 MB An example of 'train' looks as follows. ``` {'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3', 'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg', 'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217, 0.07835515, 0.07845446], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': 'None', 'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will', 'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will', 'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'], 'start': [227, 177, 28, 0], 'length': [6, 6, 21, 4]}, 'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'], 'start': [243, 194, 45, 0], 'length': [6, 6, 21, 23]}, 'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'], 'start': [227, 177, 28, 0], 'length': [6, 6, 21, 4]}, 'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'], 'start': [243, 194, 45, 0], 'length': [6, 6, 21, 23]}} ``` #### voxceleb - **Size of downloaded dataset files:** 1.55 GB - **Size of the generated dataset:** 3.78 MB - **Total amount of disk used:** 1.55 GB An example of 'train' looks as follows. ``` {'id': 'id10059_229vKIGbxrI_00004', 'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac', 'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691, 0.00100708, 0.00033569], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': 'id10059', 'normalized_text': 'of god what is a creator the almighty that uh', 'sentiment': 'Neutral', 'start_second': 0.45, 'end_second': 4.52} ``` ### Data Fields #### voxpopuli - `id`: a `string` id of an instance. - `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `speaker_id`: a `string` of the speaker id. - `raw_text`: a `string` feature that contains the raw transcription of the audio. - `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**. - `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes. - `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes. - `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`). - `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**. Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity. #### voxceleb - `id`: a `string` id of an instance. - `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sample_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sample_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `speaker_id`: a `string` of the speaker id. - `normalized_text`: a `string` feature that contains the transcription of the audio segment. - `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`. - `start_second`: a `float` feature that specifies the start second of the audio segment. - `end_second`: a `float` feature that specifies the end second of the audio segment. ### Data Splits | |train|validation|test| |---------|----:|---------:|---:| |voxpopuli| 5000| 1753|1842| |voxceleb | 5777| 1454|3553| Here we use the standard split names in Huggingface's datasets, so the `train` and `validation` splits are the original `fine-tune` and `dev` splits of SLUE datasets, respectively. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information #### SLUE-VoxPopuli Dataset SLUE-VoxPopuli dataset contains a subset of VoxPopuli dataset and the copyright of this subset remains the same with the original license, CC0. See also European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/) Additionally, we provide named entity annotation (normalized_ner and raw_ner column in .tsv files) and it is covered with the same license as CC0. #### SLUE-VoxCeleb Dataset SLUE-VoxCeleb Dataset contains a subset of OXFORD VoxCeleb dataset and the copyright of this subset remains the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcription, sentiment annotation and timestamp (start, end) that follows the same license to OXFORD VoxCeleb dataset. ##### Original License of OXFORD VGG VoxCeleb Dataset VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube. The speakers span a wide range of different ethnicities, accents, professions and ages. We provide Youtube URLs, associated face detections, and timestamps, as well as cropped audio segments and cropped face videos from the dataset. The copyright of both the original and cropped versions of the videos remains with the original owners. The data is covered under a Creative Commons Attribution 4.0 International license (Please read the license terms here. https://creativecommons.org/licenses/by/4.0/). Downloading this dataset implies agreement to follow the same conditions for any modification and/or re-distribution of the dataset in any form. Additionally any entity using this dataset agrees to the following conditions: THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Please cite [1,2] below if you make use of the dataset. [1] J. S. Chung, A. Nagrani, A. Zisserman VoxCeleb2: Deep Speaker Recognition INTERSPEECH, 2018. [2] A. Nagrani, J. S. Chung, A. Zisserman VoxCeleb: a large-scale speaker identification dataset INTERSPEECH, 2017 ### Citation Information ``` @inproceedings{shon2022slue, title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech}, author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J}, booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7927--7931}, year={2022}, organization={IEEE} } ``` ### Contributions Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset.
true
# Dataset Card for ArSarcasm ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [GitHub](https://github.com/iabufarha/ArSarcasm) - **Paper:** https://www.aclweb.org/anthology/2020.osact-1.5/ ### Dataset Summary ArSarcasm is a new Arabic sarcasm detection dataset. The dataset was created using previously available Arabic sentiment analysis datasets ([SemEval 2017](https://www.aclweb.org/anthology/S17-2088.pdf) and [ASTD](https://www.aclweb.org/anthology/D15-1299.pdf)) and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic. For more details, please check the paper [From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset](https://www.aclweb.org/anthology/2020.osact-1.5/) ### Supported Tasks and Leaderboards You can get more information about an Arabic sarcasm tasks and leaderboard [here](https://sites.google.com/view/ar-sarcasm-sentiment-detection/). ### Languages Arabic (multiple dialects) ## Dataset Structure ### Data Instances ```javascript {'dialect': 1, 'original_sentiment': 0, 'sarcasm': 0, 'sentiment': 0, 'source': 'semeval', 'tweet': 'نصيحه ما عمرك اتنزل لعبة سوبر ماريو مش زي ما كنّا متوقعين الله يرحم ايامات السيقا والفاميلي #SuperMarioRun'} ``` ### Data Fields - tweet: the original tweet text - sarcasm: 0 for non-sarcastic, 1 for sarcastic - sentiment: 0 for negative, 1 for neutral, 2 for positive - original_sentiment: 0 for negative, 1 for neutral, 2 for positive - source: the original source of tweet: SemEval or ASTD - dialect: 0 for Egypt, 1 for Gulf, 2 for Levant, 3 for Magreb, 4 for Modern Standard Arabic (MSA) ### Data Splits The training set contains 8,437 tweets, while the test set contains 2,110 tweets. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD) and adds sarcasm and dialect labels to them. #### Who are the source language producers? SemEval 2017 and ASTD ### Annotations #### Annotation process For the annotation process, we used Figure-Eight crowdsourcing platform. Our main objective was to annotate the data for sarcasm detection, but due to the challenges imposed by dialectal variations, we decided to add the annotation for dialects. We also include a new annotation for sentiment labels in order to have a glimpse of the variability and subjectivity between different annotators. Thus, the annotators were asked to provide three labels for each tweet as the following: - Sarcasm: sarcastic or non-sarcastic. - Sentiment: positive, negative or neutral. - Dialect: Egyptian, Gulf, Levantine, Maghrebi or Modern Standard Arabic (MSA). #### Who are the annotators? Figure-Eight crowdsourcing platform ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - Ibrahim Abu-Farha - Walid Magdy ### Licensing Information MIT ### Citation Information ``` @inproceedings{abu-farha-magdy-2020-arabic, title = "From {A}rabic Sentiment Analysis to Sarcasm Detection: The {A}r{S}arcasm Dataset", author = "Abu Farha, Ibrahim and Magdy, Walid", booktitle = "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resource Association", url = "https://www.aclweb.org/anthology/2020.osact-1.5", pages = "32--39", language = "English", ISBN = "979-10-95546-51-1", } ``` ### Contributions Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
false
# Dataset Card for the Klexikon Dataset ## Table of Contents - [Version History](#version-history) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Version History - **v0.3** (2022-09-01): Removing some five samples from the dataset due to duplication conflicts with other samples. - **v0.2** (2022-02-28): Updated the files to no longer contain empty sections and removing otherwise empty lines at the end of files. Also removing lines with some sort of coordinate. - **v0.1** (2022-01-19): Initial data release on Huggingface datasets. ## Dataset Description - **Homepage:** [N/A] - **Repository:** [Klexikon repository](https://github.com/dennlinger/klexikon) - **Paper:** [Klexikon: A German Dataset for Joint Summarization and Simplification](https://arxiv.org/abs/2201.07198) - **Leaderboard:** [N/A] - **Point of Contact:** [Dennis Aumiller](mailto:dennis.aumiller@gmail.com) ### Dataset Summary The Klexikon dataset is a German resource of document-aligned texts between German Wikipedia and the children's lexicon "Klexikon". The dataset was created for the purpose of joint text simplification and summarization, and contains almost 2900 aligned article pairs. Notably, the children's articles use a simpler language than the original Wikipedia articles; this is in addition to a clear length discrepancy between the source (Wikipedia) and target (Klexikon) domain. ### Supported Tasks and Leaderboards - `summarization`: The dataset can be used to train a model for summarization. In particular, it poses a harder challenge than some of the commonly used datasets (CNN/DailyMail), which tend to suffer from positional biases in the source text. This makes it very easy to generate high (ROUGE) scoring solutions, by simply taking the leading 3 sentences. Our dataset provides a more challenging extraction task, combined with the additional difficulty of finding lexically appropriate simplifications. - `simplification`: While not currently supported by the HF task board, text simplification is concerned with the appropriate representation of a text for disadvantaged readers (e.g., children, language learners, dyslexic,...). For scoring, we ran preliminary experiments based on [ROUGE](https://huggingface.co/metrics/rouge), however, we want to cautiously point out that ROUGE is incapable of accurately depicting simplification appropriateness. We combined this with looking at Flesch readability scores, as implemented by [textstat](https://github.com/shivam5992/textstat). Note that simplification metrics such as [SARI](https://huggingface.co/metrics/sari) are not applicable here, since they require sentence alignments, which we do not provide. ### Languages The associated BCP-47 code is `de-DE`. The text of the articles is in German. Klexikon articles are further undergoing a simple form of peer-review before publication, and aim to simplify language for 8-13 year old children. This means that the general expected text difficulty for Klexikon articles is lower than Wikipedia's entries. ## Dataset Structure ### Data Instances One datapoint represents the Wikipedia text (`wiki_text`), as well as the Klexikon text (`klexikon_text`). Sentences are separated by newlines for both datasets, and section headings are indicated by leading `==` (or `===` for subheadings, `====` for sub-subheading, etc.). Further, it includes the `wiki_url` and `klexikon_url`, pointing to the respective source texts. Note that the original articles were extracted in April 2021, so re-crawling the texts yourself will likely change some content. Lastly, we include a unique identifier `u_id` as well as the page title `title` of the Klexikon page. Sample (abridged texts for clarity): ``` { "u_id": 0, "title": "ABBA", "wiki_url": "https://de.wikipedia.org/wiki/ABBA", "klexikon_url": "https://klexikon.zum.de/wiki/ABBA", "wiki_sentences": [ "ABBA ist eine schwedische Popgruppe, die aus den damaligen Paaren Agnetha Fältskog und Björn Ulvaeus sowie Benny Andersson und Anni-Frid Lyngstad besteht und sich 1972 in Stockholm formierte.", "Sie gehört mit rund 400 Millionen verkauften Tonträgern zu den erfolgreichsten Bands der Musikgeschichte.", "Bis in die 1970er Jahre hatte es keine andere Band aus Schweden oder Skandinavien gegeben, der vergleichbare Erfolge gelungen waren.", "Trotz amerikanischer und britischer Dominanz im Musikgeschäft gelang der Band ein internationaler Durchbruch.", "Sie hat die Geschichte der Popmusik mitgeprägt.", "Zu ihren bekanntesten Songs zählen Mamma Mia, Dancing Queen und The Winner Takes It All.", "1982 beendeten die Gruppenmitglieder aufgrund privater Differenzen ihre musikalische Zusammenarbeit.", "Seit 2016 arbeiten die vier Musiker wieder zusammen an neuer Musik, die 2021 erscheinen soll.", ], "klexikon_sentences": [ "ABBA war eine Musikgruppe aus Schweden.", "Ihre Musikrichtung war die Popmusik.", "Der Name entstand aus den Anfangsbuchstaben der Vornamen der Mitglieder, Agnetha, Björn, Benny und Anni-Frid.", "Benny Andersson und Björn Ulvaeus, die beiden Männer, schrieben die Lieder und spielten Klavier und Gitarre.", "Anni-Frid Lyngstad und Agnetha Fältskog sangen." ] }, ``` ### Data Fields * `u_id` (`int`): A unique identifier for each document pair in the dataset. 0-2349 are reserved for training data, 2350-2623 for testing, and 2364-2897 for validation. * `title` (`str`): Title of the Klexikon page for this sample. * `wiki_url` (`str`): URL of the associated Wikipedia article. Notably, this is non-trivial, since we potentially have disambiguated pages, where the Wikipedia title is not exactly the same as the Klexikon one. * `klexikon_url` (`str`): URL of the Klexikon article. * `wiki_text` (`List[str]`): List of sentences of the Wikipedia article. We prepare a pre-split document with spacy's sentence splitting (model: `de_core_news_md`). Additionally, please note that we do not include page contents outside of `<p>` tags, which excludes lists, captions and images. * `klexikon_text` (`List[str]`): List of sentences of the Klexikon article. We apply the same processing as for the Wikipedia texts. ### Data Splits We provide a stratified split of the dataset, based on the length of the respective Wiki article/Klexikon article pair (according to number of sentences). The x-axis represents the length of the Wikipedia article, and the y-axis the length of the Klexikon article. We segment the coordinate systems into rectangles of shape `(100, 10)`, and randomly sample a split of 80/10/10 for training/validation/test from each rectangle to ensure stratification. In case of rectangles with less than 10 entries, we put all samples into training. The final splits have the following size: * 2350 samples for training * 274 samples for validation * 274 samples for testing ## Dataset Creation ### Curation Rationale As previously described, the Klexikon resource was created as an attempt to bridge the two fields of text summarization and text simplification. Previous datasets suffer from either one or more of the following shortcomings: * They primarily focus on input/output pairs of similar lengths, which does not reflect longer-form texts. * Data exists primarily for English, and other languages are notoriously understudied. * Alignments exist for sentence-level, but not document-level. This dataset serves as a starting point to investigate the feasibility of end-to-end simplification systems for longer input documents. ### Source Data #### Initial Data Collection and Normalization Data was collected from [Klexikon](klexikon.zum.de), and afterwards aligned with corresponding texts from [German Wikipedia](de.wikipedia.org). Specifically, the collection process was performed in April 2021, and 3145 articles could be extracted from Klexikon back then. Afterwards, we semi-automatically align the articles with Wikipedia, by looking up articles with the same title. For articles that do not exactly match, we manually review their content, and decide to match to an appropriate substitute if the content can be matched by at least 66% of the Klexikon paragraphs. Similarly, we proceed to manually review disambiguation pages on Wikipedia. We extract only full-text content, excluding figures, captions, and list elements from the final text corpus, and only retain articles for which the respective Wikipedia document consists of at least 15 paragraphs after pre-processing. #### Who are the source language producers? The language producers are contributors to Klexikon and Wikipedia. No demographic information was available from the data sources. ### Annotations #### Annotation process Annotations were performed by manually reviewing the URLs of the ambiguous article pairs. No annotation platforms or existing tools were used in the process. Otherwise, articles were matched based on the exact title. #### Who are the annotators? The manually aligned articles were reviewed by the dataset author (Dennis Aumiller). ### Personal and Sensitive Information Since Klexikon and Wikipedia are public encyclopedias, no further personal or sensitive information is included. We did not investigate to what extent information about public figures is included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset Accessibility on the web is still a big issue, particularly for disadvantaged readers. This dataset has the potential to strengthen text simplification systems, which can improve the situation. In terms of language coverage, this dataset also has a beneficial impact on the availability of German data. Potential negative biases include the problems of automatically aligned articles. The alignments may never be 100% perfect, and can therefore cause mis-aligned articles (or associations), despite the best of our intentions. ### Discussion of Biases We have not tested whether any particular bias towards a specific article *type* (i.e., "person", "city", etc.) exists. Similarly, we attempted to present an unbiased (stratified) split for validation and test set, but given that we only cover around 2900 articles, it is possible that these articles represent a particular focal lense on the overall distribution of lexical content. ### Other Known Limitations Since the articles were written independently of each other, it is not guaranteed that there exists an exact coverage of each sentence in the simplified article, which could also stem from the fact that sometimes Wikipedia pages have separate article pages for aspects (e.g., the city of "Aarhus" has a separate page for its art museum (ARoS). However, Klexikon lists content and description for ARoS on the page of the city itself. ## Additional Information ### Dataset Curators The dataset was curated only by the author of this dataset, Dennis Aumiller. ### Licensing Information Klexikon and Wikipedia make their textual contents available under the CC BY-SA license, which will be inherited for this dataset. ### Citation Information If you use our dataset or associated code, please cite our paper: ``` @inproceedings{aumiller-gertz-2022-klexikon, title = "Klexikon: A {G}erman Dataset for Joint Summarization and Simplification", author = "Aumiller, Dennis and Gertz, Michael", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.288", pages = "2693--2701" } ```
false
cc-by-nc-sa-4.0--- annotations_creators: - machine-generated language_creators: - machine-generated language: - en multilinguality: - monolingual size_categories: - unknown source_datasets: - original task_categories: - text-retrieval - text-generation task_ids: [] pretty_name: rebel-dataset tags: - relation-extraction - conditional-text-generation --- # Dataset Card for REBEL dataset ## Table of Contents - [Dataset Card for REBEL dataset](#dataset-card-for-rebel) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel) - **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf) - **Point of Contact:** [huguetcabot@babelscape.com](huguetcabot@babelscape.com) ### Dataset Summary Dataset created for [REBEL](https://huggingface.co/Babelscape/rebel-large) dataset from interlinking Wikidata and Wikipedia for Relation Extraction, filtered using NLI. ### Supported Tasks and Leaderboards - `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type. Success on this task is typically measured by achieving a *high* [F1](https://huggingface.co/metrics/F1). The [BART](https://huggingface.co/transformers/model_doc/bart.html)) model currently achieves the following score: 74 Micro F1 and 51 Macro F1 for the 220 most frequent relation types. ### Languages The dataset is in English, from the English Wikipedia. ## Dataset Structure ### Data Instances REBEL - `Size of downloaded dataset files`: 1490.02 MB - `Size of the generated dataset`: 1199.27 MB - `Total amount of disk used`: 2689.29 MB ``` { 'id': 'Q82442-1', 'title': 'Arsène Lupin, Gentleman Burglar', 'context': 'Arsène Lupin , Gentleman Burglar is the first collection of stories by Maurice Leblanc recounting the adventures of Arsène Lupin , released on 10 June 1907 .', 'triplets': '<triplet> Arsène Lupin, Gentleman Burglar <subj> Maurice Leblanc <obj> author <triplet> Arsène Lupin <subj> Maurice Leblanc <obj> creator' } ``` The original data is in jsonl format and contains much more information. It is divided by Wikipedia articles instead of by sentence, and contains metadata about Wikidata entities, their boundaries in the text, how it was annotated, etc. For more information check the [paper repository](https://huggingface.co/Babelscape/rebel-large) and how it was generated using the Relation Extraction dataset pipeline, [cRocoDiLe](https://github.com/Babelscape/crocodile). ### Data Fields List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points. - `id`: ID of the instance. It contains a unique id matching to a Wikipedia page and a number separated by a hyphen indicating which sentence of the Wikipedia article it is. - `title`: Title of the Wikipedia page the sentence comes from. - `context`: Text from Wikipedia articles that serves as context for the Relation Extraction task. - `triplets`: Linearized version of the triplets present in the text, split by the use of special tokens. For more info on this linearization check the [paper](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf). ### Data Splits Test and Validation splits are each 5% of the original data. Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example: | | Tain | Valid | Test | | ----- | ------ | ----- | ---- | | Input Sentences | 3,120,296 | 172,860 | 173,601 | | Input Sentences (top 220 relation types as used in original paper) | 784,202 | 43,341 | 43,506 | | Number of Triplets (top 220 relation types as used in original paper) | 878,555 | 48,514 | 48,852 | ## Dataset Creation ### Curation Rationale This dataset was created to enable the training of a BART based model as pre-training phase for Relation Extraction as seen in the paper [REBEL: Relation Extraction By End-to-end Language generation](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf). ### Source Data Data comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation. #### Initial Data Collection and Normalization For the data collection, the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile) insipired by [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset) more details found at: [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one. After the triplets are extracted, an NLI system was used to filter out those not entailed by the text. #### Who are the source language producers? Any Wikipedia and Wikidata contributor. ### Annotations #### Annotation process The dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile). #### Who are the annotators? Automatic annottations ### Personal and Sensitive Information All text is from Wikipedia, any Personal or Sensitive Information there may be present in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset serves as a pre-training step for Relation Extraction models. It is distantly annotated, hence it should only be used as such. A model trained solely on this dataset may produce allucinations coming from the silver nature of the dataset. ### Discussion of Biases Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases withing those sources. For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic. For Wikidata, there are class imbalances, also resulting from Wikipedia. ### Other Known Limitations Not for now ## Additional Information ### Dataset Curators Pere-Lluis Huguet Cabot - Babelscape and Sapienza University of Rome, Italy Roberto Navigli - Sapienza University of Rome, Italy ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example: ``` @inproceedings{huguet-cabot-navigli-2021-rebel, title = "REBEL: Relation Extraction By End-to-end Language generation", author = "Huguet Cabot, Pere-Llu{\'\i}s and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf", } ``` ### Contributions Thanks to [@littlepea13](https://github.com/LittlePea13) for adding this dataset.
false
# Dataset Card for LC-QuAD 2.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://lc-quad.sda.tech/](http://lc-quad.sda.tech/) - **Repository:** https://github.com/AskNowQA/LC-QuAD2.0 - **Paper:** [LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia](https://api.semanticscholar.org/CorpusID:198166992) - **Point of Contact:** [Mohnish Dubey](mailto:dubey@cs.uni-bonn.de) or [Mohnish Dubey](mailto:dubey.mohnish5@gmail.com) - **Size of downloaded dataset files:** 3.87 MB - **Size of the generated dataset:** 20.73 MB - **Total amount of disk used:** 24.60 MB ### Dataset Summary LC-QuAD 2.0 is a Large Question Answering dataset with 30,000 pairs of question and its corresponding SPARQL query. The target knowledge base is Wikidata and DBpedia, specifically the 2018 version. Please see our paper for details about the dataset creation process and framework. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 3.87 MB - **Size of the generated dataset:** 20.73 MB - **Total amount of disk used:** 24.60 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "NNQT_question": "What is the {periodical literature} for {mouthpiece} of {Delta Air Lines}", "paraphrased_question": "What is Delta Air Line's periodical literature mouthpiece?", "question": "What periodical literature does Delta Air Lines use as a moutpiece?", "sparql_dbpedia18": "\"select distinct ?obj where { ?statement <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> <http://wikidata.dbpedia.org/resou...", "sparql_wikidata": " select distinct ?obj where { wd:Q188920 wdt:P2813 ?obj . ?obj wdt:P31 wd:Q1002697 } ", "subgraph": "simple question right", "template": " <S P ?O ; ?O instanceOf Type>", "template_index": 65, "uid": 19719 } ``` ### Data Fields The data fields are the same among all splits. #### default - `NNQT_question`: a `string` feature. - `uid`: a `int32` feature. - `subgraph`: a `string` feature. - `template_index`: a `int32` feature. - `question`: a `string` feature. - `sparql_wikidata`: a `string` feature. - `sparql_dbpedia18`: a `string` feature. - `template`: a `string` feature. - `paraphrased_question`: a `string` feature. ### Data Splits | name |train|test| |-------|----:|---:| |default|19293|4781| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information LC-QuAD 2.0 is licensed under a [Creative Commons Attribution 3.0 Unported License](http://creativecommons.org/licenses/by/3.0/deed.en_US). ### Citation Information ``` @inproceedings{dubey2017lc2, title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia}, author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens}, booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)}, year={2019}, organization={Springer} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
false
<div align="center"> <img width="640" alt="keremberke/satellite-building-segmentation" src="https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['building'] ``` ### Number of Images ```json {'train': 6764, 'valid': 1934, 'test': 967} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/satellite-building-segmentation", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1?ref=roboflow2huggingface) ### Citation ``` @misc{ buildings-instance-segmentation_dataset, title = { Buildings Instance Segmentation Dataset }, type = { Open Source Dataset }, author = { Roboflow Universe Projects }, howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } }, url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2023 }, month = { jan }, note = { visited on 2023-01-18 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on January 16, 2023 at 9:09 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand and search unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time For state of the art Computer Vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com The dataset includes 9665 images. Buildings are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) No image augmentation techniques were applied.
false
# Dataset Card for HindEnCorp ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0 - **Repository:** https://lindat.mff.cuni.cz/repository/xmlui/ - **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/835_Paper.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary HindEnCorp parallel texts (sentence-aligned) come from the following sources: Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally col- lected for the DARPA-TIDES surprise-language con- test in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008). Commentaries by Daniel Pipes contain 322 articles in English written by a journalist Daniel Pipes and translated into Hindi. EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual sub- corpora, including both written and (for some lan- guages) spoken data for fourteen South Asian lan- guages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages. Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and Agriculture domain parallel corpus.  For the current release, we are extending the parallel corpus using these sources: Intercorp (Čermák and Rosen,2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominately short sto- ries and novels. There are seven Hindi texts in Inter- corp. Unfortunately, only for three of them the English translation is available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi. TED talks 3 held in various languages, primarily English, are equipped with transcripts and these are translated into 102 languages. There are 179 talks for which Hindi translation is available. The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects starting from typesetting and punctuation over capi- talization, spelling, word choice to sentence structure. A little bit of control could be in principle obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus. Launchpad.net is a software collaboration platform that hosts many open-source projects and facilitates also collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files. Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entitity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Hindi, English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields HindEncorp Columns: - source identifier (where do the segments come from) - alignment type (number of English segments - number of Hindi segments) - alignment quality, which is one of the following: "manual" ... for sources that were sentence-aligned manually "implied" ... for sources where one side was constructed by translating segment by segment float ... a value somehow reflecting the goodness of the automatic alignment; not really reliable - English segment or segments - Hindi segment or segments Each of the segments field is in the plaintext or export format as described above. If there are more than one segments on a line (e.g. for lines with alignment type 2-1 where there are two English segments), then the segments are delimited with `<s>` in the text field. ### Data Splits [More Information Needed] ## Dataset Creation ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Daniel Pipes,Baker,Bojar,"Čermák and Rosen,2012","Birch et al., 2011; Post et al., 2012" ### Annotations #### Annotation process the 1st part of data TIDES was originally col- lected for the DARPA-TIDES surprise-language con- test in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008). #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators Bojar, Ondřej ; Diatka, Vojtěch ; Straňák, Pavel ; Tamchyna, Aleš ; Zeman, Daniel ### Licensing Information CC BY-NC-SA 3.0 ### Citation Information @InProceedings{hindencorp05:lrec:2014, author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman}, title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine Translation}", booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)}, year = {2014}, month = {may}, date = {26-31}, address = {Reykjavik, Iceland}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-8-4}, language = {english} } ### Contributions Thanks to [@rahul-art](https://github.com/rahul-art) for adding this dataset.
true
# Dataset Card for Swedish Reviews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [swedish_reviews homepage](https://github.com/timpal0l/swedish-sentiment) - **Repository:** [swedish_reviews repository](https://github.com/timpal0l/swedish-sentiment) - **Point of Contact:** [Tim Isbister](mailto:timisbisters@gmail.com) ### Dataset Summary The dataset is scraped from various Swedish websites where reviews are present. The dataset consists of 103 482 samples split between `train`, `valid` and `test`. It is a sample of the full dataset, where this sample is balanced to the minority class (negative). The original data dump was heavly skewved to positive samples with a 95/5 ratio. ### Supported Tasks and Leaderboards This dataset can be used to evaluate sentiment classification on Swedish. ### Languages The text in the dataset is in Swedish. ## Dataset Structure ### Data Instances What a sample looks like: ``` { 'text': 'Jag tycker huggingface är ett grymt project!', 'label': 1, } ``` ### Data Fields - `text`: A text where the sentiment expression is present. - `label`: a int representing the label `0`for negative and `1`for positive. ### Data Splits The data is split into a training, validation and test set. The final split sizes are as follow: | Train | Valid | Test | | ------ | ----- | ---- | | 62089 | 20696 | 20697 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Various Swedish websites with product reviews. #### Initial Data Collection and Normalization #### Who are the source language producers? Swedish ### Annotations [More Information Needed] #### Annotation process Automatically annotated based on user reviews on a scale 1-5, where 1-2 is considered `negative` and 4-5 is `positive`, 3 is skipped as it tends to be more neutral. #### Who are the annotators? The users who have been using the products. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information [More Information Needed] ### Dataset Curators The corpus was scraped by @timpal0l ### Licensing Information Research only. ### Citation Information No paper exists currently. ### Contributions Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset.
false
# Inspec Benchmark Dataset for Keyphrase Generation ## About Inspec is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 2,000 abstracts of scientific papers collected from the [Inspec database](https://www.theiet.org/resources/inspec/). Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries). Details about the inspec dataset can be found in the original paper [(Hulth, 2003)][hulth-2003]. Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`. ## Content and statistics The dataset is divided into the following three splits: | Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen | | :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: | | Train | 1,000 | 141.7 | 9.79 | 78.00 | 9.85 | 6.22 | 5.93 | | Validation | 500 | 132.2 | 9.15 | 77.96 | 9.82 | 6.75 | 5.47 | | Test | 500 | 134.8 | 9.83 | 78.70 | 9.92 | 6.48 | 4.91 | The following data fields are available : - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. ## References - (Hulth, 2003) Anette Hulth. 2003. [Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028). In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223. - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [hulth-2003]: https://aclanthology.org/W03-1028/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
true
# Dataset Card for S2ORC: The Semantic Scholar Open Research Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [S2ORC: The Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc) - **Repository:** [S2ORC: The Semantic Scholar Open Research Corpus](https://github.com/allenai/s2orc) - **Paper:** [S2ORC: The Semantic Scholar Open Research Corpus](https://www.aclweb.org/anthology/2020.acl-main.447/) - **Point of Contact:** [Kyle Lo](kylel@allenai.org) ### Dataset Summary A large corpus of 81.1M English-language academic papers spanning many academic disciplines. Rich metadata, paper abstracts, resolved bibliographic references, as well as structured full text for 8.1M open access papers. Full text annotated with automatically-detected inline mentions of citations, figures, and tables, each linked to their corresponding paper objects. Aggregated papers from hundreds of academic publishers and digital archives into a unified source, and create the largest publicly-available collection of machine-readable academic text to date. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Example Paper Record: ``` { "id":"4cd223df721b722b1c40689caa52932a41fcc223", "title":"Knowledge-rich, computer-assisted composition of Chinese couplets", "paperAbstract":"Recent research effort in poem composition has focused on the use of automatic language generation...", "entities":[ ], "fieldsOfStudy":[ "Computer Science" ], "s2Url":"https://semanticscholar.org/paper/4cd223df721b722b1c40689caa52932a41fcc223", "pdfUrls":[ "https://doi.org/10.1093/llc/fqu052" ], "s2PdfUrl":"", "authors":[ { "name":"John Lee", "ids":[ "3362353" ] }, "..." ], "inCitations":[ "c789e333fdbb963883a0b5c96c648bf36b8cd242" ], "outCitations":[ "abe213ed63c426a089bdf4329597137751dbb3a0", "..." ], "year":2016, "venue":"DSH", "journalName":"DSH", "journalVolume":"31", "journalPages":"152-163", "sources":[ "DBLP" ], "doi":"10.1093/llc/fqu052", "doiUrl":"https://doi.org/10.1093/llc/fqu052", "pmid":"", "magId":"2050850752" } ``` ### Data Fields #### Identifier fields * `paper_id`: a `str`-valued field that is a unique identifier for each S2ORC paper. * `arxiv_id`: a `str`-valued field for papers on [arXiv.org](https://arxiv.org). * `acl_id`: a `str`-valued field for papers on [the ACL Anthology](https://www.aclweb.org/anthology/). * `pmc_id`: a `str`-valued field for papers on [PubMed Central](https://www.ncbi.nlm.nih.gov/pmc/articles). * `pubmed_id`: a `str`-valued field for papers on [PubMed](https://pubmed.ncbi.nlm.nih.gov/), which includes MEDLINE. Also known as `pmid` on PubMed. * `mag_id`: a `str`-valued field for papers on [Microsoft Academic](https://academic.microsoft.com). * `doi`: a `str`-valued field for the [DOI](http://doi.org/). Notably: * Resolved citation links are represented by the cited paper's `paper_id`. * The `paper_id` resolves to a Semantic Scholar paper page, which can be verified using the `s2_url` field. * We don't always have a value for every identifier field. When missing, they take `null` value. #### Metadata fields * `title`: a `str`-valued field for the paper title. Every S2ORC paper *must* have one, though the source can be from publishers or parsed from PDFs. We prioritize publisher-provided values over parsed values. * `authors`: a `List[Dict]`-valued field for the paper authors. Authors are listed in order. Each dictionary has the keys `first`, `middle`, `last`, and `suffix` for the author name, which are all `str`-valued with exception of `middle`, which is a `List[str]`-valued field. Every S2ORC paper *must* have at least one author. * `venue` and `journal`: `str`-valued fields for the published venue/journal. *Please note that there is not often agreement as to what constitutes a "venue" versus a "journal". Consolidating these fields is being considered for future releases.* * `year`: an `int`-valued field for the published year. If a paper is preprinted in 2019 but published in 2020, we try to ensure the `venue/journal` and `year` fields agree & prefer non-preprint published info. Missing years are replaced by -1. *We know this decision prohibits certain types of analysis like comparing preprint & published versions of a paper. We're looking into it for future releases.* * `abstract`: a `str`-valued field for the abstract. These are provided directly from gold sources (not parsed from PDFs). We preserve newline breaks in structured abstracts, which are common in medical papers, by denoting breaks with `':::'`. * `inbound_citations`: a `List[str]`-valued field containing `paper_id` of other S2ORC papers that cite the current paper. *Currently derived from PDF-parsed bibliographies, but may have gold sources in the future.* * `outbound_citations`: a `List[str]`-valued field containing `paper_id` of other S2ORC papers that the current paper cites. Same note as above. * `has_inbound_citations`: a `bool`-valued field that is `true` if `inbound_citations` has at least one entry, and `false` otherwise. * `has_outbound_citations` a `bool`-valued field that is `true` if `outbound_citations` has at least one entry, and `false` otherwise. We don't always have a value for every metadata field. When missing, `str` fields take `null` value, while `List` fields are empty lists. ### Data Splits There is no train/dev/test split given in the dataset ## Dataset Creation ### Curation Rationale Academic papers are an increasingly important textual domain for natural language processing (NLP) research. Aside from capturing valuable knowledge from humankind’s collective research efforts, academic papers exhibit many interesting characteristics – thousands of words organized into sections, objects such as tables, figures and equations, frequent inline references to these objects, footnotes, other papers, and more ### Source Data #### Initial Data Collection and Normalization To construct S2ORC, we must overcome challenges in (i) paper metadata aggregation, (ii) identifying open access publications, and (iii) clustering papers, in addition to identifying, extracting, and cleaning the full text and bibliometric annotations associated with each paper. The pipeline for creating S2ORC is: 1) Process PDFs and LATEX sources to derive metadata, clean full text, inline citations and references, and bibliography entries, 2) Select the best metadata and full text parses for each paper cluster, 3) Filter paper clusters with insufficient metadata or content, and 4) Resolve bibliography links between paper clusters in the corpus. #### Who are the source language producers? S2ORC is constructed using data from the Semantic Scholar literature corpus (Ammar et al., 2018). Papers in Semantic Scholar are derived from numerous sources: obtained directly from publishers, from resources such as MAG, from various archives such as arXiv or PubMed, or crawled from the open Internet. Semantic Scholar clusters these papers based on title similarity and DOI overlap, resulting in an initial set of approximately 200M paper clusters. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Semantic Scholar Open Research Corpus is licensed under ODC-BY. ### Citation Information ``` @misc{lo2020s2orc, title={S2ORC: The Semantic Scholar Open Research Corpus}, author={Kyle Lo and Lucy Lu Wang and Mark Neumann and Rodney Kinney and Dan S. Weld}, year={2020}, eprint={1911.02782}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
false
# Dataset Card for english-korean-multitarget-ted-talks-task ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/ ### Dataset Summary - Parallel English-Korean Text Corpus - Text was originally transcribed to English from various Ted Talks, then translated to Korean by TED translators - Approximately 166k train, 2k validation, and 2k test sentence pairs. ### Supported Tasks and Leaderboards - Machine Translation ### Languages - English - Korean ## Additional Information ### Dataset Curators Kevin Duh, "The Multitarget TED Talks Task", http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/, 2018 ### Licensing Information TED makes its collection available under the Creative Commons BY-NC-ND license. Please acknowledge TED when using this data. We acknowledge the authorship of TED Talks (BY condition). We are not redistributing the transcripts for commercial purposes (NC condition) nor making derivative works of the original contents (ND condition). ### Citation Information @misc{duh18multitarget, author = {Kevin Duh}, title = {The Multitarget TED Talks Task}, howpublished = {\url{http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/}}, year = {2018}, }
false
This dataset contains texts in Tajik language with sentence annotations. It can be used to train and evaluate sentence-wise text segmentation algorithms. The dataset contains more than 100 short and long texts and more than 3000 annotated sentences. The texts were carefully selected from different catergories such as news, articles, novels, classical texts, poetry, and religious texts. It deliberately contains more of "hard" passages where splitting them by period "." characters would result in bad segmentation. No preprocessing is done except reducing consecutive whitespaces and linebreaks to singles. The texts are sometimes poorly formatted just as they are copied and pasted from the web. This could make the training algorithm robust to noises.
false
# Dataset Card for mlsum-it ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [https://huggingface.co/datasets/mlsum] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The MLSum-it dataset is the translated version (Helsinki-NLP/opus-mt-es-it) of the spanish portion of MLSum, containing news articles taken from BBC/mundo. More informations on the official dataset page [HuggingFace page](https://huggingface.co/datasets/mlsum). There are two features: - source: Input news article. - target: Summary of the article. ### Supported Tasks and Leaderboards - `abstractive-summarization`, `summarization` ### Languages The text in the dataset is in Italian ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
true
# Dataset Card for BLURB ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://microsoft.github.io/BLURB/index.html - **Paper:** [Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing](https://arxiv.org/pdf/2007.15779.pdf) - **Leaderboard:** https://microsoft.github.io/BLURB/leaderboard.html - **Point of Contact:** ### Dataset Summary BLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks. Inspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact. #### BC5-chem The corpus consists of three separate sets of articles with diseases, chemicals and their relations annotated. The training (500 articles) and development (500 articles) sets were released to task participants in advance to support text-mining method development. The test set (500 articles) was used for final system performance evaluation. - **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/) #### BC5-disease The corpus consists of three separate sets of articles with diseases, chemicals and their relations annotated. The training (500 articles) and development (500 articles) sets were released to task participants in advance to support text-mining method development. The test set (500 articles) was used for final system performance evaluation. - **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/) #### BC2GM The BioCreative II Gene Mention task. The training corpus for the current task consists mainly of the training and testing corpora (text collections) from the BCI task, and the testing corpus for the current task consists of an additional 5,000 sentences that were held 'in reserve' from the previous task. In the current corpus, tokenization is not provided; instead participants are asked to identify a gene mention in a sentence by giving its start and end characters. As before, the training set consists of a set of sentences, and for each sentence a set of gene mentions (GENE annotations). - **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/ - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [verview of BioCreative II gene mention recognition](https://link.springer.com/article/10.1186/gb-2008-9-s2-s2) #### NCBI Disease The NCBI disease corpus is fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Corpus Characteristics ---------------------- * 793 PubMed abstracts * 6,892 disease mentions * 790 unique disease concepts * Medical Subject Headings (MeSH®) * Online Mendelian Inheritance in Man (OMIM®) * 91% of the mentions map to a single disease concept **divided into training, developing and testing sets. Corpus Annotation * Fourteen annotators * Two-annotators per document (randomly paired) * Three annotation phases * Checked for corpus-wide consistency of annotations - **Homepage:** https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/ - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [NCBI disease corpus: a resource for disease name recognition and concept normalization](https://pubmed.ncbi.nlm.nih.gov/24393765/) #### JNLPBA The BioNLP / JNLPBA Shared Task 2004 involves the identification and classification of technical terms referring to concepts of interest to biologists in the domain of molecular biology. The task was organized by GENIA Project based on the annotations of the GENIA Term corpus (version 3.02). Corpus format: The JNLPBA corpus is distributed in IOB format, with each line containing a single token and its tag, separated by a tab character. Sentences are separated by blank lines. - **Homepage: ** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004 - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper: ** [Introduction to the Bio-entity Recognition Task at JNLPBA](https://aclanthology.org/W04-1213) #### EBM PICO - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** #### ChemProt - **Homepage:** - **Repository:** - **Paper:** #### DDI - **Homepage:** - **Repository:** - **Paper:** #### GAD - **Homepage:** - **Repository:** - **Paper:** #### BIOSSES BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows: - very strong: 0.80–1.00 - strong: 0.60–0.79 - moderate: 0.40–0.59 - weak: 0.20–0.39 - very weak: 0.00–0.19 - **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html - **Repository:** https://github.com/gizemsogancioglu/biosses - **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954) - **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com) #### HoC - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** #### PubMedQA We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at this https URL. - **Homepage:** https://pubmedqa.github.io/ - **Repository:** https://github.com/pubmedqa/pubmedqa - **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/pdf/1909.06146.pdf) - **Leaderboard:** [Question answering](https://pubmedqa.github.io/) - **Point of Contact:** #### BioASQ Task 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous year) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries). - **Homepage:** http://bioasq.org/ - **Repository:** http://participants-area.bioasq.org/datasets/ - **Paper:** [Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783?login=false) ### Supported Tasks and Leaderboards | **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** | |:------------:|:-----------------------:|:---------:|:-------:|:--------:|:----------------------:|-----------| | BC5-chem | NER | 5203 | 5347 | 5385 | F1 entity-level | **Yes** | | BC5-disease | NER | 4182 | 4244 | 4424 | F1 entity-level | **Yes** | | NCBI-disease | NER | 5134 | 787 | 960 | F1 entity-level | **Yes** | | BC2GM | NER | 15197 | 3061 | 6325 | F1 entity-level | **Yes** | | JNLPBA | NER | 46750 | 4551 | 8662 | F1 entity-level | **Yes** | | EBM PICO | PICO | 339167 | 85321 | 16364 | Macro F1 word-level | No | | ChemProt | Relation Extraction | 18035 | 11268 | 15745 | Micro F1 | No | | DDI | Relation Extraction | 25296 | 2496 | 5716 | Micro F1 | No | | GAD | Relation Extraction | 4261 | 535 | 534 | Micro F1 | No | | BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | **Yes** | | HoC | Document Classification | 1295 | 186 | 371 | Average Micro F1 | No | | PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | **Yes** | | BioASQ | Question Answering | 670 | 75 | 140 | Accuracy | No | Datasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB. This is something to be checked. ### Languages English from biomedical texts ## Dataset Structure ### Data Instances * **NER** ```json { 'id': 0, 'tokens': [ "DPP6", "as", "a", "candidate", "gene", "for", "neuroleptic", "-", "induced", "tardive", "dyskinesia", "." ] 'ner_tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] } ``` * **PICO** ```json { 'TBD' } ``` * **Relation Extraction** ```json { 'TBD' } ``` * **Sentence Similarity** ```json {'sentence 1': 'Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation' 'sentence 2': 'Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase' 'score': 2.2} ``` * **Document Classification** ```json { 'TBD' } ``` * **Question Answering** * PubMedQA ```json {'context': {'contexts': ['Programmed cell death (PCD) is the regulated death of cells within an organism. The lace plant (Aponogeton madagascariensis) produces perforations in its leaves through PCD. The leaves of the plant consist of a latticework of longitudinal and transverse veins enclosing areoles. PCD occurs in the cells at the center of these areoles and progresses outwards, stopping approximately five cells from the vasculature. The role of mitochondria during PCD has been recognized in animals; however, it has been less studied during PCD in plants.', 'The following paper elucidates the role of mitochondrial dynamics during developmentally regulated PCD in vivo in A. madagascariensis. A single areole within a window stage leaf (PCD is occurring) was divided into three areas based on the progression of PCD; cells that will not undergo PCD (NPCD), cells in early stages of PCD (EPCD), and cells in late stages of PCD (LPCD). Window stage leaves were stained with the mitochondrial dye MitoTracker Red CMXRos and examined. Mitochondrial dynamics were delineated into four categories (M1-M4) based on characteristics including distribution, motility, and membrane potential (ΔΨm). A TUNEL assay showed fragmented nDNA in a gradient over these mitochondrial stages. Chloroplasts and transvacuolar strands were also examined using live cell imaging. The possible importance of mitochondrial permeability transition pore (PTP) formation during PCD was indirectly examined via in vivo cyclosporine A (CsA) treatment. This treatment resulted in lace plant leaves with a significantly lower number of perforations compared to controls, and that displayed mitochondrial dynamics similar to that of non-PCD cells.'], 'labels': ['BACKGROUND', 'RESULTS'], 'meshes': ['Alismataceae', 'Apoptosis', 'Cell Differentiation', 'Mitochondria', 'Plant Leaves'], 'reasoning_free_pred': ['y', 'e', 's'], 'reasoning_required_pred': ['y', 'e', 's']}, 'final_decision': 'yes', 'long_answer': 'Results depicted mitochondrial dynamics in vivo as PCD progresses within the lace plant, and highlight the correlation of this organelle with other organelles during developmental PCD. To the best of our knowledge, this is the first report of mitochondria and chloroplasts moving on transvacuolar strands to form a ring structure surrounding the nucleus during developmental PCD. Also, for the first time, we have shown the feasibility for the use of CsA in a whole plant system. Overall, our findings implicate the mitochondria as playing a critical and early role in developmentally regulated PCD in the lace plant.', 'pubid': 21645374, 'question': 'Do mitochondria play a role in remodelling lace plant leaves during programmed cell death?'} ``` ### Data Fields * **NER** * `id`: string * `ner_tags`: Sequence[ClassLabel] * `tokens`: Sequence[String] * **PICO** * To be added * **Relation Extraction** * To be added * **Sentence Similarity** * `sentence 1`: string * `sentence 2`: string * `score`: float ranging from 0 (no relation) to 4 (equivalent) * **Document Classification** * To be added * **Question Answering** * PubMedQA * `pubid`: integer * `question`: string * `context`: sequence of strings [`contexts`, `labels`, `meshes`, `reasoning_required_pred`, `reasoning_free_pred`] * `long_answer`: string * `final_decision`: string ### Data Splits Shown in the table of supported tasks. ## Dataset Creation ### Curation Rationale * BC5-chem * BC5-disease * BC2GM * JNLPBA * EBM PICO * ChemProt * DDI * GAD * BIOSSES * HoC * PubMedQA * BioASQ ### Source Data [More Information Needed] ### Annotations All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details. #### Annotation process * BC5-chem * BC5-disease * BC2GM * JNLPBA * EBM PICO * ChemProt * DDI * GAD * BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees. * HoC * PubMedQA * BioASQ ### Dataset Curators All the datasets have been obtained and annotated by experts in thebiomedical domain. Check the different citations for further details. ### Licensing Information * BC5-chem * BC5-disease * BC2GM * JNLPBA * EBM PICO * ChemProt * DDI * GAD * BIOSSES - BIOSSES is made available under the terms of [The GNU Common Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html). * HoC * PubMedQA - MIT License Copyright (c) 2019 pubmedqa * BioASQ ### Citation Information * BC5-chem & BC5-disease ```latex @article{article, author = {Li, Jiao and Sun, Yueping and Johnson, Robin and Sciaky, Daniela and Wei, Chih-Hsuan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn and Wiegers, Thomas and lu, Zhiyong}, year = {2016}, month = {05}, pages = {baw068}, title = {BioCreative V CDR task corpus: a resource for chemical disease relation extraction}, volume = {2016}, journal = {Database}, doi = {10.1093/database/baw068} } ``` * BC2GM ```latex @article{article, author = {Smith, Larry and Tanabe, Lorraine and Ando, Rie and Kuo, Cheng-Ju and Chung, I-Fang and Hsu, Chun-Nan and Lin, Yu-Shi and Klinger, Roman and Friedrich, Christoph and Ganchev, Kuzman and Torii, Manabu and Liu, Hongfang and Haddow, Barry and Struble, Craig and Povinelli, Richard and Vlachos, Andreas and Baumgartner Jr, William and Hunter, Lawrence and Carpenter, Bob and Wilbur, W.}, year = {2008}, month = {09}, pages = {S2}, title = {Overview of BioCreative II gene mention recognition}, volume = {9 Suppl 2}, journal = {Genome biology}, doi = {10.1186/gb-2008-9-s2-s2} } ``` * JNLPBA ```latex @inproceedings{collier-kim-2004-introduction, title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}", author = "Collier, Nigel and Kim, Jin-Dong", booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})", month = aug # " 28th and 29th", year = "2004", address = "Geneva, Switzerland", publisher = "COLING", url = "https://aclanthology.org/W04-1213", pages = "73--78", } ``` * NCBI Disiease ```latex @article{10.5555/2772763.2772800, author = {Dogan, Rezarta Islamaj and Leaman, Robert and Lu, Zhiyong}, title = {NCBI Disease Corpus}, year = {2014}, issue_date = {February 2014}, publisher = {Elsevier Science}, address = {San Diego, CA, USA}, volume = {47}, number = {C}, issn = {1532-0464}, abstract = {Graphical abstractDisplay Omitted NCBI disease corpus is built as a gold-standard resource for disease recognition.793 PubMed abstracts are annotated with disease mentions and concepts (MeSH/OMIM).14 Annotators produced high consistency level and inter-annotator agreement.Normalization benchmark results demonstrate the utility of the corpus.The corpus is publicly available to the community. Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora.This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH ) or Online Mendelian Inheritance in Man (OMIM ). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency.The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks.The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/.}, journal = {J. of Biomedical Informatics}, month = {feb}, pages = {1–10}, numpages = {10}} ``` * EBM PICO * ChemProt * DDI * GAD * BIOSSES ```latex @article{souganciouglu2017biosses, title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}, author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan}, journal={Bioinformatics}, volume={33}, number={14}, pages={i49--i58}, year={2017}, publisher={Oxford University Press} } ``` * HoC * PubMedQA ```latex @inproceedings{jin2019pubmedqa, title={PubMedQA: A Dataset for Biomedical Research Question Answering}, author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua}, booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, pages={2567--2577}, year={2019} } ``` * BioASQ ```latex @article{10.1093/bioinformatics/btv585, author = {Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and Högberg, Johan and Stenius, Ulla and Korhonen, Anna}, title = "{Automatic semantic classification of scientific literature according to the hallmarks of cancer}", journal = {Bioinformatics}, volume = {32}, number = {3}, pages = {432-440}, year = {2015}, month = {10}, abstract = "{Motivation: The hallmarks of cancer have become highly influential in cancer research. They reduce the complexity of cancer into 10 principles (e.g. resisting cell death and sustaining proliferative signaling) that explain the biological capabilities acquired during the development of human tumors. Since new research depends crucially on existing knowledge, technology for semantic classification of scientific literature according to the hallmarks of cancer could greatly support literature review, knowledge discovery and applications in cancer research.Results: We present the first step toward the development of such technology. We introduce a corpus of 1499 PubMed abstracts annotated according to the scientific evidence they provide for the 10 currently known hallmarks of cancer. We use this corpus to train a system that classifies PubMed literature according to the hallmarks. The system uses supervised machine learning and rich features largely based on biomedical text mining. We report good performance in both intrinsic and extrinsic evaluations, demonstrating both the accuracy of the methodology and its potential in supporting practical cancer research. We discuss how this approach could be developed and applied further in the future.Availability and implementation: The corpus of hallmark-annotated PubMed abstracts and the software for classification are available at: http://www.cl.cam.ac.uk/∼sb895/HoC.html .Contact:simon.baker@cl.cam.ac.uk}", issn = {1367-4803}, doi = {10.1093/bioinformatics/btv585}, url = {https://doi.org/10.1093/bioinformatics/btv585}, eprint = {https://academic.oup.com/bioinformatics/article-pdf/32/3/432/19568147/btv585.pdf}, } ``` ### Contributions * This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente. * Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them. * I am not part of the team that generated BLURB. This dataset is intended to help researchers to usethe BLURB benchmarking for NLP in Biomedical NLP. * Thanks to [@bwang482](https://github.com/bwang482) for uploading the [BIOSSES dataset](https://github.com/bwang482/datasets/tree/master/datasets/biosses). We forked the [BIOSSES 🤗 dataset](https://huggingface.co/datasets/biosses) to add it to this BLURB benchmark. * Thank you to [@tuner007](https://github.com/tuner007) for adding this dataset to the 🤗 hub
false
# Model-Written Evaluation Datasets This repository includes datasets written by language models, used in our paper on "Discovering Language Model Behaviors with Model-Written Evaluations." We intend the datasets to be useful to: 1. Those who are interested in understanding the quality and properties of model-generated data 2. Those who wish to use our datasets to evaluate other models for the behaviors we examined in our work (e.g., related to model persona, sycophancy, advanced AI risks, and gender bias) The evaluations were generated to be asked to dialogue agents (e.g., a model finetuned explicitly respond to a user's utterances, or a pretrained language model prompted to behave like a dialogue agent). However, it is possible to adapt the data to test other kinds of models as well. We describe each of our collections of datasets below: 1. `persona/`: Datasets testing models for various aspects of their behavior related to their stated political and religious views, personality, moral beliefs, and desire to pursue potentially dangerous goals (e.g., self-preservation or power-seeking). 2. `sycophancy/`: Datasets testing models for whether or not they repeat back a user's view to various questions (in philosophy, NLP research, and politics) 3. `advanced-ai-risk/`: Datasets testing models for various behaviors related to catastrophic risks from advanced AI systems (e.g., ). These datasets were generated in a few-shot manner. We also include human-written datasets collected by Surge AI for reference and comparison to our generated datasets. 4. `winogenerated/`: Our larger, model-generated version of the Winogender Dataset ([Rudinger et al., 2018](https://arxiv.org/abs/1804.09301)). We also include the names of occupation titles that we generated, to create the dataset (alongside occupation gender statistics from the Bureau of Labor Statistics) Please see our paper for additional details on the datasets, how we generated them, human validation metrics, and other analyses of the datasets. **Disclaimer**: As discussed in our paper, some data contains content that includes social biases and stereotypes. The data may also contain other forms of harmful or offensive content. The views expressed in the data do not reflect the views of Anthropic or any of its employees. ## Contact For questions, please email `ethan at anthropic dot com` ## Bibtex Citation If you would like to cite our work or data, you may use the following bibtex citation: ``` @misc{perez2022discovering, doi = {10.48550/ARXIV.2212.09251}, url = {https://arxiv.org/abs/2212.09251}, author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Discovering Language Model Behaviors with Model-Written Evaluations}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
false
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) # Dataset Card for germandpr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://deepset.ai/germanquad - **Repository:** https://github.com/deepset-ai/haystack - **Paper:** https://arxiv.org/abs/2104.12741 ### Dataset Summary We take GermanQuAD as a starting point and add hard negatives from a dump of the full German Wikipedia following the approach of the DPR authors (Karpukhin et al., 2020). The format of the dataset also resembles the one of DPR. GermanDPR comprises 9275 question/answerpairs in the training set and 1025 pairs in the test set. For eachpair, there are one positive context and three hard negative contexts. ### Supported Tasks and Leaderboards - `open-domain-qa`, `text-retrieval`: This dataset is intended to be used for `open-domain-qa` and text retrieval tasks. ### Languages The sentences in the dataset are in German (de). ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { "question": "Wie viele christlichen Menschen in Deutschland glauben an einen Gott?", "answers": [ "75 % der befragten Katholiken sowie 67 % der Protestanten glaubten an einen Gott (2005: 85 % und 79 %)" ], "positive_ctxs": [ { "title": "Gott", "text": "Gott\ === Demografie === Eine Zusammenfassung von Umfrageergebnissen aus verschiedenen Staaten ergab im Jahr 2007, dass es weltweit zwischen 505 und 749 Millionen Atheisten und Agnostiker gibt. Laut der Encyclopædia Britannica gab es 2009 weltweit 640 Mio. Nichtreligiöse und Agnostiker (9,4 %), und weitere 139 Mio. Atheisten (2,0 %), hauptsächlich in der Volksrepublik China.\\\\\\\\ Bei einer Eurobarometer-Umfrage im Jahr 2005 wurde festgestellt, dass 52 % der damaligen EU-Bevölkerung glaubt, dass es einen Gott gibt. Eine vagere Frage nach dem Glauben an „eine andere spirituelle Kraft oder Lebenskraft“ wurde von weiteren 27 % positiv beantwortet. Bezüglich der Gottgläubigkeit bestanden große Unterschiede zwischen den einzelnen europäischen Staaten. Die Umfrage ergab, dass der Glaube an Gott in Staaten mit starkem kirchlichen Einfluss am stärksten verbreitet ist, dass mehr Frauen (58 %) als Männer (45 %) an einen Gott glauben und dass der Gottglaube mit höherem Alter, geringerer Bildung und politisch rechtsgerichteten Ansichten korreliert.\\\\\\\\ Laut einer Befragung von 1003 Personen in Deutschland im März 2019 glauben 55 % an einen Gott; 2005 waren es 66 % gewesen. 75 % der befragten Katholiken sowie 67 % der Protestanten glaubten an einen Gott (2005: 85 % und 79 %). Unter Konfessionslosen ging die Glaubensquote von 28 auf 20 % zurück. Unter Frauen (60 %) war der Glauben 2019 stärker ausgeprägt als unter Männern (50 %), in Westdeutschland (63 %) weiter verbreitet als in Ostdeutschland (26 %).", "passage_id": "" } ], "negative_ctxs": [], "hard_negative_ctxs": [ { "title": "Christentum", "text": "Christentum\ \ === Ursprung und Einflüsse ===\ Die ersten Christen waren Juden, die zum Glauben an Jesus Christus fanden. In ihm erkannten sie den bereits durch die biblische Prophetie verheißenen Messias (hebräisch: ''maschiach'', griechisch: ''Christos'', latinisiert ''Christus''), auf dessen Kommen die Juden bis heute warten. Die Urchristen übernahmen aus der jüdischen Tradition sämtliche heiligen Schriften (den Tanach), wie auch den Glauben an einen Messias oder Christus (''christos'': Gesalbter). Von den Juden übernommen wurden die Art der Gottesverehrung, das Gebet der Psalmen u. v. a. m. Eine weitere Gemeinsamkeit mit dem Judentum besteht in der Anbetung desselben Schöpfergottes. Jedoch sehen fast alle Christen Gott als ''einen'' dreieinigen Gott an: den Vater, den Sohn (Christus) und den Heiligen Geist. Darüber, wie der dreieinige Gott konkret gedacht werden kann, gibt es unter den christlichen Konfessionen und Gruppierungen unterschiedliche Auffassungen bis hin zur Ablehnung der Dreieinigkeit Gottes (Antitrinitarier). Der Glaube an Jesus Christus führte zu Spannungen und schließlich zur Trennung zwischen Juden, die diesen Glauben annahmen, und Juden, die dies nicht taten, da diese es unter anderem ablehnten, einen Menschen anzubeten, denn sie sahen in Jesus Christus nicht den verheißenen Messias und erst recht nicht den Sohn Gottes. Die heutige Zeitrechnung wird von der Geburt Christi aus gezählt. Anno Domini (A. D.) bedeutet „im Jahr des Herrn“.", "passage_id": "" }, { "title": "Noachidische_Gebote", "text": "Noachidische_Gebote\ \ === Die kommende Welt ===\ Der Glaube an eine ''Kommende Welt'' (Olam Haba) bzw. an eine ''Welt des ewigen Lebens'' ist ein Grundprinzip des Judentums. Dieser jüdische Glaube ist von dem christlichen Glauben an das ''Ewige Leben'' fundamental unterschieden. Die jüdische Lehre spricht niemandem das Heil dieser kommenden Welt ab, droht aber auch nicht mit Höllenstrafen im Jenseits. Juden glauben schlicht, dass allen Menschen ein Anteil der kommenden Welt zuteilwerden kann. Es gibt zwar viele Vorstellungen der kommenden Welt, aber keine kanonische Festlegung ihrer Beschaffenheit; d. h., das Judentum kennt keine eindeutige Antwort darauf, was nach dem Tod mit uns geschieht. Die Frage nach dem Leben nach dem Tod wird auch als weniger wesentlich angesehen, als Fragen, die das Leben des Menschen auf Erden und in der Gesellschaft betreffen.\ Der jüdische Glaube an eine kommende Welt bedeutet nicht, dass Menschen, die nie von der Tora gehört haben, böse oder sonst minderwertige Menschen sind. Das Judentum lehrt den Glauben, dass alle Menschen mit Gott verbunden sind. Es gibt im Judentum daher keinen Grund, zu missionieren. Das Judentum lehrt auch, dass alle Menschen sich darin gleichen, dass sie weder prinzipiell gut noch böse sind, sondern eine Neigung zum Guten wie zum Bösen haben. Während des irdischen Lebens sollte sich der Mensch immer wieder für das Gute entscheiden.", "passage_id": "" }, { "title": "Figuren_und_Schauplätze_der_Scheibenwelt-Romane", "text": "Figuren_und_Schauplätze_der_Scheibenwelt-Romane\ \ === Herkunft ===\ Es gibt unzählig viele Götter auf der Scheibenwelt, die so genannten „geringen Götter“, die überall sind, aber keine Macht haben. Erst wenn sie durch irgendein Ereignis Gläubige gewinnen, werden sie mächtiger. Je mehr Glauben, desto mehr Macht. Dabei nehmen sie die Gestalt an, die die Menschen ihnen geben (zum Beispiel Offler). Wenn ein Gott mächtig genug ist, erhält er Einlass in den Cori Celesti, den Berg der Götter, der sich in der Mitte der Scheibenwelt erhebt. Da Menschen wankelmütig sind, kann es auch geschehen, dass sie den Glauben verlieren und einen Gott damit entmachten (s. „Einfach Göttlich“).", "passage_id": "" } ] }, ``` ### Data Fields - `positive_ctxs`: a dictionary feature containing: - `title`: a `string` feature. - `text`: a `string` feature. - `passage_id`: a `string` feature. - `negative_ctxs`: a dictionary feature containing: - `title`: a `string` feature. - `text`: a `string` feature. - `passage_id`: a `string` feature. - `hard_negative_ctxs`: a dictionary feature containing: - `title`: a `string` feature. - `text`: a `string` feature. - `passage_id`: a `string` feature. - `question`: a `string` feature. - `answers`: a list feature containing: - a `string` feature. ### Data Splits The dataset is split into a training set and a test set. The final GermanDPR dataset comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there are one positive context and three hard negative contexts. | |questions|answers|positive contexts|hard negative contexts| |------|--------:|------:|----------------:|---------------------:| |train|9275| 9275|9275|27825| |test|1025| 1025|1025|3075| ## Additional Information ### Dataset Curators The dataset was initially created by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai ### Citation Information ``` @misc{möller2021germanquad, title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval}, author={Timo Möller and Julian Risch and Malte Pietsch}, year={2021}, eprint={2104.12741}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
false
# Dataset Card for Multilingual Spoken Words ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://mlcommons.org/en/multilingual-spoken-words/ - **Repository:** https://github.com/harvard-edge/multilingual_kws - **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages collectively spoken by over 5 billion people, for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset has many use cases, ranging from voice-enabled consumer devices to call center automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level audio to produce per-word timing estimates for extraction. All alignments are included in the dataset. Data is provided in two formats: `wav` (16KHz) and `opus` (48KHz). Default configurations look like `"{lang}_{format}"`, so to load, for example, Tatar in wav format do: ```python ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav") ``` To download multiple languages in a single dataset pass list of languages to `languages` argument: ```python ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"]) ``` To download a specific format pass it to the `format` argument (default format is `wav`): ```python ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"], format="opus") ``` Note that each time you provide different sets of languages, examples are generated from scratch even if you already provided one or several of them before because custom configurations are created each time (the data is **not** redownloaded though). ### Supported Tasks and Leaderboards Keyword spotting, Spoken term search ### Languages The dataset is multilingual. To specify several languages to download pass a list of them to the `languages` argument: ```python ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"]) ``` The dataset contains data for the following languages: Low-resourced (<10 hours): * Arabic (0.1G, 7.6h) * Assamese (0.9M, 0.1h) * Breton (69M, 5.6h) * Chuvash (28M, 2.1h) * Chinese (zh-CN) (42M, 3.1h) * Dhivehi (0.7M, 0.04h) * Frisian (0.1G, 9.6h) * Georgian (20M, 1.4h) * Guarani (0.7M, 1.3h) * Greek (84M, 6.7h) * Hakha Chin (26M, 0.1h) * Hausa (90M, 1.0h) * Interlingua (58M, 4.0h) * Irish (38M, 3.2h) * Latvian (51M, 4.2h) * Lithuanian (21M, 0.46h) * Maltese (88M, 7.3h) * Oriya (0.7M, 0.1h) * Romanian (59M, 4.5h) * Sakha (42M, 3.3h) * Slovenian (43M, 3.0h) * Slovak (31M, 1.9h) * Sursilvan (61M, 4.8h) * Tamil (8.8M, 0.6h) * Vallader (14M, 1.2h) * Vietnamese (1.2M, 0.1h) Medium-resourced (>10 & <100 hours): * Czech (0.3G, 24h) * Dutch (0.8G, 70h) * Estonian (0.2G, 19h) * Esperanto (1.3G, 77h) * Indonesian (0.1G, 11h) * Kyrgyz (0.1G, 12h) * Mongolian (0.1G, 12h) * Portuguese (0.7G, 58h) * Swedish (0.1G, 12h) * Tatar (4G, 30h) * Turkish (1.3G, 29h) * Ukrainian (0.2G, 18h) Hig-resourced (>100 hours): * Basque (1.7G, 118h) * Catalan (8.7G, 615h) * English (26G, 1957h) * French (9.3G, 754h) * German (14G, 1083h) * Italian (2.2G, 155h) * Kinyarwanda (6.1G, 422h) * Persian (4.5G, 327h) * Polish (1.8G, 130h) * Russian (2.1G, 137h) * Spanish (4.9G, 349h) * Welsh (4.5G, 108h) ## Dataset Structure ### Data Instances ```python {'file': 'абзар_common_voice_tt_17737010.opus', 'is_valid': True, 'language': 0, 'speaker_id': '687025afd5ce033048472754c8d2cb1cf8a617e469866bbdb3746e2bb2194202094a715906f91feb1c546893a5d835347f4869e7def2e360ace6616fb4340e38', 'gender': 0, 'keyword': 'абзар', 'audio': {'path': 'абзар_common_voice_tt_17737010.opus', 'array': array([2.03458695e-34, 2.03458695e-34, 2.03458695e-34, ..., 2.03458695e-34, 2.03458695e-34, 2.03458695e-34]), 'sampling_rate': 48000}} ``` ### Data Fields * file: strinrelative audio path inside the archive * is_valid: if a sample is valid * language: language of an instance. Makes sense only when providing multiple languages to the dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`) * speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid * gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]` * keyword: word spoken in a current sample * audio: a dictionary containing the relative path to the audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus, it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]` ### Data Splits The data for each language is splitted into train / validation / test parts. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data comes form Common Voice dataset. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information he dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic research and commercial applications in keyword spotting and spoken term search. ### Citation Information ``` @inproceedings{mazumder2021multilingual, title={Multilingual Spoken Words Corpus}, author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021} } ``` ### Contributions Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
false
# Dataset Card for Flores200 ## Table of Contents - [Dataset Card for Flores200](#dataset-card-for-flores200) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Home:** [Flores](https://github.com/facebookresearch/flores) - **Repository:** [Github](https://github.com/facebookresearch/flores) ### Dataset Summary FLORES is a benchmark dataset for machine translation between English and low-resource languages. >The creation of FLORES200 doubles the existing language coverage of FLORES-101. Given the nature of the new languages, which have less standardization and require more specialized professional translations, the verification process became more complex. This required modifications to the translation workflow. FLORES-200 has several languages which were not translated from English. Specifically, several languages were translated from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also includes two script alternatives for four languages. FLORES-200 consists of translations from 842 distinct web articles, totaling 3001 sentences. These sentences are divided into three splits: dev, devtest, and test (hidden). On average, sentences are approximately 21 words long. **Disclaimer**: *The Flores200 dataset is hosted by the Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). Flores 200 is an extention of this. ### Languages The dataset contains parallel sentences for 200 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) plus an additional code describing the script (e.g., "eng_Latn", "ukr_Cyrl"). See [the webpage for code descriptions](https://github.com/facebookresearch/flores/blob/main/flores200/README.md). Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command. Use a hyphenated pairing to get two langauges in one datapoint (e.g., "eng_Latn-ukr_Cyrl" will provide sentences in the format below). ## Dataset Structure ### Data Instances A sample from the `dev` split for the Russian language (`ukr_Cyrl` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits. ```python { 'id': 1, 'sentence': 'У понеділок, науковці зі Школи медицини Стенфордського університету оголосили про винайдення нового діагностичного інструменту, що може сортувати клітини за їх видами: це малесенький друкований чіп, який можна виготовити за допомогою стандартних променевих принтерів десь по одному центу США за штуку.', 'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet', 'domain': 'wikinews', 'topic': 'health', 'has_image': 0, 'has_hyperlink': 0 } ``` When using a hyphenated pairing or using the `all` function, data will be presented as follows: ```python { 'id': 1, 'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet', 'domain': 'wikinews', 'topic': 'health', 'has_image': 0, 'has_hyperlink': 0, 'sentence_eng_Latn': 'On Monday, scientists from the Stanford University School of Medicine announced the invention of a new diagnostic tool that can sort cells by type: a tiny printable chip that can be manufactured using standard inkjet printers for possibly about one U.S. cent each.', 'sentence_ukr_Cyrl': 'У понеділок, науковці зі Школи медицини Стенфордського університету оголосили про винайдення нового діагностичного інструменту, що може сортувати клітини за їх видами: це малесенький друкований чіп, який можна виготовити за допомогою стандартних променевих принтерів десь по одному центу США за штуку.' } ``` The text is provided as-in the original dataset, without further preprocessing or tokenization. ### Data Fields - `id`: Row number for the data entry, starting at 1. - `sentence`: The full sentence in the specific language (may have _lang for pairings) - `URL`: The URL for the English article from which the sentence was extracted. - `domain`: The domain of the sentence. - `topic`: The topic of the sentence. - `has_image`: Whether the original article contains an image. - `has_hyperlink`: Whether the sentence contains a hyperlink. ### Data Splits | config| `dev`| `devtest`| |-----------------:|-----:|---------:| |all configurations| 997| 1012:| ### Dataset Creation Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation. ## Additional Information ### Dataset Curators See paper for details. ### Licensing Information Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @article{nllb2022, author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang}, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, year = {2022} } ```
true
# Dataset Card for ArRestReviews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces) - **Repository:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces) - **Paper:** [ Building Large Arabic Multi-domain Resources for Sentiment Analysis](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces/blob/master/Paper%20-%20Building%20Large%20Arabic%20Multi-domain%20Resources%20for%20Sentiment%20Analysis.pdf) - **Point of Contact:** [hady elsahar](hadyelsahar@gmail.com) ### Dataset Summary Dataset of 8364 restaurant reviews from qaym.com in Arabic for sentiment analysis ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Arabic. ## Dataset Structure ### Data Instances A typical data point comprises of the following: - "polarity": which is a string value of either 0 or 1 indicating the sentiment around the review - "text": is the review plain text of a restaurant in Arabic - "restaurant_id": the restaurant ID on the website - "user_id": the user ID on the website example: ``` { 'polarity': 0, # negative 'restaurant_id': '1412', 'text': 'عادي جدا مامن زود', 'user_id': '21294' } ``` ### Data Fields - "polarity": is a string value of either 0 or 1 indicating the sentiment around the review - "text": is the review plain text of a restaurant in Arabic - "restaurant_id": the restaurant ID on the website (string) - "user_id": the user ID on the website (string) ### Data Splits The dataset is not split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization Contains 8364 restaurant reviews from qaym.com #### Who are the source language producers? From tweeter. ### Annotations The polarity field provides a label of 1 or -1 pertaining to the sentiment of the review #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Discussion of Social Impact and Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @InProceedings{10.1007/978-3-319-18117-2_2, author="ElSahar, Hady and El-Beltagy, Samhaa R.", editor="Gelbukh, Alexander", title="Building Large Arabic Multi-domain Resources for Sentiment Analysis", booktitle="Computational Linguistics and Intelligent Text Processing", year="2015", publisher="Springer International Publishing", address="Cham", pages="23--34", isbn="978-3-319-18117-2" } ### Contributions Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
false
# Dataset Card for text2log ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** - **Repository:** [GitHub](https://github.com/alevkov/text2log) - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/alevkov ### Dataset Summary The dataset contains 100,000 simple English sentences selected and filtered from `enTenTen15` and their translation into First Order Logic (FOL) using `ccg2lambda`. ### Supported Tasks and Leaderboards 'semantic-parsing': The data set is used to train models which can generate FOL statements from natural language text ### Languages en-US ## Dataset Structure ### Data Instances ``` { 'clean':'All things that are new are good.', 'trans':'all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))' } ``` ### Data Fields - 'clean': a simple English sentence - 'trans': the corresponding translation into Lambda Dependency-based Compositional Semantics ### Data Splits No predefined train/test split is given. The authors used a 80/20 split ## Dataset Creation ### Curation Rationale The text2log data set is used to improve FOL statement generation from natural text ### Source Data #### Initial Data Collection and Normalization Short text samples selected from enTenTen15 #### Who are the source language producers? See https://www.sketchengine.eu/ententen-english-corpus/ ### Annotations #### Annotation process Machine generated using https://github.com/mynlp/ccg2lambda #### Who are the annotators? none ### Personal and Sensitive Information The dataset does not contain personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information None given ### Citation Information ```bibtex @INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852} } ``` ### Contributions Thanks to [@apergo-ai](https://github.com/apergo-ai) for adding this dataset.
false
# Dataset Card for the EUR-Lex-Sum Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/achouhan93/eur-lex-sum - **Paper:** [EUR-Lex-Sum: A Multi-and Cross-lingual Dataset for Long-form Summarization in the Legal Domain](https://arxiv.org/abs/2210.13448) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Dennis Aumiller](mailto:aumiller@informatik.uni-heidelberg.de) ### Dataset Summary The EUR-Lex-Sum dataset is a multilingual resource intended for text summarization in the legal domain. It is based on human-written summaries of legal acts issued by the European Union. It distinguishes itself by introducing a smaller set of high-quality human-written samples, each of which have much longer references (and summaries!) than comparable datasets. Additionally, the underlying legal acts provide a challenging domain-specific application to legal texts, which are so far underrepresented in non-English languages. For each legal act, the sample can be available in up to 24 languages (the officially recognized languages in the European Union); the validation and test samples consist entirely of samples available in *all* languages, and are aligned across all languages at the paragraph level. ### Supported Tasks and Leaderboards - `summarization`: The dataset is primarily suitable for summarization tasks, where it can be used as a small-scale training resource. The primary evaluation metric used in the underlying experiments is [ROUGE](https://huggingface.co/metrics/rouge). The EUR-Lex-Sum data is particularly interesting, because traditional lead-based baselines (such as lead-3) do not work well, given the extremely long reference summaries. However, we can provide reasonably good summaries by applying a modified LexRank approach on the paragraph level. - `cross-lingual-summarization`: Given that samples of the dataset exist across multiple languages, and both the validation and test set are fully aligned across languages, this dataset can further be used as a cross-lingual benchmark. In these scenarios, language pairs (e.g., EN to ES) can be compared against monolingual systems. Suitable baselines include automatic translations of gold summaries, or translations of simple LexRank-generated monolingual summaries. - `long-form-summarization`: We further note the particular case for *long-form summarization*. In comparison to news-based summarization datasets, this resource provides around 10x longer *summary texts*. This is particularly challenging for transformer-based models, which struggle with limited context lengths. ### Languages The dataset supports all [official languages of the European Union](https://european-union.europa.eu/principles-countries-history/languages_en). At the time of collection, those were 24 languages: Bulgarian, Croationa, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, and Swedish. Both the reference texts, as well as the summaries, are translated from an English original text (this was confirmed by private correspondence with the Publications Office of the European Union). Translations and summaries are written by external (professional) parties, contracted by the EU. Depending on availability of document summaries in particular languages, we have between 391 (Irish) and 1505 (French) samples available. Over 80% of samples are available in at least 20 languages. ## Dataset Structure ### Data Instances Data instances contain fairly minimal information. Aside from a unique identifier, corresponding to the Celex ID generated by the EU, two further fields specify the original long-form legal act and its associated summary. ``` { "celex_id": "3A32021R0847", "reference": "REGULATION (EU) 2021/847 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\n [...]" "summary": "Supporting EU cooperation in the field of taxation: Fiscalis (2021-2027)\n\n [...]" } ``` ### Data Fields - `celex_id`: The [Celex ID](https://eur-lex.europa.eu/content/tools/eur-lex-celex-infographic-A3.pdf) is a naming convention used for identifying EU-related documents. Among other things, the year of publication and sector codes are embedded in the Celex ID. - `reference`: This is the full text of a Legal Act published by the EU. - `summary`: This field contains the summary associated with the respective Legal Act. ### Data Splits We provide pre-split training, validation and test splits. To obtain the validation and test splits, we randomly assigned all samples that are available across all 24 languages into two equally large portions. In total, 375 instances are available in 24 languages, which means we obtain a validation split of 187 samples and 188 test instances. All remaining instances are assigned to the language-specific training portions, which differ in their exact size. We particularly ensured that no duplicates exist across the three splits. For this purpose, we ensured that no exactly matching reference *or* summary exists for any sample. Further information on the length distributions (for the English subset) can be found in the paper. ## Dataset Creation ### Curation Rationale The dataset was curated to provide a resource for under-explored aspects of automatic text summarization research. In particular, we want to encourage the exploration of abstractive summarization systems that are not limited by the usual 512 token context window, which usually works well for (short) news articles, but fails to generate long-form summaries, or does not even work with longer source texts in the first place. Also, existing resources primarily focus on a single (and very specialized) domain, namely news article summarization. We wanted to provide a further resource for *legal* summarization, for which many languages do not even have any existing datasets. We further noticed that no previous system had utilized the human-written samples from the [EUR-Lex platform](https://eur-lex.europa.eu/homepage.html), which provide an excellent source for training instances suitable for summarization research. We later found out about a resource created in parallel based on EUR-Lex documents, which provides a [monolingual (English) corpus](https://github.com/svea-klaus/Legal-Document-Summarization) constructed in similar fashion. However, we provide a more thorough filtering, and extend the process to the remaining 23 EU languages. ### Source Data #### Initial Data Collection and Normalization The data was crawled from the aforementioned EUR-Lex platform. In particular, we only use samples which have *HTML* versions of the texts available, which ensure the alignment across languages, given that translations have to retain the original paragraph structure, which is encoded in HTML elements. We further filter out samples that do not have associated document summaries available. One particular design choice has to be expanded upon: For some summaries, *several source documents* are considered as an input by the EU. However, since we construct a single-document summarization corpus, we decided to use the **longest reference document only**. This means we explicitly drop the other reference texts from the corpus. One alternative would have been to concatenated all relevant source texts; however, this generally leads to degradation of positional biases in the text, which can be an important learned feature for summarization systems. Our paper details the effect of this decision in terms of n-gram novelty, which we find is affected by the processing choice. #### Who are the source language producers? The language producers are external professionals contracted by the European Union offices. As previously noted, all non-English texts are generated from the respective English document (all summaries are direct translations the English summary, all reference texts are translated from the English reference text). No further information on the demographic of annotators is provided. ### Annotations #### Annotation process The European Union publishes their [annotation guidelines](https://etendering.ted.europa.eu/cft/cft-documents.html?cftId=6490) for summaries, which targets a length between 600-800 words. No information on the guidelines for translations is known. #### Who are the annotators? The language producers are external professionals contracted by the European Union offices. No further information on the annotators is available. ### Personal and Sensitive Information The original text was not modified in any way by the authors of this dataset. Explicit mentions of personal names can occur in the dataset, however, we rely on the European Union that no further sensitive information is provided in these documents. ## Considerations for Using the Data ### Social Impact of Dataset The dataset can be used to provide summarization systems in languages that are previously under-represented. For example, language samples in Irish and Maltese (among others) enable the development and evaluation for these languages. A successful cross-lingual system would further enable the creation of automated legal summaries for legal acts, possibly enabling foreigners in European countries to automatically translate similar country-specific legal acts. Given the limited amount of training data, this dataset is also suitable as a test bed for low-resource approaches, especially in comparsion to strong unsupervised (extractive) summarization systems. We also note that the summaries are explicitly provided as "not legally binding" by the EU. The implication of left-out details (a necessary evil of summaries) implies the existence of differences between the (legally binding) original legal act. Risks associated with this dataset also largely stem from the potential application of systems trained on it. Decisions in the legal domain require careful analysis of the full context, and should not be made based on system-generated summaries at this point in time. Known biases of summarization, specifically factual hallucinations, should act as further deterrents. ### Discussion of Biases Given the availability bias, some of the languages in the dataset are more represented than others. We attempt to mitigate influence on the evaluation by providing validation and test sets of the same size across all languages. Given that we require the availability of HTML documents, we see a particular temporal bias in our dataset, which features more documents from the years of 1990 onwards, simply due to the increase in EU-related activities, but also the native use of the internet as a data storage. This could imply a particular focus on more recent topics (e.g., Brexit, renewable eneriges, etc. come to mind). Finally, due to the source of these documents being the EU, we expect a natural bias towards EU-centric (and therefore Western-centric) content; other nations and continents will be under-represented in the data. ### Other Known Limitations As previously outlined, we are aware of some summaries relating to multiple (different) legal acts. For these samples, only one (the longest) text will be available in our dataset. ## Additional Information ### Dataset Curators The web crawler was originally implemented by Ashish Chouhan. Post-filtering and sample correction was later performed by Dennis Aumiller. Both were PhD students employed at the Database Systems Research group of Heidelberg University, under the guidance of Prof. Dr. Michael Gertz. ### Licensing Information Data from the EUR-Lex platform is available under the CC-BY SA 4.0 license. We redistribute the dataset under the same license. ### Citation Information For the pre-print version, please cite: ``` @article{aumiller-etal-2022-eur, author = {Aumiller, Dennis and Chouhan, Ashish and Gertz, Michael}, title = {{EUR-Lex-Sum: A Multi- and Cross-lingual Dataset for Long-form Summarization in the Legal Domain}}, journal = {CoRR}, volume = {abs/2210.13448}, eprinttype = {arXiv}, eprint = {2210.13448}, url = {https://arxiv.org/abs/2210.13448} } ```
false
# Dataset Card for "gap" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/gap-coreference](https://github.com/google-research-datasets/gap-coreference) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns](https://arxiv.org/abs/1810.05201) - **Point of Contact:** [gap-coreference@google.com](mailto:gap-coreference@google.com) - **Size of downloaded dataset files:** 2.40 MB - **Size of the generated dataset:** 2.43 MB - **Total amount of disk used:** 4.83 MB ### Dataset Summary GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name), sampled from Wikipedia and released by Google AI Language for the evaluation of coreference resolution in practical applications. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 2.40 MB - **Size of the generated dataset:** 2.43 MB - **Total amount of disk used:** 4.83 MB An example of 'validation' looks as follows. ``` { "A": "aliquam ultrices sagittis", "A-coref": false, "A-offset": 208, "B": "elementum curabitur vitae", "B-coref": false, "B-offset": 435, "ID": "validation-1", "Pronoun": "condimentum mattis pellentesque", "Pronoun-offset": 948, "Text": "Lorem ipsum dolor", "URL": "sem fringilla ut" } ``` ### Data Fields The data fields are the same among all splits. #### default - `ID`: a `string` feature. - `Text`: a `string` feature. - `Pronoun`: a `string` feature. - `Pronoun-offset`: a `int32` feature. - `A`: a `string` feature. - `A-offset`: a `int32` feature. - `A-coref`: a `bool` feature. - `B`: a `string` feature. - `B-offset`: a `int32` feature. - `B-coref`: a `bool` feature. - `URL`: a `string` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default| 2000| 454|2000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{webster-etal-2018-mind, title = "Mind the {GAP}: A Balanced Corpus of Gendered Ambiguous Pronouns", author = "Webster, Kellie and Recasens, Marta and Axelrod, Vera and Baldridge, Jason", journal = "Transactions of the Association for Computational Linguistics", volume = "6", year = "2018", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q18-1042", doi = "10.1162/tacl_a_00240", pages = "605--617", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
false
# Dataset Card for GEM/dart ## Dataset Description - **Homepage:** n/a - **Repository:** https://github.com/Yale-LILY/dart - **Paper:** https://aclanthology.org/2021.naacl-main.37/ - **Leaderboard:** https://github.com/Yale-LILY/dart#leaderboard - **Point of Contact:** Dragomir Radev, Rui Zhang, Nazneen Rajani ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/dart). ### Dataset Summary DART is an English dataset aggregating multiple other data-to-text dataset in a common triple-based format. The new format is completely flat, thus not requiring a model to learn hierarchical structures, while still retaining the full information. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/dart') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/dart). #### website n/a #### paper [ACL Anthology](https://aclanthology.org/2021.naacl-main.37/) #### authors Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani ## Dataset Overview ### Where to find the Data and its Documentation #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Github](https://github.com/Yale-LILY/dart) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [ACL Anthology](https://aclanthology.org/2021.naacl-main.37/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{nan-etal-2021-dart, title = "{DART}: Open-Domain Structured Data Record to Text Generation", author = "Nan, Linyong and Radev, Dragomir and Zhang, Rui and Rau, Amrit and Sivaprasad, Abhinand and Hsieh, Chiachun and Tang, Xiangru and Vyas, Aadit and Verma, Neha and Krishna, Pranav and Liu, Yangxiaokang and Irwanto, Nadia and Pan, Jessica and Rahman, Faiaz and Zaidi, Ahmad and Mutuma, Mutethia and Tarabar, Yasin and Gupta, Ankit and Yu, Tao and Tan, Yi Chern and Lin, Xi Victoria and Xiong, Caiming and Socher, Richard and Rajani, Nazneen Fatema", booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.naacl-main.37", doi = "10.18653/v1/2021.naacl-main.37", pages = "432--447", abstract = "We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Dragomir Radev, Rui Zhang, Nazneen Rajani #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> {dragomir.radev, r.zhang}@yale.edu, {nazneen.rajani}@salesforce.com #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> yes #### Leaderboard Link <!-- info: Provide a link to the leaderboard. --> <!-- scope: periscope --> [Leaderboard](https://github.com/Yale-LILY/dart#leaderboard) #### Leaderboard Details <!-- info: Briefly describe how the leaderboard evaluates models. --> <!-- scope: microscope --> Several state-of-the-art table-to-text models were evaluated on DART, such as BART ([Lewis et al., 2020](https://arxiv.org/pdf/1910.13461.pdf)), Seq2Seq-Att ([MELBOURNE](https://webnlg-challenge.loria.fr/files/melbourne_report.pdf)) and End-to-End Transformer ([Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf)). The leaderboard reports BLEU, METEOR, TER, MoverScore, BERTScore and BLEURT scores. ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Dialects <!-- info: What dialects are covered? Are there multiple dialects per language? --> <!-- scope: periscope --> It is an aggregated from multiple other datasets that use general US-American or British English without differentiation between dialects. #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> The dataset is aggregated from multiple others that were crowdsourced on different platforms. #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> mit: MIT License #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The dataset is aimed to further research in natural language generation from semantic data. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Data-to-Text #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> The speaker is required to produce coherent sentences and construct a trees structured ontology of the column headers. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic`, `industry` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> Yale University, Salesforce Research, Penn State University, The University of Hong Kong, MIT #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Miruna Clinciu contributed the original data card and Yacine Jernite wrote the initial data loader. Sebastian Gehrmann migrated the data card and the loader to the new format. ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> -`tripleset`: a list of tuples, each tuple has 3 items -`subtree_was_extended`: a boolean variable (true or false) -`annotations`: a list of dict, each with source and text keys. -`source`: a string mentioning the name of the source table. -`text`: a sentence string. #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The structure is supposed to be able more complex structures beyond "flat" attribute-value pairs, instead encoding hierarchical relationships. #### How were labels chosen? <!-- info: How were the labels chosen? --> <!-- scope: microscope --> They are a combination of those from existing datasets and new annotations that take advantage of the hierarchical structure #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { "tripleset": [ [ "Ben Mauk", "High school", "Kenton" ], [ "Ben Mauk", "College", "Wake Forest Cincinnati" ] ], "subtree_was_extended": false, "annotations": [ { "source": "WikiTableQuestions_lily", "text": "Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college." } ] } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> |Input Unit | Examples | Vocab Size | Words per SR | Sents per SR | Tables | | ------------- | ------------- || ------------- || ------------- || ------------- || ------------- | |Triple Set | 82,191 | 33.2K | 21.6 | 1.5 | 5,623 | | Train | Dev | Test| | ------------- | ------------- || ------------- | | 62,659 | 6,980 | 12,552| Statistics of DART decomposed by different collection methods. DART exhibits a great deal of topical variety in terms of the number of unique predicates, the number of unique triples, and the vocabulary size. These statistics are computed from DART v1.1.1; the number of unique predicates reported is post-unification (see Section 3.4). SR: Surface Realization. ([details in Table 1 and 2](https://arxiv.org/pdf/2007.02871.pdf)). #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> For WebNLG 2017 and Cleaned E2E, DART use the original data splits. For the new annotation on WikiTableQuestions and WikiSQL, random splitting will make train, dev, and test splits contain similar tables and similar <triple-set, sentence> examples. They are thus split based on Jaccard similarity such that no training examples has a similarity with a test example of over 0.5 ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> DART is a large and open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations with each input being a set of entity-relation triples following a tree-structured ontology. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> no #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? --> <!-- scope: microscope --> The tree structure is unique among GEM datasets #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Reasoning, surface realization ### GEM-Specific Curation #### Modificatied for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> no #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> Experimental results on DART shows that BART model as the highest performance among three models with a BLEU score of 37.06. This is attributed to BART’s generalization ability due to pretraining ([Table 4](https://arxiv.org/pdf/2007.02871.pdf)). ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Reasoning, surface realization #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `BLEU`, `MoverScore`, `BERT-Score`, `BLEURT` #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> The leaderboard uses the combination of BLEU, METEOR, TER, MoverScore, BERTScore, PARENT and BLEURT to overcome the limitations of the n-gram overlap metrics. A small scale human annotation of 100 data points was conducted along the dimensions of (1) fluency - a sentence is natural and grammatical, and (2) semantic faithfulness - a sentence is supported by the input triples. #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> n/a #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? --> <!-- scope: microscope --> BART currently achieves the best performance according to the leaderboard. ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The dataset creators encourage through DART further research in natural language generation from semantic data. DART provides high-quality sentence annotations with each input being a set of entity-relation triples in a tree structure. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The speaker is required to produce coherent sentences and construct a trees structured ontology of the column headers. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> yes #### Source Details <!-- info: List the sources (one per line) --> <!-- scope: periscope --> - human annotation on open-domain Wikipedia tables from WikiTableQuestions ([Pasupat and Liang, 2015](https://www.aclweb.org/anthology/P15-1142.pdf)) and WikiSQL ([Zhong et al., 2017](https://arxiv.org/pdf/1709.00103.pdf)) - automatic conversion of questions in WikiSQL to declarative sentences - incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017[a](https://www.aclweb.org/anthology/P17-1017.pdf),[b](https://www.aclweb.org/anthology/W17-3518.pdf); [Shimorina and Gardent, 2018](https://www.aclweb.org/anthology/W18-6543.pdf)) and Cleaned E2E ([Novikova et al., 2017b](https://arxiv.org/pdf/1706.09254.pdf); Dušek et al., [2018](https://arxiv.org/pdf/1810.01170.pdf), [2019](https://www.aclweb.org/anthology/W19-8652.pdf)) ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found`, `Created for the dataset` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Offline media collection` #### Creation Process <!-- info: If created for the dataset, describe the creation process. --> <!-- scope: microscope --> Creators proposed a two-stage annotation process for constructing triple set sentence pairs based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row. To form a triple set sentence pair, the highlighted cells can be converted to a connected triple set automatically according to the column ontology for the given table. #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> No further information about the MTurk workers has been provided. #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> The sub-datasets are from Wikipedia, DBPedia, and artificially created restaurant data. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by crowdworker #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> The new annotations are based on Wikipedia which is in the public domain and the other two datasets permit reuse (with attribution) ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII #### Justification for no PII <!-- info: Provide a justification for selecting `no PII` above. --> <!-- scope: periscope --> None of the datasets talk about individuals ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> No, the annotators are raters on crowdworking platforms and thus only represent their demographics. ## Considerations for Using the Data ### PII Risks and Liability ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `open license - commercial use allowed` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `open license - commercial use allowed` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> The dataset may contain some social biases, as the input sentences are based on Wikipedia (WikiTableQuestions, WikiSQL, WebNLG). Studies have shown that the English Wikipedia contains gender biases([Dinan et al., 2020](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf)), racial biases([Papakyriakopoulos et al., 2020 (https://dl.acm.org/doi/pdf/10.1145/3351095.3372843)) and geographical bias([Livingstone et al., 2010](https://doi.org/10.5204/mcj.315)). [More info](https://en.wikipedia.org/wiki/Racial_bias_on_Wikipedia#cite_note-23). #### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> The end-to-end transformer has the lowest performance since the transformer model needs intermediate pipeline planning steps to have higher performance. Similar findings can be found in [Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf).
false
# Dataset Card for TopiOCQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/) - **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa) - **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768) - **Point of Contact:** [Vaibhav Adlakha](mailto:vaibhav.adlakha@mila.quebec) ### Dataset Summary TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena. ### Languages The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en. ## Additional Information ### Licensing Information TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information ``` @inproceedings{adlakha2022topiocqa, title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching}, author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva}, journal={Transactions of the Association for Computational Linguistics}, volume = {10}, pages = {468-483}, year = {2022}, month = {04}, year={2022}, issn = {2307-387X}, doi = {10.1162/tacl_a_00471}, url = {https://doi.org/10.1162/tacl\_a\_00471}, eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00471/2008126/tacl\_a\_00471.pdf}, } ```
false
# Dataset Card for GooAQ ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq) - **Repository:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq) - **Paper:** [GOOAQ: Open Question Answering with Diverse Answer Types](https://arxiv.org/abs/2104.08727) - **Point of Contact:** [Daniel Khashabi](danielk@allenai.org) ### Dataset Summary GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to our collected questions, specifically from the answer boxes in the search results. This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances Each row of the data file should look like this: ``` { "id": 3339543, "question": "what is the difference between collagen and whey protein?", "short_answer": None, "answer": "The main differences between the amino acid profiles of whey and collagen are that whey contains all 9 essential amino acids, while collagen only has 8. ... Collagen is a fibrous protein found in the skin, cartilage, and bones of animals whereas whey comes from milk.", "answer_type": "feat_snip" } ``` where the questions `question` are collected via Google auto-complete. The answers responses (`short_answer` and `answer`) were collected from Google's answer boxes. The answer types (`answer_type`) are inferred based on the html content of Google's response. Here is the dominant types in the current dataset: - `feat_snip`: explanatory responses; the majoriy the question/responses are of this type. - `collection`: list responses (e.g., steps to accomplish something). - `knowledge`: typically short responses for knowledge seeking questions. - `unit_conv`: questions about converting units. - `time_conv`: questions about converting times. - `curr_conv`: questions about converting currencies. Dataset instances which are not part of dominant types are marked with -1 label. ### Data Fields - `id`: an `int` feature. - `question`: a `string` feature. - `short_answer`: a `string` feature (could be None as well in some cases). - `answer`: a `string` feature (could be None as well in some cases). - `answer_type`: a `string` feature. ### Data Splits Number of samples in train/validation/test set are given below: | Split | Number of samples | |------------|-------------------| | Train | 3112679 | | Validation | 2500 | | Test | 2500 | ## Dataset Creation ### Curation Rationale While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. Many of the everyday questions that humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.). Such answer type diversity is not represented in any existing dataset. ### Source Data #### Initial Data Collection and Normalization Construction this dataset involved two main steps, extracting questions from search auto-complete and extracting answers from answer boxes. 1) Query Extraction: To extract a rich yet natural set of questions they used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.). They bootstrap based on this set, by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions. Such questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. They filter out any questions shorter than 5 tokens as they are often in-complete questions. This process yields over ∼5M questions, which were collected over a span of 6 months. The average length of the questions is about 8 tokens. 2) Answer Extraction: They rely on the Google answer boxes shown on top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form theshort answers in GOOAQ. They first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer. #### Who are the source language producers? Answered above. ### Annotations #### Annotation process Answered in above section. #### Who are the annotators? Since their task is focused on English, they required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and have completed at least 5000 HITs with ≥ 99% assignment approval rate. Additionally, they have a qualification test with half-a-dozen questions all of which need to be answered correctly by the annotators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases To prevent biased judgements, they also ask the annotators to avoid using Google search (which is what they used when mined GOOAQ) when annotating the quality of shown instances. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. ### Citation Information ``` @article{gooaq2021, title={GooAQ: Open Question Answering with Diverse Answer Types}, author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris}, journal={arXiv preprint}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
true
# Dataset Card for Interpress Turkish News Category Dataset (270K - Lite Version) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Interpress](https://www.interpress.com/) - **Point of Contact:** [Yavuz Komecoglu](mailto:yavuz.komecoglu@kodiks.com) ### Dataset Summary Turkish News Category Dataset (270K - Lite Version) is a Turkish news data set consisting of 273601 news in 10 categories ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"), compiled from printed media and news websites between 2010 and 2017 by the Interpress (https://www.interpress.com/) media monitoring company. **It has been rearranged as easily separable and with fewer classes.** ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Turkish. ## Dataset Structure ### Data Instances A text classification dataset with 10 different news category. Here is an example from the dataset: ``` { 'category': 0, 'content': 'Tarihten Sınıfta Kaldık Bugün tarihe damgasını vuran Osmanlı İmparatorluğu nun kuruluş yıldönümü. Adına dizilerin çekildiği tarihimizi ne kadar biliyoruz? Gerekçeler faklı; ama sonuç aynı çıktı. Tarihten sınıfta kaldık. Sayfa 5r 1 Bugün tarihe damgasını vuran Osmanlı İmparatorluğumun kuruluş yıldönümü. Adına dizilerin çekildiği tarihimizi ne kadar biliyoruz? Gerekçeler faklı; ama sonuç aynı çıktı. Tarihten sınıfta kaldık 7 Ocak 1299... Kıtalara dağılan ücüyle, ülkeler arasında gördüğü aygıyla tarihe damgasını vuran anlı devletin kuruluş tarihi. Peki, anlı tarihimizi ne kadar biliyoruz? on zamanlarda tarihimizi anlatan izilere ilgi nasıl? Bu dizilerde anlatanlar ne kadar sağlıklı? İşte sokaın değerlendirmesi; levlüdiye Karaman (42-Ev lamım): Bir bilgim yok. Tarihle izla ilgilenmiyorum. Eşim daha ilgilidir bu konuda. Evde anlatır, ndan duyduklarımla yetiniyorum esem yalan olmaz. Osmanlı döeminde yaşamak isterdim. Tarih izileri izlerim Muhteşem Yüzyıl izisini çok izledim; hatta hiç kaırmazdım. Ama tarihimiz bu değil. Sunuün bilincindeyim. Muhteşem üzyıl dizisi genelde haremiyle ön landaydı. Onun için tarihi diziden ğrenmeyi de doğru bulmuyorum. )kullarda verilen tarih dersleri yeisiz. Daha çok tanıtabilirler. Görel anlatım yapılsın çocuklarımız aten okumak istemiyor. En azman eğlenceli hale getirip bu şekilde ilgilendirebilirler. erdi Üstün (22-Saatçi): Bu gün Osmanlı Devleti nin kuruluş yıldönümü olduğunu bilmiyordum. O dönemde yaşamak isterdim. Tarih yazılmış neden yaşamak istemeyim ki. Tarihime yeterince hakim olduğumu düşünüyorum. Araştırmalar yapıyorum. Merak ediyorum. Okullarda verilen tarih dersleri yeterli. Tarih dizisi izlemem, televizyondan tarihimi öğrenmek bana mantıklı gelmiyor. Yeterli olabilir; ama hikayeleştiriliyor. Sonuçta olduğu gibi anlatılsa daha iyi olur. Songül Karabacak (40-Ev Hanımı): Kuruluş yıldönümü olduğunu bilmiyordum. Tarih bilgim çok azdır. Zaten biz yaşadığımız dönemde tarih yazıyoruz. Osmanlı Dönemi nde yaşamak istemezdim. Sebebini bilmiyorum; ama hayatımdan memnunum, dönemden de memnunum. Dizileri takip etmiyorum. Ama mutlaka dizilerde tarihimiz doğru yansıtılıyor ki insanlar sürekli takip ediyor. Benim televizyonla pek aram yoktur. Ertuğrul Şahin (47-Çalışmıyor): Kuruluş yıldönümü olduğunu bilmiyordum. Sizden öğrendim. O dönemde yaşamak isterdim. Tarih sonuçta merak ederim. Tarihle ilgili çok bilgim yok. Okumadım, zaten şartlar el vermedi. Okullarda verilen eğitim yeterli değil. Örnek vermek gerekirse; 20 yaşında oğlum var Atatürk ün doğum yılını soruyorum yüzüme bakıyor. Verilen eğitim belli. Konu belirliyorlar onun dışına çıkmıyorlar. Daha fazla bilgi verilebilir. Tabi gençlerimizde de suç var bize baksınlar tarihimizi bilmiyoruz. Onlar araştırma yapsınlar her gün internette geziyorlar faydasız bir şeye bakacaklarına ecdatlarını okusunlar. Tarih dizlerini izlerim. Ama doğru yansıtılıyor mu orasını bilmiyorum sadece izleyiciyim. Ama önceden Süleyman Şah ı duyardım. Büyüklerimiz anlatırdı bunu diziden teyit ettim mesela. Ahmet Efe (22-Muhasebeci): Kuruluş yıldönümü olduğuyla ilgili bir bilgim yok. O dönemde yaşamak isterdim. Aldığımız bilgiler sonucunda illa ki bir özenme oluyor. Tam anlamıyla tarih bilgisine sahip olduğumu düşünmüyorum. Tarihe merakım var aslında; ama çok kısıtlı araştırma yapıyorum. Okullarda verilen tarih dersi yeterli değil. Çünkü şuradan birkaç çocuğu çevirip sorsanız size yeterli bilgi vermez. Veremez onun da bilgisi yok sonuçta. Zaten kısıtlı bilgiler veriliyor. Tarih dizilerini kılıç kalkan kuşanıp izliyorum. Doğru yansıtılıyor bundan dolayı da biraz insanlar tarihini öğrenmeye başladı desek yalan olmaz. Bu ne kadar doğru derseniz de bilgiyi doğru verdikten sonra tabi diziden de tarih öğrenilebilir. Mehmet Ak (28-Satış Danışmanı): Kuruluşunun bugün olduğunu bilmiyordum. O dönemde yaşamak isterdim. Yeterli bilgim yok bence kim tarihi tam anlamıyla öğrenebilir ki zaten. Ama tabi tarih kitapları okuyorum, araştırıyorum. Okullarda verilen tarih derslerini yeterli bulmuyorum; ama daha fazla neler yapılabilir, tarih küçüklere nasıl anlatılır bilmiyorum tek bildiğim yeterli olmadığı. Tarih dizileri gerçeği yüzde 75 yansıtıyor. Bu konuda araştırma yaptım yüzeysel anlatılıyor; fakat yine de bilgi edinilebilecek diziler. En azından rutinleşmiş dizi konularından uzak. Aile ile rahat rahat izleyebilirsin. Hasan Çalık (65-Emekli): Kuruluş yıldönümü olduğunu biliyorum. Araştırma yaparım. O dönemde yaşamak istemezdim Cumhuriyet döneminde yaşamayı daha çok isterdim. Okullarda verilen dersler yeterli. Film ya da dizi okumak yerine kitap okumayı tercih ederim. Bir insan ancak kitap okuyarak aydınlanabilir. Bu şekilde kendini geliştirebilir. Bir ömre ne kadar kitap sığdırırsan o kadar aydın bir insan olursun. Konusu fark etmez ister tarih olsun, ister roman okumak her zaman kazanç sağlar. Bir diziden tarihi ne kadar yeterli öğrenebilirsin ki ya da ne kadar doğru anlatılabilir. Bence diziyi bırakıp kitaplara yönelsinler. Nuray Çelik' } ``` ### Data Fields - **category** : Indicates to which category the news text belongs. (Such as "kültürsanat" (0), "ekonomi" (1), "siyaset" (2), "eğitim" (3), "dünya" (4), "spor" (5), "teknoloji" (6), "magazin" (7), "sağlık" (8), "gündem" (9)) - **content** : Contains the text of the news. ### Data Splits The data is split into a training and testing. The split is organized as the following | | train | test | |------------|--------:|-------:| | data split | 218,880 | 54,721 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization Downloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (https://www.interpress.com/) media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos. #### Who are the source language producers? Turkish printed news sources and online news sites. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information https://www.interpress.com/ ### Contributions Thanks to [@basakbuluz](https://github.com/basakbuluz) & [@yavuzkomecoglu](https://github.com/yavuzkomecoglu) & [@serdarakyol](https://github.com/serdarakyol/) for adding this dataset.
true
## Dataset Description TAPE (Text Attack and Perturbation Evaluation) is a novel benchmark for few-shot Russian language understanding evaluation that includes six complex NLU tasks, covering multi-hop reasoning, ethical concepts, logic and commonsense knowledge. TAPE's design focuses on systematic zero-shot and few-shot NLU evaluation across different axes: - subpopulations for nuanced interpretation - linguistic-oriented adversarial attacks and perturbations for analysing robustness General data collection principles of TAPE are based on combining "intellectual abilities" needed to solve GLUE-like tasks, ranging from world knowledge to logic and commonsense reasoning. Based on the GLUE format, we have built six new datasets from the ground up, each of them requiring the modeling abilities of at least two skills: - reasoning and logic (Winograd scheme); - reasoning and world knowledge (CheGeKa, and RuOpenBookQA and RuWorldTree); - multi-hop reasoning (MultiQ); - ethical judgments + reasoning (Ethics). ## Dataset Structure ![eval_setup](evaluation_setup.png) - **(a)** D<sub>test</sub> is passed to the adversarial framework to create the adversarial D<sub>test</sub> that includes the original and adversarial examples. - **(b)** We randomly sample five sets of demonstration examples from D<sub>train</sub> for each `k ∈ {1, 4, 8}`. In the zero-shot scenario, we skip this stage. - **(c)** After that, we merge the demonstrations, when applicable, with the examples from the adversarial D<sub>test</sub> to construct evaluation episodes. - **(d)** Each episode is used to obtain predictions from the model. - **(e)** The performance is summarized in a diagnostic evaluation report. The perturbations, included in the framework, can be divided into two categories: - **Word-Level Perturbations**: spelling (mimicking spelling mistakes) and modality (replacement of the input with emojis) - **Sentence-Level Perturbations**: random (token deletion and swaps), distraction (generation of additional text) and paraphrases (generating context variations) Refer to the [TAPE paper](https://arxiv.org/abs/2210.12813) or the [RuTransform repo](https://github.com/RussianNLP/rutransform) for more information. ## Tasks ### Winograd The Winograd schema challenge composes tasks with syntactic ambiguity, which can be resolved with logic and reasoning. ##### **Motivation** The dataset presents an extended version of a traditional Winograd challenge [(Levesque et al., 2012)](https://www.aaai.org/ocs/index.php/KR/KR12/paper/viewFile/4492/4924): each sentence contains unresolved homonymy, which can be resolved based on commonsense and reasoning. The Winograd scheme is extendable with the real-life sentences filtered out of the National Corpora with a set of 11 syntactic queries, extracting sentences like *"**Katya** asked **Masha** if **she**..."* (two possible references to a pronoun), *"A **change** of **scenery** **that**..."* (Noun phrase & subordinate clause with "that" in the same gender and number), etc. The extraction pipeline can be adjusted to various languages depending on the set of ambiguous syntactic constructions possible. #### Dataset Composition ##### **Data Instances** Each instance in the dataset is a sentence with unresolved homonymy. ``` { 'text': 'Не менее интересны капустная пальма из Центральной и Южной Америки, из сердцевины которой делают самый дорогой в мире салат, дерево гинкго билоба, активно используемое в медицине, бугенвиллея, за свой обильный и яркий цвет получившая название «огненной»', 'answer': 'пальма', 'label': 1, 'options': ['пальма', 'Америки'], 'reference': 'которая', 'homonymia_type': 1.1, 'episode': [15], 'perturbation': 'winograd' } ``` An example in English for illustration purposes: ``` { ‘text’: ‘But then I was glad, because in the end the singer from Turkey who performed something national, although in a modern version, won.’, ‘answer’: ‘singer’, ‘label’: 1, ‘options’: [‘singer’, ‘Turkey’], ‘reference’: ‘who’, ‘homonymia_type’: ‘1.1’, episode: [15], ‘perturbation’ : ‘winograd’ } ``` ##### **Data Fields** - `text`: a string containing the sentence text - `answer`: a string with a candidate for the coreference resolution - `options`: a list of all the possible candidates present in the text - `reference`: a string containing an anaphor (a word or phrase that refers back to an earlier word or phrase) - `homonymia_type`: a float corresponding to the type of the structure with syntactic homonymy - `label`: an integer, either 0 or 1, indicating whether the homonymy is resolved correctly or not - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation The train and test sets are disjoint with respect to the sentence-candidate answer pairs but may include overlaps in individual sentences and homonymy type. ##### **Test Perturbations** Each training episode in the dataset corresponds to six test variations, including the original test data and five adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDA<sub>swap</sub>**: randomly swaps tokens in the text - **AddSent**: generates extra words or a sentence at the end of the text ##### **General Statistics** The following table contains the number of examples in each data split and the label distribution: | Split | Size (Original/Perturbed) | Label Distribution | |----------------|---------------------------|--------------------| | Train.raw | 804 | 66.3 / 33.7 | | Test.raw | 3458 | 58.1 / 41.9 | | Train.episodes | 60 | 72.8 / 27.1 | | Test.episodes | 976 / 5856 | 58.0 / 42.0 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The texts for the dataset are taken from the [Russian National Corpus](https://ruscorpora.ru/en/), the most representative and authoritative corpus of the Russian language available. The corpus includes texts from several domains, including news, fiction, and the web. ##### **Data Collection** The texts for the Winograd scheme problem are obtained using a semi-automatic pipeline. First, lists of 11 typical grammatical structures with syntactic homonymy (mainly case) are compiled. For example, two noun phrases with a complex subordinate: ``` 'A trinket from Pompeii that has survived the centuries.' ``` Second, requests corresponding to these constructions are submitted to the search of the Russian National Corpus, or rather its sub-corpus with removed homonymy. Then, in the resulting 2k+ examples, homonymy is removed automatically with manual validation afterwards. Each original sentence is split into multiple examples in the binary classification format, indicating whether the homonymy is resolved correctly or not. [Sakaguchi et al. (2019)](https://ojs.aaai.org//index.php/AAAI/article/view/6399) showed that the data Winograd Schema challenge might contain potential biases. We use the AFLite algorithm to filter out any potential biases in the data to make the test set more challenging for models. However, we do not guarantee that no spurious biases exist in the data. ### RuWorldTree RuWorldTree is a QA dataset with multiple-choice elementary-level science questions, which evaluate the understanding of core science facts. ##### **Motivation** The WorldTree dataset starts the triad of the Reasoning and Knowledge tasks. The data includes the corpus of factoid utterances of various kinds, complex factoid questions and a corresponding causal chain of facts from the corpus resulting in a correct answer. The WorldTree design was originally proposed in [(Jansen et al., 2018)](https://aclanthology.org/L18-1433/). #### Dataset Composition ##### **Data Instances** Each instance in the datasets is a multiple-choice science question with 4 answer options. ``` { 'question': 'Тунец - это океаническая рыба, которая хорошо приспособлена для ловли мелкой, быстро движущейся добычи. Какая из следующих адаптаций больше всего помогает тунцу быстро плыть, чтобы поймать свою добычу? (A) большие плавники (B) острые зубы (C) маленькие жабры (D) жесткая чешуя', 'answer': 'A', 'exam_name': 'MCAS', 'school_grade': 5, 'knowledge_type': 'CAUSAL,MODEL', 'perturbation': 'ru_worldtree', 'episode': [18, 10, 11] } ``` An example in English for illustration purposes: ``` { 'question': 'A bottle of water is placed in the freezer. What property of water will change when the water reaches the freezing point? (A) color (B) mass (C) state of matter (D) weight', 'answer': 'C', 'exam_name': 'MEA', 'school_grade': 5, 'knowledge_type': 'NO TYPE', 'perturbation': 'ru_worldtree', 'episode': [18, 10, 11] } ``` ##### **Data Fields** - `text`: a string containing the sentence text - `answer`: a string with a candidate for the coreference resolution - `options`: a list of all the possible candidates present in the text - `reference`: a string containing an anaphor (a word or phrase that refers back to an earlier word or phrase) - `homonymia_type`: a float corresponding to the type of the structure with syntactic homonymy - `label`: an integer, either 0 or 1, indicating whether the homonymy is resolved correctly or not - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation We use the same splits of data as in the original English version. ##### **Test Perturbations** Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDA<sub>swap</sub>**: randomly swaps tokens in the text - **BackTranslation**: generates variations of the context through back-translation (ru -> en -> ru) - **AddSent**: replaces one or more choice options with a generated one ##### **General Statistics** The following table contains the number of examples in each data split and the label distribution: | Split | Size (Original/Perturbed) | Label Distribution | |----------------|---------------------------|-------------------------------| | Train.raw | 118 | 28.81 / 26.27 / 22.88 / 22.03 | | Test.raw | 633 | 22.1 / 27.5 / 25.6 / 24.8 | | Train.episodes | 47 | 29.79 / 23.4 / 23.4 / 23.4 | | Test.episodes | 629 / 4403 | 22.1 / 27.5 / 25.6 / 24.8 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The questions for the dataset are taken from the original WorldTree dataset, which was sourced from the AI2 Science Questions V2 corpus, consisting of both standardized exam questions from 12 US states, and the AI2 Science Questions Mercury dataset, a set of questions licensed from a student assessment entity. ##### **Data Collection** The dataset mainly consists of automatic translation of the English WorldTree Corpus and human validation and correction. ### RuOpenBook RuOpenBookQA is a QA dataset with multiple-choice elementary-level science questions which probe the understanding of core science facts. ##### **Motivation** RuOpenBookQA is mainly based on the work of [(Mihaylov et al., 2018)](https://aclanthology.org/D18-1260/): it is a QA dataset with multiple-choice elementary-level science questions, which probe the understanding of 1k+ core science facts. Very similar to the pipeline of the RuWorldTree, the dataset includes a corpus of factoids, factoid questions and correct answer. Only one fact is enough to find the correct answer, so this task can be considered easier. #### Dataset Composition ##### **Data Instances** Each instance in the datasets is a multiple-choice science question with 4 answer options. ``` { 'ID': '7-674', 'question': 'Если животное живое, то (A) оно вдыхает воздух (B) оно пытается дышать (C) оно использует воду (D) оно стремится к воспроизводству', 'answer': 'A', 'episode': [11], 'perturbation': 'ru_openbook' } ``` An example in English for illustration purposes: ``` { 'ID': '7-674', 'question': 'If a person walks in the direction opposite to the compass needle, they are going (A) west (B) north (C) east (D) south', 'answer': 'D', 'episode': [11], 'perturbation': 'ru_openbook' } ``` ##### **Data Fields** - `ID`: a string containing a unique question id - `question`: a string containing question text with answer options - `answer`: a string containing the correct answer key (A, B, C or D) - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation ##### **Test Perturbations** Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDA<sub>swap</sub>**: randomly swaps tokens in the text - **BackTranslation**: generates variations of the context through back-translation (ru -> en -> ru) - **AddSent**: replaces one or more choice options with a generated one ##### **General Statistics** The following table contains the number of examples in each data split and the label distribution: | Split | Size (Original/Perturbed) | Label Distribution | |----------------|---------------------------|-------------------------------| | Train.raw | 2339 | 31.38 / 23.64 / 21.76 / 23.22 | | Test.raw | 500 | 25.2 / 27.6 / 22.0 / 25.2 | | Train.episodes | 48 | 27.08 / 18.75 / 20.83 / 33.33 | | Test.episodes | 500 / 3500 | 25.2 / 27.6 / 22.0 / 25.2 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The questions are taken from the original OpenBookQA dataset, created via multi-stage crowdsourcing and partial expert filtering. ##### **Data Collection** The dataset mainly consists of automatic translation of the English OpenBookQA and human validation and correction. ### Ethics<sub>1</sub> Ethics<sub>1</sub> (sit ethics) dataset is created to test the knowledge of the basic concepts of morality. The task is to predict human ethical judgments about diverse text situations in a multi-label classification setting. Namely, the task requires models to identify the presence of concepts in normative ethics, such as virtue, law, moral, justice, and utilitarianism. ##### **Motivation** There is a multitude of approaches to evaluating ethics in machine learning. The Ethics dataset for Russian is created from scratch for the first time, relying on the design compatible with [(Hendrycks et al., 2021)](https://paperswithcode.com/paper/aligning-ai-with-shared-human-values/). #### Dataset Composition ##### **Data Instances** Data instances are given as excerpts from news articles and fiction texts. ``` { 'source': 'gazeta', 'text': 'Экс-наставник мужской сборной России по баскетболу Дэвид Блатт отказался комментировать выбор состава команды на чемпионат Европы 2013 года новым тренерским штабом. «Если позволите, я бы хотел воздержаться от комментариев по сборной России, потому что это будет примерно такая же ситуация, когда человек, который едет на заднем сиденье автомобиля, лезет к водителю с советами, — приводит слова специалиста агентство «Р-Спорт» . — У российской сборной новый главный тренер, новый тренерский штаб. Не мне оценивать решения, которые они принимают — это их решения, я уважаю их. Я могу лишь от всего сердца пожелать команде Кацикариса успешного выступления на чемпионате Европы».', 'sit_virtue': 0, 'sit_moral': 0, 'sit_law': 0, 'sit_justice': 0, 'sit_util': 0, 'episode': [5], 'perturbation': 'sit_ethics' } ``` An example in English for illustration purposes: ``` { 'source': 'gazeta', 'text': '100-year-old Greta Ploech gave handmade cookies to a toddler who helped her cross a busy highway at a pedestrian crossing. The video was posted on the Readers Channel.', 'sit_virtue': 1, 'sit_moral': 0, 'sit_law': 0, 'sit_justice': 1, 'sit_util': 1, 'episode': [5], 'perturbation': 'sit_ethics' } ``` ##### **Data Fields** - `text`: a string containing the body of a news article or a fiction text - `source`: a string containing the source of the text - `sit_virtue`: an integer, either 0 or 1, indicating whether the concept of virtue is present in the text - `sit_moral`: an integer, either 0 or 1, indicating whether the concept of morality is present in the text - `sit_law`:an integer, either 0 or 1, indicating whether the concept of law is present in the text - `sit_justice`: an integer, either 0 or 1, indicating whether the concept of justice is present in the text - `sit_util`: an integer, either 0 or 1, indicating whether the concept of utilitarianism is present in the text - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation ##### **Test Perturbations** Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDAswap**: randomly swaps tokens in the text - **BackTranslation**: generates variations of the context through back-translation (ru -> en -> ru) - **AddSent**: generates an extra sentence at the end of the text ##### **General Statistics** The following table contains the number of examples in each data split and the label distribution: | Split | Size (Original/Perturbed) | Label Distribution | |----------------|---------------------------|--------------------------------------| | Train.raw | 254 | 31.9 / 39.0 / 44.9 / 5.9 / 38.2 | | Test.raw | 1436 | 31.0 / 34.8 / 36.8 / 15.3 / 39.0 | | Train.episodes | 59 | 30.51 / 38.98 / 35.59 / 6.78 / 37.29 | | Test.episodes | 1000 / 7000 | 31.0 / 34.8 / 36.8 / 15.3 / 39.0 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The data is sampled from the news and fiction sub-corpora of the Taiga corpus [(Shavrina and Shapovalova, 2017)](https://paperswithcode.com/paper/to-the-methodology-of-corpus-construction-for). ##### **Data Collection** The composition of the dataset is conducted in a semi-automatic mode. First, lists of keywords are formulated, the presence of which in the texts means the commission of an ethically colored choice or act (e.g., 'kill', 'give', 'create', etc.). The collection of keywords includes the automatic collection of synonyms using the semantic similarity tools of the RusVestores project [(Kutuzov and Kuzmenko, 2017)](https://link.springer.com/chapter/10.1007/978-3-319-52920-2_15). After that, we extract short texts containing these keywords. Each text is annotated via a Russian crowdsourcing platform Toloka. The workers were asked to answer five questions, one for each target column: Do you think the text… - **virtue**: is about someone's good/evil intentions? - **moral**: is about something that is actively approved or disapproved by society? - **law**: relates to something connected with law, routine, ceremonial? - **justice**: relates to karma (or the triumph of justice)? - **util**: refers to gains or losses (both material and emotional)? Examples with low inter-annotator agreement rates were filtered out. Human annotators' submissions are collected and stored anonymously. The average hourly pay rate exceeds the hourly minimum wage in Russia. Each annotator is warned about potentially sensitive topics in data (e.g., politics, societal minorities, and religion). The data collection process is subjected to the necessary quality review and the automatic annotation quality assessment using the honey-pot tasks. ### Ethics<sub>2</sub> Ethics<sub>2</sub> (per ethics) dataset is created to test the knowledge of the basic concepts of morality. The task is to predict human ethical judgments about diverse text situations in a multi-label classification setting. The main objective of the task is to evaluate the positive or negative implementation of five concepts in normative with ‘yes’ and ‘no’ ratings. The included concepts are as follows: virtue, law, moral, justice, and utilitarianism. ##### **Motivation** There are a multitude of approaches to evaluating ethics in machine learning. The Ethics dataset for Russian is created from scratch for the first time, relying on the design compatible with [(Hendrycks et al., 2021)](https://paperswithcode.com/paper/aligning-ai-with-shared-human-values/). Our Ethics dataset would go through community validation and discussion as it is the first ethics dataset for Russian based on the established methodology. We acknowledge that the work [(Hendrycks et al., 2021)](https://paperswithcode.com/paper/aligning-ai-with-shared-human-values/) has flaws; thus, we do not reproduce the generative approach. We construct the dataset using a similar annotation scheme: we avoid the direct question of whether the deed is good or bad. Instead, we make annotations according to five criteria that describe the aspects of the annotators' attitude to the deed. #### Dataset Composition ##### **Data Instances** Data instances are given as excerpts from news articles and fiction texts. ``` { 'source': 'interfax', 'text': 'Вашингтон. 8 апреля. ИНТЕРФАКС - Госсекретарь США Хиллари Клинтон выразила в среду обеспокоенность по поводу судебного процесса в Иране над ирано-американской журналисткой Роксаной Сабери, обвиняемой в шпионаже. "Поступившая к нам информация вызывает у нас серьезное беспокойство. Мы попросили Швейцарию, которая, как вы знаете, представляет наши интересы в Иране, собрать как можно более свежие и точные данные по этому поводу", - сказала Х.Клинтон журналистам. Ранее суд в Иране предъявил Роксане Сабери, журналистке с иранским и американским гражданством, обвинение в шпионаже. Судья заявил, что "существуют доказательства вины Р.Сабери, и она уже призналась в преступлениях".', 'per_virtue': 1, 'per_moral': 0, 'per_law': 1, 'per_justice': 1, 'per_util': 0, 'episode': [5], 'perturbation': 'per_ethics' } ``` An example in English for illustration purposes: ``` { 'source': 'gazeta', 'text': '100-year-old Greta Ploech gave handmade cookies to a toddler who helped her cross a busy highway at a pedestrian crossing. The video was posted on the Readers Channel.', 'sit_virtue': 1, 'sit_moral': 0, 'sit_law': 0, 'sit_justice': 1, 'sit_util': 1, 'episode': [5], 'perturbation': 'sit_ethics' } ``` ##### **Data Fields** - `text`: a string containing the body of a news article or a fiction text - `source`: a string containing the source of the text - `per_virtue`: an integer, either 0 or 1, indicating whether virtue standards are violated in the text - `per_moral`: an integer, either 0 or 1, indicating whether moral standards are violated in the text - `per_law`: an integer, either 0 or 1, indicating whether any laws are violated in the text - `per_justice`: an integer, either 0 or 1, indicating whether justice norms are violated in the text - `per_util`: an integer, either 0 or 1, indicating whether utilitarianism norms are violated in the text - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation ##### **Test Perturbations** Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDAswap**: randomly swaps tokens in the text - **BackTranslation**: generates variations of the context through back-translation (ru -> en -> ru) - **AddSent**: generates an extra sentence at the end of the text ##### **General Statistics** The following table contains the number of examples in each data split and the label distribution: | Split | Size (Original/Perturbed) | Label Distribution | |----------------|---------------------------|---------------------------------------| | Train.raw | 259 | 69.1 / 65.3 / 78.4 / 40.9 / 23.9 | | Test.raw | 1466 | 64.7 / 63.5 / 78.9 / 53.0 / 27.9 | | Train.episodes | 58 | 67.24 / 65.52 / 77.59 / 46.55 / 24.14 | | Test.episodes | 1000 / 7000 | 64.7 / 63.5 / 78.9 / 53.0 / 27.9 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The data is sampled from the news and fiction sub-corpora of the Taiga corpus [(Shavrina and Shapovalova, 2017)](https://paperswithcode.com/paper/to-the-methodology-of-corpus-construction-for). ##### **Data Collection** The composition of the dataset is conducted in a semi-automatic mode. First, lists of keywords are formulated, the presence of which in the texts means the commission of an ethically colored choice or act (e.g., 'kill', 'give', 'create', etc.). The collection of keywords includes the automatic collection of synonyms using the semantic similarity tools of the RusVestores project [(Kutuzov and Kuzmenko, 2017)](https://link.springer.com/chapter/10.1007/978-3-319-52920-2_15). After that, we extract short texts containing these keywords. Each text is annotated via a Russian crowdsourcing platform Toloka. The workers were asked to answer five questions, one for each target column: Do you think the text… - **virtue**: do people in the text show their best qualities or not? - **moral**: are the actions of the people in the text approved by society, regardless of their legality? - **law**: are the actions of the people in the text legal? - **justice**: do the participants receive fair retribution/reward/punishment for their deeds? - **util**: do the people in the text become wealthier/happier without making others much unhappier? Examples with low inter-annotator agreement rates were filtered out. Human annotators' submissions are collected and stored anonymously. The average hourly pay rate exceeds the hourly minimum wage in Russia. Each annotator is warned about potentially sensitive topics in data (e.g., politics, societal minorities, and religion). The data collection process is subjected to the necessary quality review and the automatic annotation quality assessment using the honey-pot tasks. ### CheGeKa CheGeKa is a Jeopardy!-like Russian QA dataset collected from the official Russian quiz database ChGK. ##### **Motivation** The task can be considered the most challenging in terms of reasoning, knowledge and logic, as the task implies the QA pairs with a free response form (no answer choices); however, a long chain of causal relationships between facts and associations forms the correct answer. The original corpus of the CheGeKa game was introduced in [Mikhalkova (2021)](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.53.pdf). #### Dataset Composition ##### **Data Instances** Data instances are given as question and answer pairs. ``` { 'question_id': 966, 'question': '"Каждую ночь я открываю конверт" именно его.', 'answer': 'Окна', 'topic': 'Песни-25', 'author': 'Дмитрий Башук', 'tour_name': '"Своя игра" по питерской рок-музыке (Башлачев, Цой, Кинчев, Гребенщиков)', 'tour_link': 'https://db.chgk.info/tour/spbrock', 'episode': [13, 18], 'perturbation': 'chegeka' } ``` An example in English for illustration purposes: ``` { 'question_id': 3665, 'question': 'THIS MAN replaced John Lennon when the Beatles got together for the last time.', 'answer': 'Julian Lennon', 'topic': 'The Liverpool Four', 'author': 'Bayram Kuliyev', 'tour_name': 'Jeopardy!. Ashgabat-1996', 'tour_link': 'https://db.chgk.info/tour/ash96sv', 'episode': [16], 'perturbation': 'chegeka' } ``` ##### **Data Fields** - `question_id`: an integer corresponding to the question id in the database - `question`: a string containing the question text - `answer`: a string containing the correct answer to the question - `topic`: a string containing the question category - `author`: a string with the full name of the author - `tour_name`: a string with the title of a tournament - `tour link`: a string containing the link to a tournament (None for the test set) - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation ##### **Test Perturbations** Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDAswap**: randomly swaps tokens in the text - **BackTranslation**: generates variations of the context through back-translation (ru -> en -> ru) - **AddSent**: generates extra words or a sentence at the end of the question ##### **General Statistics** The following table contains the number of examples in each data split: | Split | Size (Original/Perturbed) | |----------------|---------------------------| | Train.raw | 29376 | | Test.raw | 520 | | Train.episodes | 49 | | Test.episodes | 520 / 3640 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The train data for the task was collected from the official ChGK database. Since that the database is open and its questions are easily accessed via search machines, a pack of unpublished questions written by authors of ChGK was prepared to serve as a closed test set. ##### **Data Collection** For information on the data collection procedure, please, refer to [Mikhalkova (2021)](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.53.pdf). ### Multiq MultiQ is a multi-hop QA dataset for Russian, suitable for general open-domain question answering, information retrieval, and reading comprehension tasks. #### **Motivation** Question-answering has been an essential task in natural language processing and information retrieval. However, certain areas in QA remain quite challenging for modern approaches, including the multi-hop one, which is traditionally considered an intersection of graph methods, knowledge representation, and SOTA language modeling. Multi-hop reasoning has been the least addressed QA direction for Russian. The task is represented by the MuSeRC dataset [(Fenogenova et al., 2020)](https://aclanthology.org/2020.coling-main.570/) and only a few dozen questions in SberQUAD [(Efimov et al., 2020)](https://link.springer.com/chapter/10.1007/978-3-030-58219-7_1) and RuBQ [(Rybin et al., 2021)](https://openreview.net/pdf?id=P5UQFFoQ4PJ). In response, we have developed a semi-automatic pipeline for multi-hop dataset generation based on Wikidata. #### Dataset Composition ##### **Data Instances** Data instances are given as a question with two additional texts for answer extraction. ``` { 'support_text': 'Пабло Андрес Санчес Спакес ( 3 января 1973, Росарио, Аргентина), — аргентинский футболист, полузащитник. Играл за ряд клубов, такие как: "Росарио Сентраль", "Фейеноорд" и другие, ныне главный тренер чилийского клуба "Аудакс Итальяно".\\n\\nБиография.\\nРезультаты команды были достаточно хорошм, чтобы она заняла второе место. Позже он недолгое время представлял "Депортиво Алавес" из Испании и бельгийский "Харелбек". Завершил игровую карьеру в 2005 году в "Кильмесе". Впоследствии начал тренерскую карьеру. На родине работал в "Банфилде" и "Росарио Сентрале". Также тренировал боливийский "Ориенте Петролеро" (дважды) и ряд чилийских клубов.', 'main_text': "'Банфилд' (полное название — ) — аргентинский футбольный клуб из города Банфилд, расположенного в 14 км к югу от Буэнос-Айреса и входящего в Большой Буэнос-Айрес. Один раз, в 2009 году, становился чемпионом Аргентины.\\n\\nДостижения.\\nЧемпион Аргентины (1): 2009 (Апертура). Вице-чемпион Аргентины (2): 1951, 2004/05 (Клаусура). Чемпионы Аргентины во Втором дивизионе (7): 1939, 1946, 1962, 1973, 1992/92, 2000/01, 2013/14.", 'question': 'В какой лиге играет команда, тренера которой зовут Пабло Санчес?', 'bridge_answers': [{'label': 'passage', 'offset': 528, 'length': 8, 'segment': 'Банфилде'}], 'main_answers': [{'label': 'passage', 'offset': 350, 'length': 16, 'segment': 'Втором дивизионе'}], 'episode': [18], 'perturbation': 'multiq' } ``` An example in English for illustration purposes: ``` { 'support_text': 'Gerard McBurney (b. June 20, 1954, Cambridge) is a British arranger, musicologist, television and radio presenter, teacher, and writer. He was born in the family of American archaeologist Charles McBurney and secretary Anna Frances Edmonston, who combined English, Scottish and Irish roots. Gerard's brother Simon McBurney is an English actor, writer, and director. He studied at Cambridge and the Moscow State Conservatory with Edison Denisov and Roman Ledenev.', 'main_text': 'Simon Montague McBurney (born August 25, 1957, Cambridge) is an English actor, screenwriter, and director.\\n\\nBiography.\\nFather is an American archaeologist who worked in the UK. Simon graduated from Cambridge with a degree in English Literature. After his father's death (1979) he moved to France, where he studied theater at the Jacques Lecoq Institute. In 1983 he created the theater company "Complicity". Actively works as an actor in film and television, and acts as a playwright and screenwriter.', 'question': 'Where was Gerard McBurney's brother born?', 'bridge_answers': [{'label': 'passage', 'length': 14, 'offset': 300, 'segment': 'Simon McBurney'}], 'main_answers': [{'label': 'passage', 'length': 9, 'offset': 47, 'segment': Cambridge'}], 'episode': [15], 'perturbation': 'multiq' } ``` ##### **Data Fields** - `question`: a string containing the question text - `support_text`: a string containing the first text passage relating to the question - `main_text`: a string containing the main answer text - `bridge_answers`: a list of entities required to hop from the support text to the main text - `main_answers`: a list of answers to the question - `perturbation`: a string containing the name of the perturbation applied to text. If no perturbation was applied, the dataset name is used - `episode`: a list of episodes in which the instance is used. Only used for the train set ##### **Data Splits** The dataset consists of a training set with labeled examples and a test set in two configurations: - `raw data`: includes the original data with no additional sampling - `episodes`: data is split into evaluation episodes and includes several perturbations of test for robustness evaluation Test and train data sets are disjoint with respect to individual questions, but may include overlaps in support and main texts. ##### **Test Perturbations** Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations: - **ButterFingers**: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance - **Emojify**: replaces the input words with the corresponding emojis, preserving their original meaning - **EDA<sub>delete</sub>**: randomly deletes tokens in the text - **EDAswap**: randomly swaps tokens in the text - **BackTranslation**: generates variations of the context through back-translation (ru -> en -> ru) - **AddSent**: generates an extra sentence at the end of the text ##### **General Statistics** The following table contains the number of examples in each data split: | Split | Size (Original/Perturbed) | |----------------|---------------------------| | Train.raw | 1056 | | Test.raw | 1000 | | Train.episodes | 64 | | Test.episodes | 1000 / 7000 | - `Original` - original test data without adversarial perturbations - `Perturbed` - perturbed test, containing both original data and its perturbations #### Dataset Creation ##### **Data Source** The data for the dataset is sampled from Wikipedia and Wikidata. ##### **Data Collection** The data for the dataset is sampled from Wikipedia and Wikidata. The pipeline for dataset creation looks as follows: First, we extract the triplets from Wikidata and search for their intersections. Two triplets (subject, verb, object) are needed to compose an answerable multi-hop question. For instance, the question "Na kakom kontinente nakhoditsya strana, grazhdaninom kotoroy byl Yokhannes Blok?" (In what continent lies the country of which Johannes Block was a citizen?) is formed by a sequence of five graph units: "Blok, Yokhannes" (Block, Johannes), "grazhdanstvo" (country of citizenship), "Germaniya" (Germany), "chast’ sveta" (continent), and "Yevropa" (Europe). Second, several hundreds of the question templates are curated by a few authors manually, which are further used to fine-tune ruT5-large to generate multi-hop questions given a five-fold sequence. Third, the resulting questions undergo paraphrasing and several rounds of manual validation procedures to control the quality and diversity. Finally, each question is linked to two Wikipedia paragraphs, where all graph units appear in the natural language. ## Considerations for Using the Data ### Societal Impact The design of our benchmark allows us to alleviate the problems of a large carbon footprint [(Bender et al., 2021)](https://www.semanticscholar.org/paper/On-the-Dangers-of-Stochastic-Parrots%3A-Can-Language-Bender-Gebru/6d9727f1f058614cada3fe296eeebd8ec4fc512a) and keep computational costs accessible to academic and industrial fields [(Couldry and Mejias, 2020)](https://www.sup.org/books/title/?id=28816). In particular, our evaluation approach does not consider LMs' fine-tuning and relies on a limited amount of episodes, while the number of attacks and perturbations can be adjusted based on the user's needs. However, achieving high robustness and task generalization may require additional computational costs based on the few-shot learning and prompting method. ### Possible Misuse The framework's usage implies working concerning zero-shot and few-shot practices, such as controlling that the test data is excluded from the pre-training corpus. Our train sets Dtrain are publicly available, and it is not anticipated that the users will apply this data for fine-tuning. Lack of control may lead to indicative and biased model evaluation. ### Ethical Considerations Ethics is a multidimensional subject, which remains a complicated problem for LMs and controversial for humans in a multitude of situations. Our approach is closely related to [(Hendrycks et al., 2021)](https://paperswithcode.com/paper/aligning-ai-with-shared-human-values/), who introduce the ETHICS benchmark for evaluating LMs' ability to predict ethical judgments about diverse text situations. Although our methodology spans general concepts in normative ethics, we acknowledge that it can be challenging to perform objective ethical judgments about some situations [(Martineau, 2006t)](https://philpapers.org/rec/MARTOE-8). For instance, judgments about law are based on formal criteria (e.g., the criminal code), morality may rely on public sentiment, while justice may heavily rely on private sentiment and human worldview. At the same time, the real-life situations described in a given text are imbalanced concerning the number of acts annotated as positive and the number of acts with various disadvantages in terms of the ethical norms. In practice, this leads to the moderate inter-annotator agreement and approximate human and model performance estimates. Furthermore, other data-dependent problems can be indicated, such as genre bias and author's bias in specific publicly available text sources. ## Additional Information ### Dataset Curators [Ekaterina Taktasheva](https://github.com/evtaktasheva), [Tatiana Shavrina](https://github.com/TatianaShavrina), [Alena Fenogenova](https://github.com/Alenush), [Denis Shevelev](https://github.com/ghostwheel-git), [Nadezhda Katricheva](https://github.com/aikakysymys), [Maria Tikhonova](https://github.com/MariyaTikhonova), Albina Akhmetgareeva, Oleg Zinkevich, Anastasiia Bashmakova, Svetlana Iordanskaia, Alena Spiridonova, Valentina Kurenshchikova, [Ekaterina Artemova](https://github.com/artemovae), [Vladislav Mikhailov](https://github.com/vmkhlv) ### Licensing Information Apache 2.0 ### Citation Information ``` @article{taktasheva2022tape, title={TAPE: Assessing Few-shot Russian Language Understanding}, author={Taktasheva, Ekaterina and Shavrina, Tatiana and Fenogenova, Alena and Shevelev, Denis and Katricheva, Nadezhda and Tikhonova, Maria and Akhmetgareeva, Albina and Zinkevich, Oleg and Bashmakova, Anastasiia and Iordanskaia, Svetlana and others}, journal={arXiv preprint arXiv:2210.12813}, year={2022} } ```
true
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech) - **Repository:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech) - **Paper:** [BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection](https://arxiv.org/abs/2005.12503) - **Point of Contact:** [Steven Liu](stevhliu@gmail.com) ### Dataset Summary The Korean HateSpeech Dataset is a dataset of 8367 human-labeled entertainment news comments from a popular Korean news aggregation platform. Each comment was evaluated for either social bias (labels: `gender`, `others` `none`), hate speech (labels: `hate`, `offensive`, `none`) or gender bias (labels: `True`, `False`). The dataset was created to support the identification of toxic comments on online platforms where users can remain anonymous. ### Supported Tasks and Leaderboards * `multi-label classification`: The dataset can be used to train a model for hate speech detection. A BERT model can be presented with a Korean entertainment news comment and be asked to label whether it contains social bias, gender bias and hate speech. Users can participate in a Kaggle leaderboard [here](https://www.kaggle.com/c/korean-hate-speech-detection/overview). ### Languages The text in the dataset is in Korean and the associated is BCP-47 code is `ko-KR`. ## Dataset Structure ### Data Instances An example data instance contains a `comments` containing the text of the news comment and then labels for each of the following fields: `contain_gender_bias`, `bias` and `hate`. ```python {'comments':'설마 ㅈ 현정 작가 아니지??' 'contain_gender_bias': 'True', 'bias': 'gender', 'hate': 'hate' } ``` ### Data Fields * `comments`: text from the Korean news comment * `contain_gender_bias`: a binary `True`/`False` label for the presence of gender bias * `bias`: determines the type of social bias, which can be: * `gender`: if the text includes bias for gender role, sexual orientation, sexual identity, and any thoughts on gender-related acts * `others`: other kinds of factors that are considered not gender-related but social bias, including race, background, nationality, ethnic group, political stance, skin color, religion, handicaps, age, appearance, richness, occupations, the absence of military service experience * `none`: a comment that does not incorporate the bias * `hate`: determines how aggressive the comment is, which can be: * `hate`: if the text is defined as an expression that display aggressive stances towards individuals/groups with certain characteristics (gender role, sexual orientation, sexual identity, any thoughts on gender-related acts, race, background, nationality, ethnic group, political stance, skin color, religion, handicaps, age, appearance, richness, occupations, the absence of military service experience, etc.) * `offensive`: if the text contains rude or aggressive contents, can emit sarcasm through rhetorical question or irony, encompass an unethical expression or conveys unidentified rumors * `none`: a comment that does not incorporate hate ### Data Splits The data is split into a training and development (test) set. It contains 8371 annotated comments that are split into 7896 comments in the training set and 471 comments in the test set. ## Dataset Creation ### Curation Rationale The dataset was created to provide the first human-labeled Korean corpus for toxic speech detection from a Korean online entertainment news aggregator. Recently, two young Korean celebrities suffered from a series of tragic incidents that led to two major Korean web portals to close the comments section on their platform. However, this only serves as a temporary solution, and the fundamental issue has not been solved yet. This dataset hopes to improve Korean hate speech detection. ### Source Data #### Initial Data Collection and Normalization A total of 10.4 million comments were collected from an online Korean entertainment news aggregator between Jan. 1, 2018 and Feb. 29, 2020. 1,580 articles were drawn using stratified sampling and the top 20 comments were extracted ranked in order of their Wilson score on the downvote for each article. Duplicate comments, single token comments and comments with more than 100 characters were removed (because they could convey various opinions). From here, 10K comments were randomly chosen for annotation. #### Who are the source language producers? The language producers are users of the Korean online news platform between 2018 and 2020. ### Annotations #### Annotation process Each comment was assigned to three random annotators to assign a majority decision. For more ambiguous comments, annotators were allowed to skip the comment. See Appendix A in the [paper](https://arxiv.org/pdf/2005.12503.pdf) for more detailed guidelines. #### Who are the annotators? Annotation was performed by 32 annotators, consisting of 29 annotators from the crowdsourcing platform DeepNatural AI and three NLP researchers. ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to tackle the social issue of users creating toxic comments on online platforms. This dataset aims to improve detection of toxic comments online. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset is curated by Jihyung Moon, Won Ik Cho and Junbum Lee. ### Licensing Information [N/A] ### Citation Information ``` @inproceedings {moon-et-al-2020-beep title = "{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection", author = "Moon, Jihyung and Cho, Won Ik and Lee, Junbum", booktitle = "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.socialnlp-1.4", pages = "25--31", abstract = "Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.", } ``` ### Contributions Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
true
# Dataset Card for Myanmar_News ## Dataset Description - **Repository:** https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem ### Dataset Summary The Myanmar news dataset contains article snippets in four categories: Business, Entertainment, Politics, and Sport. These were collected in October 2017 by Aye Hninn Khine ### Languages Myanmar/Burmese language ## Dataset Structure ### Data Fields - text - text from article - category - a topic: Business, Entertainment, **Politic**, or **Sport** (note spellings) ### Data Splits One training set (8,116 total rows) ### Source Data #### Initial Data Collection and Normalization Data was collected by Aye Hninn Khine and shared on GitHub with a GPL-3.0 license. Multiple text files were consolidated into one labeled CSV file by Nick Doiron. ## Additional Information ### Dataset Curators Contributors to original GitHub repo: - https://github.com/ayehninnkhine ### Licensing Information GPL-3.0 ### Citation Information See https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem ### Contributions Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
true
# Dataset Card for ohsumed ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://davis.wpi.edu/xmdv/datasets/ohsumed.html - **Repository:** https://trec.nist.gov/data/filtering/t9.filtering.tar.gz - **Paper:** https://link.springer.com/chapter/10.1007/978-1-4471-2099-5_20 - **Leaderboard:** - **Point of Contact:** [William Hersh](mailto:hersh@OHSU.EDU) [Aakash Gupta](mailto:aakashg80@gmail.com) ### Dataset Summary The OHSUMED test collection is a set of 348,566 references from MEDLINE, the on-line medical information database, consisting of titles and/or abstracts from 270 medical journals over a five-year period (1987-1991). The available fields are title, abstract, MeSH indexing terms, author, source, and publication type. The National Library of Medicine has agreed to make the MEDLINE references in the test database available for experimentation, restricted to the following conditions: 1. The data will not be used in any non-experimental clinical, library, or other setting. 2. Any human users of the data will explicitly be told that the data is incomplete and out-of-date. Please check this [readme](https://trec.nist.gov/data/filtering/README.t9.filtering) for more details ### Supported Tasks and Leaderboards [Text Classification](https://paperswithcode.com/sota/text-classification-on-ohsumed) ### Languages The text is primarily in English. The BCP 47 code is `en` ## Dataset Structure ### Data Instances ``` {'seq_id': 7770, 'medline_ui': 87120420, 'mesh_terms': 'Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; Neck/*BS; Vertebral Artery.', 'title': 'Arteriovenous fistulas of the large vessels of the neck: nonsurgical percutaneous occlusion.', 'publication_type': 'JOURNAL ARTICLE.', 'abstract': 'We describe the nonsurgical treatment of arteriovenous fistulas of the large vessels in the neck using three different means of endovascular occlusion of these large lesions, which are surgically difficult to approach and treat.', 'author': 'Vitek JJ; Keller FS.', 'source': 'South Med J 8705; 80(2):196-200'} ``` ### Data Fields Here are the field definitions: - seg_id: sequential identifier (important note: documents should be processed in this order) - medline_ui: MEDLINE identifier (UI) (<DOCNO> used for relevance judgements) - mesh_terms: Human-assigned MeSH terms (MH) - title: Title (TI) - publication_type : Publication type (PT) - abstract: Abstract (AB) - author: Author (AU) - source: Source (SO) Note: some abstracts are truncated at 250 words and some references have no abstracts at all (titles only). We do not have access to the full text of the documents. ### Data Splits The files are Train/ Test. Where the training has files from 1987 while the test files has abstracts from 1988-91 Total number of files: Train: 54710 Test: 348567 ## Dataset Creation ### Curation Rationale The OHSUMED document collection was obtained by William Hersh (hersh@OHSU.EDU) and colleagues for the experiments described in the papers below. [Check citation](#citation-information) ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The test collection was built as part of a study assessing the use of MEDLINE by physicians in a clinical setting (Hersh and Hickam, above). Novice physicians using MEDLINE generated 106 queries. Only a subset of these queries were used in the TREC-9 Filtering Track. Before they searched, they were asked to provide a statement of information about their patient as well as their information need. The data was collected by William Hersh & colleagues ### Annotations #### Annotation process The existing OHSUMED topics describe actual information needs, but the relevance judgements probably do not have the same coverage provided by the TREC pooling process. The MeSH terms do not directly represent information needs, rather they are controlled indexing terms. However, the assessment should be more or less complete and there are a lot of them, so this provides an unusual opportunity to work with a very large topic sample. The topic statements are provided in the standard TREC format #### Who are the annotators? Each query was replicated by four searchers, two physicians experienced in searching and two medical librarians. The results were assessed for relevance by a different group of physicians, using a three point scale: definitely, possibly, or not relevant. The list of documents explicitly judged to be not relevant is not provided here. Over 10% of the query-document pairs were judged in duplicate to assess inter-observer reliability. For evaluation, all documents judged here as either possibly or definitely relevant were considered relevant. TREC-9 systems were allowed to distinguish between these two categories during the learning process if desired. ### Personal and Sensitive Information No PII data is present in the train, test or query files. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [Aakash Gupta](mailto:aakashg80@gmail.com) *Th!nkEvolve Consulting* and Researcher at CoronaWhy ### Licensing Information CC BY-NC 4.0 ### Citation Information Hersh WR, Buckley C, Leone TJ, Hickam DH, OHSUMED: An interactive retrieval evaluation and new large test collection for research, Proceedings of the 17th Annual ACM SIGIR Conference, 1994, 192-201. Hersh WR, Hickam DH, Use of a multi-application computer workstation in a clinical setting, Bulletin of the Medical Library Association, 1994, 82: 382-389. ### Contributions Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
true
# Dataset Card for Turkish Product Reviews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data) - **Point of Contact:** [Fatih Barmanbay](https://github.com/fthbrmnby) ### Dataset Summary This Turkish Product Reviews Dataset contains 235.165 product reviews collected online. There are 220.284 positive, 14881 negative reviews. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Turkish. ## Dataset Structure ### Data Instances **Example 1:** **sentence:** beklentimin altında bir ürün kaliteli değil **sentiment:** 0 (negative) **Example 2:** **sentence:** fiyat ve performans olarak gayet iyi **sentiment:** 1 (positive) ### Data Fields - **sentence**(string) : Contatins turkish product review - **sentiment**(int) : 0 (negative) or 1 (positive) ### Data Splits It is not divided into Train set and Test set. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by [Fatih Barmanbay](https://github.com/fthbrmnby). ### Licensing Information The data is under the [CC-BY-SA-4.0 License](https://github.com/fthbrmnby/turkish-text-data/blob/master/LICENCE) ### Citation Information No citation available for this dataset. ### Contributions Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset.
false
# Dataset Card for RONEC ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/dumitrescustefan/ronec - **Repository:** https://github.com/dumitrescustefan/ronec - **Paper:** https://arxiv.org/abs/1909.01247 - **Leaderboard:** https://lirobenchmark.github.io/ - **Point of Contact:** [Stefan](dumitrescu.stefan@gmail.com) and [Andrei-Marius](avram.andreimarius@gmail.com) ### Dataset Summary RONEC, at version 2.0, holds 12330 sentences with over 0.5M tokens, annotated with 15 classes, to a total of 80.283 distinctly annotated entities. The corpus has the following classes and distribution in the train/valid/test splits: | Classes | Total | Train | | Valid | | Test | | |------------- |:------: |:------: |:-------: |:------: |:-------: |:------: |:-------: | | | # | # | % | # | % | # | % | | PERSON | **26130** | 19167 | 73.35 | 2733 | 10.46 | 4230 | 16.19 | | GPE | **11103** | 8193 | 73.79 | 1182 | 10.65 | 1728 | 15.56 | | LOC | **2467** | 1824 | 73.94 | 270 | 10.94 | 373 | 15.12 | | ORG | **7880** | 5688 | 72.18 | 880 | 11.17 | 1312 | 16.65 | | LANGUAGE | **467** | 342 | 73.23 | 52 | 11.13 | 73 | 15.63 | | NAT_REL_POL | **4970** | 3673 | 73.90 | 516 | 10.38 | 781 | 15.71 | | DATETIME | **9614** | 6960 | 72.39 | 1029 | 10.7 | 1625 | 16.9 | | PERIOD | **1188** | 862 | 72.56 | 129 | 10.86 | 197 | 16.58 | | QUANTITY | **1588** | 1161 | 73.11 | 181 | 11.4 | 246 | 15.49 | | MONEY | **1424** | 1041 | 73.10 | 159 | 11.17 | 224 | 15.73 | | NUMERIC | **7735** | 5734 | 74.13 | 814 | 10.52 | 1187 | 15.35 | | ORDINAL | **1893** | 1377 | 72.74 | 212 | 11.2 | 304 | 16.06 | | FACILITY | **1126** | 840 | 74.6 | 113 | 10.04 | 173 | 15.36 | | WORK_OF_ART | **1596** | 1157 | 72.49 | 176 | 11.03 | 263 | 16.48 | | EVENT | **1102** | 826 | 74.95 | 107 | 9.71 | 169 | 15.34 | ### Supported Tasks and Leaderboards The corpus is meant to train Named Entity Recognition models for the Romanian language. Please see the leaderboard here : [https://lirobenchmark.github.io/](https://lirobenchmark.github.io/) ### Languages RONEC is in Romanian (`ro`) ## Dataset Structure ### Data Instances The dataset is a list of instances. For example, an instance looks like: ```json { "id": 10454, "tokens": ["Pentru", "a", "vizita", "locația", "care", "va", "fi", "pusă", "la", "dispoziția", "reprezentanților", "consiliilor", "județene", ",", "o", "delegație", "a", "U.N.C.J.R.", ",", "din", "care", "a", "făcut", "parte", "și", "dl", "Constantin", "Ostaficiuc", ",", "președintele", "C.J.T.", ",", "a", "fost", "prezentă", "la", "Bruxelles", ",", "între", "1-3", "martie", "."], "ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "B-ORG", "O", "O", "O", "O", "O", "B-GPE", "O", "B-PERIOD", "I-PERIOD", "I-PERIOD", "O"], "ner_ids": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 0, 5, 0, 19, 20, 20, 0], "space_after": [true, true, true, true, true, true, true, true, true, true, true, true, false, true, true, true, true, false, true, true, true, true, true, true, true, true, true, false, true, true, false, true, true, true, true, true, false, true, true, true, false, false] } ``` ### Data Fields The fields of each examples are: - ``tokens`` are the words of the sentence. - ``ner_tags`` are the string tags assigned to each token, following the BIO2 format. For example, the span ``"între", "1-3", "martie"`` has three tokens, but is a single class ``PERIOD``, marked as ``"B-PERIOD", "I-PERIOD", "I-PERIOD"``. - ``ner_ids`` are the integer encoding of each tag, to be compatible with the standard and to be quickly used for model training. Note that each ``B``-starting tag is odd, and each ``I``-starting tag is even. - ``space_after`` is used to help if there is a need to detokenize the dataset. A ``true`` value means that there is a space after the token on that respective position. ### Data Splits The dataset is split in train: 9000 sentences, dev: 1330 sentence and test: 2000 sentences. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data *The corpus data source represents sentences that are free of copyright, taken from older datasets like the freely available SEETimes and more recent datasources like the Romanian Wikipedia or the Common Crawl.* #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations The corpus was annotated with the following classes: 1. PERSON - proper nouns, including common nouns or pronouns if they refer to a person. (e.g. 'sister') 2. GPE - geo political entity, like a city or a country; has to have a governance form 3. LOC - location, like a sea, continent, region, road, address, etc. 4. ORG - organization 5. LANGUAGE - language (e.g. Romanian, French, etc.) 6. NAT_REL_POL - national, religious or political organizations 7. DATETIME - a time and date in any format, including references to time (e.g. 'yesterday') 8. PERIOD - a period that is precisely bounded by two date times 9. QUANTITY - a quantity that is not numerical; it has a unit of measure 10. MONEY - a monetary value, numeric or otherwise 11. NUMERIC - a simple numeric value, represented as digits or words 12. ORDINAL - an ordinal value like 'first', 'third', etc. 13. FACILITY - a named place that is easily recognizable 14. WORK_OF_ART - a work of art like a named TV show, painting, etc. 15. EVENT - a named recognizable or periodic major event #### Annotation process The corpus was annotated by 3 language experts, and was cross-checked for annotation consistency. The annotation took several months to complete, but the result is a high quality dataset. #### Who are the annotators? Stefan Dumitrescu (lead). ### Personal and Sensitive Information All the source data is already freely downloadable and usable online, so there are no privacy concerns. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information MIT License ### Citation Information ```bibtex @article{dumitrescu2019introducing, title={Introducing RONEC--the Romanian Named Entity Corpus}, author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius}, journal={arXiv preprint arXiv:1909.01247}, year={2019} } ``` ### Contributions Thanks to [@iliemihai](https://github.com/iliemihai) for adding v1.0 of the dataset.
false
# Dataset Card for WikiText-TL-39 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Filipino Text Benchmarks](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Repository:** - **Paper:** [Evaluating language model finetuning techniques for low-resource languages](https://arxiv.org/abs/1907.00409) - **Leaderboard:** - **Point of Contact:** Jan Christian Blaise Cruz (jan_christian_cruz@dlsu.edu.ph) ### Dataset Summary Large scale, unlabeled text dataset with 39 Million tokens in the training set. Inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). TL means "Tagalog." Published in Cruz & Cheng (2019). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Filipino/Tagalog ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `text` (`str`) The dataset is in plaintext and only has one field ("text") as it is compiled for language modeling. ### Data Splits Split | Documents | Tokens ------|-----------|------- Train | 120,975 | 39M Valid | 25,919 | 8M Test | 25,921 | 8M Please see the paper for more details on the dataset splits ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Tagalog Wikipedia #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset.
false
# Plastic in river This dataset is an export of the annotated assets from the [Kili's Community Challenge - Plastic in River dataset](https://kili-technology.com/blog/kili-s-community-challenge-plastic-in-river-dataset). The Hugging Face dataset will be updated every day during the challenge with the latest annotations.
false
# Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. ### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
false
# Dataset Card for "Clean-Instruct" [yahma/alpaca-cleaned](https://hf.co/datasets/yahma/alpaca-cleaned) + [crumb/gpt4all-clean](https://hf.co/datasets/crumb/gpt4all-clean) + GPTeacher-Instruct-Dedup It isn't perfect, but it's 443k high quality semi-cleaned instructions without "As an Ai language model". ```python from datasets import load_dataset dataset = load_dataset("crumb/clean-instruct", split="train") def promptify(example): if example['input']!='': return {"text": f"<instruction> {example['instruction']} <input> {example['input']} <output> {example['output']}"} return {"text": f"<instruction> {example['instruction']} <output> {example['output']}"} dataset = dataset.map(promptify, batched=False) dataset = dataset.remove_columns(["instruction", "input", "output"]) ```
true
# Dataset Card for fever_gold_evidence ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/copenlu/fever-adversarial-attacks - **Repository:** https://github.com/copenlu/fever-adversarial-attacks - **Paper:** https://aclanthology.org/2020.emnlp-main.256/ - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Dataset for training classification-only fact checking with claims from the FEVER dataset. This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020 The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims. For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109113." More details can be found in https://github.com/copenlu/fever-adversarial-attacks ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @inproceedings{atanasova-etal-2020-generating, title = "Generating Label Cohesive and Well-Formed Adversarial Claims", author = "Atanasova, Pepa and Wright, Dustin and Augenstein, Isabelle", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.256", doi = "10.18653/v1/2020.emnlp-main.256", pages = "3168--3177", abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.", } ```
true
# Dataset Card for Wongnai_Reviews ## Dataset Description - **Repository:** https://github.com/wongnai/wongnai-corpus ### Dataset Summary The Wongnai Review dataset contains restaurant reviews and ratings, almost entirely in Thai language. The reviews are in 5 classes ranging from 1 to 5 stars. This dataset was featured in a Kaggle challenge https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/overview ### Languages Thai ## Dataset Structure ### Data Fields - review_body - text of review - star_rating - an integer star rating (1-5) or -1 (for test) ### Data Splits Designated train (40,000 reviews) and test (6,204) sets. ### Source Data #### Initial Data Collection and Normalization Data was collected by Wongnai from business reviews on their website, and shared on GitHub and Kaggle. ### Annotations The reviews are users' own star ratings, so no additional annotation was needed. ## Additional Information ### Dataset Curators Contributors to original GitHub repo: - Ekkalak Thongthanomkul - Tanapol Nearunchorn - Yuwat Chuesathuchon ### Licensing Information LGPL-3.0 ### Citation Information See https://github.com/wongnai/wongnai-corpus ### Contributions Thanks to [@mapmeld](https://github.com/mapmeld), [@cstorm125](https://github.com/cstorm125) for adding this dataset.
false
<div align="center"> <img width="640" alt="keremberke/chest-xray-classification" src="https://huggingface.co/datasets/keremberke/chest-xray-classification/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['NORMAL', 'PNEUMONIA'] ``` ### Number of Images ```json {'train': 4077, 'test': 582, 'valid': 1165} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/chest-xray-classification", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2?ref=roboflow2huggingface) ### Citation ``` ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.ai on March 31, 2022 at 3:11 PM GMT It includes 5824 images. Pneumonia are annotated in folder format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 640x640 (Stretch) No image augmentation techniques were applied.
true
# Dataset Card for FLUE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](https://github.com/getalp/Flaubert/tree/master/flue) - **Repository:**[github](https://github.com/getalp/Flaubert/tree/master/flue) - **Paper:**[paper](https://arxiv.org/abs/1912.05372) - **Leaderboard:**[leaderboard](https://github.com/getalp/Flaubert/tree/master/flue/leaderboard) - **Point of Contact:**[Hang Le](thi-phuong-hang.le@univ-grenoble-alpes.fr) ### Dataset Summary FLUE is an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. The tasks and data are obtained from existing works, please refer to our Flaubert paper for a complete list of references. ### Supported Tasks and Leaderboards The supported tasks are: Text Classification, Paraphrasing, Natural Language Inference, Constituency Parsing, Dependency Parsing, Verb Sense Disambiguation and Noun Sense Disambiguation ### Languages The datasets are all in French. ## Dataset Structure ### Text Classification (CLS) This is a binary classification task. It consists in classifying Amazon reviews for three product categories: books, DVD, and music. Each sample contains a review text and the associated rating from 1 to 5 stars. Reviews rated above 3 is labeled as positive, and those rated less than 3 is labeled as negative. #### Data Instances An instance looks like: ``` { 'idx': 1, 'label': 0, 'text': 'Bilan plus que mitigé pour cet album fourre-tout qui mêle quelques bonnes idées (les parodies d\'oeuvres d\'art) et des scènetes qui ne font que faire écho paresseusement aux précédents albums. Uderzo n\'a pas pris de risque pour cet album, mais, au vu des précédents, on se dit que c\'est peut-être un moindre mal ... L\'album semble n\'avoir été fait que pour permettre à Uderzo de rappeler avec une insistance suspecte qu\'il est bien l\'un des créateurs d\'Astérix (comme lorsqu\'il se met en scène lui même dans la BD) et de traiter ses critiques d\' "imbéciles" dans une préface un rien aigrie signée "Astérix". Préface dans laquelle Uderzo feint de croire que ce qu\'on lui reproche est d\'avoir fait survivre Asterix à la disparition de Goscinny (reproche naturellement démenti par la fidélité des lecteurs - démonstration imparable !). On aurait tant aimé qu\'Uderzo accepte de s\'entourer d\'un scénariste compétent et respectueux de l\'esprit Goscinnien (cela doit se trouver !) et nous propose des albums plus ambitieux ...' } ``` #### Data Fields The dataset is composed of two fields: - **text**: the field that represents the text to classify. - **label**: the sentiment represented by the text, here **positive** or **negative**. #### Data Splits The train and test sets are balanced, including around 1k positive and 1k negative reviews for a total of 2k reviews in each dataset. We take the French portion to create the binary text classification task in FLUE and report the accuracy on the test set. ### Paraphrasing (PAWS-X) The task consists in identifying whether the two sentences in a pair are semantically equivalent or not. #### Data Instances An instance looks like: ``` { 'idx': 1, 'label': 0, 'sentence1': "À Paris, en octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, lui demandant un passeport pour retourner en Angleterre en passant par l'Écosse.", 'sentence2': "En octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, à Paris, et lui demanda un passeport pour retourner en Écosse par l'Angleterre." } ``` #### Data Fields The dataset is compososed of three fields: - **sentence1**: The first sentence of an example - **sentence2**: The second sentence of an example - **lalel**: **0** if the two sentences are not paraphrasing each other, **1** otherwise. #### Data Splits The train set includes 49.4k examples, the dev and test sets each comprises nearly 2k examples. We take the related datasets for French to perform the paraphrasing task and report the accuracy on the test set. ### Natural Language Inference (XNLI) The Natural Language Inference (NLI) task, also known as recognizing textual entailment (RTE), is to determine whether a premise entails, contradicts or neither entails nor contradicts a hypothesis. We take the French part of the XNLI corpus to form the development and test sets for the NLI task in FLUE. #### Data Instances An instance looks like: ``` { 'idx': 1, 'label': 2, 'hypo': 'Le produit et la géographie sont ce qui fait travailler la crème de la crème .', 'premise': "L' écrémage conceptuel de la crème a deux dimensions fondamentales : le produit et la géographie ." } ``` #### Data Fields The dataset is composed of three fields: - **premise**: Premise sentence. - **hypo**: Hypothesis sentence. - **label**: **contradiction** if the two sentences are contradictory, **entailment** if the two sentences entails, **neutral** if they neither entails or contradict each other. #### Data Splits The train set includes 392.7k examples, the dev and test sets comprises 2.5k and 5k examples respectively. We take the related datasets for French to perform the NLI task and report the accuracy on the test set. ### Word Sense Disambiguation for Verbs (WSD-V) The FrenchSemEval (FSE) dataset aims to evaluate the Word Sense Disambiguation for Verbs task for the French language. Extracted from Wiktionary. #### Data Instances An instance looks like: ``` { 'idx': 'd000.s001', 'sentence': ['"', 'Ce', 'ne', 'fut', 'pas', 'une', 'révolution', '2.0', ',', 'ce', 'fut', 'une', 'révolution', 'de', 'rue', '.'], 'fine_pos_tags': [27, 26, 18, 13, 18, 0, 6, 22, 27, 26, 13, 0, 6, 4, 6, 27], 'lemmas': ['"', 'ce', 'ne', 'être', 'pas', 'un', 'révolution', '2.0', ',', 'ce', 'être', 'un', 'révolution', 'de', 'rue', '.'], 'pos_tags': [13, 11, 14, 0, 14, 9, 15, 4, 13, 11, 0, 9, 15, 7, 15, 13], 'disambiguate_labels': ['__ws_1_2.0__adj__1'], 'disambiguate_tokens_ids': [7], } ``` #### Data Fields The dataset is composed of six fields: - **sentence**: The sentence to process split in tokens. - **pos_tags**: The corresponding POS tags for each tokens. - **lemmas**: The corresponding lemma for each tokens. - **fine_pos_tags**: Fined (more specific) POS tags for each tokens. - **disambiguate_tokens_ids**: The ID of the token in the sentence to disambiguate. - **disambiguate_labels**: The label in the form of **sentenceID __ws_sentence-number_token__pos__number-of-time-the-token-appeared-across-all-the-sentences** (i.e. **d000.s404.t000 __ws_2_agir__verb__1**). #### Data Splits The train set includes 269821 examples, the test set includes 3121 examples. ## Considerations for Using the Data ### Social Impact of Dataset The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. ## Additional Information ### Licensing Information The licenses are: - The licensing status of the data, especially the news source text, is unknown for CLS - *The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.* for PAWS-X - CC BY-NC 4.0 for XNLI - The licensing status of the data, especially the news source text, is unknown for Verb Sense Disambiguation ### Citation Information ``` @misc{le2019flaubert, title={FlauBERT: Unsupervised Language Model Pre-training for French}, author={Hang Le and Loïc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Benoît Crabbé and Laurent Besacier and Didier Schwab}, year={2019}, eprint={1912.05372}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@jplu](https://github.com/jplu) for adding this dataset.
false
# Dataset Card for xP3x ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/bigscience-workshop/xmtf - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com) ### Dataset Summary > xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more! It is used for training future contenders of mT0 & BLOOMZ at project Aya @[C4AI](https://cohere.for.ai/) 🧡 > - **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3) together with the file in this repository named `xp3x_create.py`. We provide this version to save processing time. - **Languages:** 277 - **xP3 Dataset Family:** <table> <tr> <th>Name</th> <th>Explanation</th> <th>Example models</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t> <td>Mixture of 17 tasks in 277 languages with English prompts</td> <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t> <td>Mixture of 13 training tasks in 46 languages with English prompts</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t> <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t> <td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> <td></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t> <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t> <td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> </tr> </table> ## Dataset Structure ### Data Instances An example looks as follows: ```json { 'inputs': '11月、遂にクロームはファイヤーフォックスを引き離し始めた。_はインターネットユーザーの評価が高まったのだ。\nReplace the _ in the above sentence with the correct option: \n- ファイヤーフォックス\n- クローム', 'targets': 'クローム', 'language': 'jpn_Jpan', 'split': 'test', 'template': 'Replace', 'dataset': 'Muennighoff/xwinograd', 'config': 'jp' } ``` ### Data Fields The data fields are the same among all splits: - `inputs`: the natural language input fed to the model - `targets`: the natural language target that the model has to generate - `language`: The language code. The codes are an extension of the FLORES-200 codes, where the first part is the language code and the second part the script code. - `template`: The name of the prompt used. - `dataset`: The Hugging Face dataset identifier of where the data stems from. - `config`: The config of the Hugging Face dataset. ### Usage The dataset has 680 gigabytes and 530 million samples. You may want to filter it and then deduplicate depending on your needs. Loading by language: ```python # pip install -q datasets from datasets import load_dataset ds = load_dataset("Muennighoff/xP3x", "zho_Hans", streaming=True) # Use streaming to not download all at once for x in ds["train"]: print(x) break ``` You can then filter down by the data fields to e.g. only get certain configs or datasets. As every dataset-config-template is its own jsonl file, you can also decide on the datasets, configs and templates you want and only download them. For example, to download all Japanese xwinograd samples, you could do: ```python # pip install -q datasets from datasets import load_dataset import multiprocessing # pip install --upgrade huggingface-hub from huggingface_hub import HfFileSystem, hf_hub_url fs = HfFileSystem() fps = fs.glob(f"datasets/Muennighoff/xP3x/data/jpn_Jpan/*xwinograd*") resolved_paths = [fs.resolve_path(file) for file in fps] data_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths] ds = load_dataset("json", data_files=data_files, num_proc=8)["train"] ``` ### Data Splits |Language|Code|Kilobytes|%|Samples|%| |--------|------:|------:|-:|---:|-:| |Emilian|egl_Latn|104|0.0|402|0.0| |Swiss German|gsw_Latn|104|0.0|408|0.0| |Novial|nov_Latn|116|0.0|432|0.0| |Ainu (Latin script)|ain_Latn|120|0.0|410|0.0| |Chamorro|cha_Latn|120|0.0|452|0.0| |Gothic|got_Goth|120|0.0|402|0.0| |Prussian|prg_Latn|120|0.0|424|0.0| |Picard|pcd_Latn|140|0.0|530|0.0| |Northern Frisian|frr_Latn|156|0.0|554|0.0| |Uzbek (Latin script)|uzb_Latn|156|0.0|600|0.0| |Ottoman Turkish (Latin script)|ota_Latn|188|0.0|632|0.0| |Swahili (macrolanguage)|swa_Latn|212|0.0|772|0.0| |Talossan|tzl_Latn|220|0.0|836|0.0| |Kven Finnish|fkv_Latn|260|0.0|910|0.0| |Zaza|zza_Latn|260|0.0|1,056|0.0| |Frisian|fry_Latn|268|0.0|956|0.0| |Piemontese|pms_Latn|276|0.0|998|0.0| |Kalmyk|xal_Cyrl|288|0.0|976|0.0| |Hunsrik|hrx_Latn|352|0.0|1,380|0.0| |Romany|rom_Latn|364|0.0|1,410|0.0| |Ancient Greek (to 1453)|grc_Grek|392|0.0|1,226|0.0| |Tase Naga|nst_Latn|424|0.0|1,608|0.0| |Albanian|sqi_Latn|596|0.0|2,216|0.0| |Guadeloupean Creole French|gcf_Latn|608|0.0|2,326|0.0| |Yakut|sah_Cyrl|608|0.0|1,986|0.0| |Ho (Latin script)|hoc_Latn|632|0.0|2,634|0.0| |Khasi|kha_Latn|676|0.0|2,664|0.0| |Algerian Arabic|arq_Arab|688|0.0|2,278|0.0| |Lower Sorbian|dsb_Latn|692|0.0|2,596|0.0| |Chuvash|chv_Cyrl|716|0.0|2,446|0.0| |Old Russian|orv_Cyrl|752|0.0|2,586|0.0| |Pampanga|pam_Latn|784|0.0|2,984|0.0| |Kurdish (Latin script)|kur_Latn|796|0.0|3,050|0.0| |Ottoman Turkish|ota_Arab|832|0.0|2,772|0.0| |Kotava|avk_Latn|864|0.0|3,118|0.0| |Upper Sorbian|hsb_Latn|900|0.0|3,474|0.0| |Buryat|bua_Cyrl|924|0.0|3,218|0.0| |Swabian|swg_Latn|996|0.0|3,366|0.0| |Coastal Kadazan|kzj_Latn|1,136|0.0|3,766|0.0| |Chavacano|cbk_Latn|1,352|0.0|4,994|0.0| |Quechua|que_Latn|1,704|0.0|5,312|0.0| |Lingua Franca Nova (Cyrillic script)|lfn_Cyrl|1,740|0.0|5,458|0.0| |Gronings|gos_Latn|1,864|0.0|7,462|0.0| |Volapük|vol_Latn|1,948|0.0|7,712|0.0| |Yue Chinese (Simplified)|yue_Hans|2,300|0.0|7,872|0.0| |Mari (Russia)|chm_Cyrl|2,540|0.0|7,496|0.0| |Kadazan Dusun|dtp_Latn|2,548|0.0|8,892|0.0| |Breton|bre_Latn|3,048|0.0|11,868|0.0| |Ladino|lad_Latn|3,224|0.0|11,916|0.0| |Cornish|cor_Latn|3,492|0.0|13,880|0.0| |Interlingue|ile_Latn|3,700|0.0|14,468|0.0| |Wu Chinese|wuu_Hans|3,784|0.0|13,062|0.0| |Japanese (Katakana)|jpn_Kana|4,208|0.0|13,942|0.0| |Ido|ido_Latn|6,180|0.0|23,742|0.0| |Yiddishi|yid_Hebr|9,896|0.0|34,412|0.01| |Klingon|tlh_Latn|11,716|0.0|46,010|0.01| |Lingua Franca Nova|lfn_Latn|13,328|0.0|46,826|0.01| |Lojban|jbo_Latn|17,468|0.0|66,694|0.01| |Low German|nds_Latn|18,364|0.0|68,098|0.01| |Interlingua (International Auxiliary Language Association)|ina_Latn|25,700|0.0|76,584|0.01| |Java|java|25,904|0.0|13,551|0.0| |Japanese (Kanji)|jpn_Hani|26,292|0.0|89,978|0.02| |Norwegian|nor_Latn|26,724|0.0|93,116|0.02| |Toki Pona|toki_Latn|26,808|0.0|97,170|0.02| |Latin|lat_Latn|28,900|0.0|101,390|0.02| |Serbo-Croatian|hbs_Latn|29,452|0.0|105,748|0.02| |Nigerian Pidgin|pcm_Latn|145,872|0.02|88,992|0.02| |Azerbaijani (South or North; Latin script)|aze_Latn|147,564|0.02|77,875|0.01| |Serbian (Latin script)|srp_Latn|179,072|0.03|131,101|0.02| |Japanese (Hiragana)|jpn_Hira|188,944|0.03|628,758|0.12| |Berber (Latin script)|ber_Latn|201,464|0.03|693,602|0.13| |Jupyter Notebook|jupyter-notebook|416,056|0.06|400,000|0.08| |Yue Chinese|yue_Hant|613,352|0.09|1,227,429|0.23| |Haitian Creole|hat_Latn|629,420|0.09|1,228,281|0.23| |Mossi|mos_Latn|630,416|0.09|1,223,481|0.23| |Pangasinan|pag_Latn|630,684|0.09|1,223,481|0.23| |Twi|twi_Latn|631,172|0.09|1,223,481|0.23| |Bosnian|bos_Latn|633,016|0.09|1,224,479|0.23| |Ewe|ewe_Latn|633,292|0.09|1,223,481|0.23| |Bambara|bam_Latn|634,520|0.09|1,223,481|0.23| |Javanese|jav_Latn|635,248|0.09|1,224,003|0.23| |Southwestern Dinka|dik_Latn|635,416|0.09|1,223,481|0.23| |Kabuverdianu|kea_Latn|636,144|0.09|1,223,481|0.23| |Dyula|dyu_Latn|636,464|0.09|1,223,481|0.23| |Venetian|vec_Latn|637,412|0.09|1,223,481|0.23| |Chokwe|cjk_Latn|637,532|0.09|1,223,481|0.23| |Latgalian|ltg_Latn|637,612|0.09|1,223,481|0.23| |Sundanese|sun_Latn|638,120|0.09|1,223,481|0.23| |Asturian|ast_Latn|638,708|0.09|1,223,481|0.23| |Akan|aka_Latn|639,648|0.09|1,223,481|0.23| |Mizo|lus_Latn|639,680|0.09|1,223,481|0.23| |Guarani|grn_Latn|641,540|0.09|1,225,647|0.23| |Limburgish|lim_Latn|642,368|0.09|1,223,481|0.23| |Faroese|fao_Latn|642,432|0.09|1,224,067|0.23| |Buginese|bug_Latn|643,472|0.09|1,223,481|0.23| |Sango|sag_Latn|643,596|0.09|1,223,481|0.23| |Luba-Kasai|lua_Latn|643,640|0.09|1,223,481|0.23| |Papiamento|pap_Latn|643,648|0.09|1,223,481|0.23| |Silesian|szl_Latn|644,608|0.09|1,223,481|0.23| |Sicilian|scn_Latn|645,636|0.1|1,223,481|0.23| |Kimbundu|kmb_Latn|645,964|0.1|1,223,481|0.23| |Basque|eus_Latn|646,084|0.1|1,246,877|0.23| |Balinese|ban_Latn|646,408|0.1|1,223,481|0.23| |Norwegian Nynorsk|nno_Latn|646,996|0.1|1,229,699|0.23| |Central Aymara|ayr_Latn|647,236|0.1|1,223,481|0.23| |Tamasheq (Latin script)|taq_Latn|648,656|0.1|1,223,481|0.23| |Kikongo|kon_Latn|648,992|0.1|1,223,481|0.23| |Friulian|fur_Latn|649,272|0.1|1,223,481|0.23| |Ayacucho Quechua|quy_Latn|649,992|0.1|1,223,481|0.23| |Maori|mri_Latn|650,336|0.1|1,224,211|0.23| |Icelandic|isl_Latn|650,372|0.1|1,246,623|0.23| |Galician|glg_Latn|652,088|0.1|1,233,291|0.23| |Catalan|cat_Latn|652,116|0.1|1,241,381|0.23| |Lombard|lmo_Latn|652,120|0.1|1,223,481|0.23| |Banjar (Latin script)|bjn_Latn|652,372|0.1|1,223,481|0.23| |Fijian|fij_Latn|652,796|0.1|1,223,481|0.23| |Crimean Tatar|crh_Latn|653,920|0.1|1,223,895|0.23| |Northern Kurdish|kmr_Latn|654,108|0.1|1,223,481|0.23| |Ligurian|lij_Latn|654,432|0.1|1,223,481|0.23| |Occitan|oci_Latn|655,676|0.1|1,227,945|0.23| |Turkmen|tuk_Latn|658,672|0.1|1,241,205|0.23| |Luxembourgish|ltz_Latn|658,768|0.1|1,225,339|0.23| |Cebuano|ceb_Latn|659,124|0.1|1,226,039|0.23| |Samoan|smo_Latn|659,704|0.1|1,223,481|0.23| |Sardinian|srd_Latn|660,000|0.1|1,223,481|0.23| |Bemba|bem_Latn|660,504|0.1|1,223,481|0.23| |Minangkabau (Latin script)|min_Latn|660,672|0.1|1,223,481|0.23| |Acehnese (Latin script)|ace_Latn|661,084|0.1|1,223,481|0.23| |Ilocano|ilo_Latn|661,184|0.1|1,227,663|0.23| |Irish|gle_Latn|661,660|0.1|1,227,357|0.23| |Fon|fon_Latn|663,124|0.1|1,223,481|0.23| |Waray|war_Latn|664,120|0.1|1,226,503|0.23| |Norwegian Bokmål|nob_Latn|666,240|0.1|1,300,607|0.24| |Tosk Albanian|als_Latn|666,692|0.1|1,223,481|0.23| |Standard Malay|zsm_Latn|667,088|0.1|1,270,715|0.24| |Southern Sotho|sot_Latn|667,728|0.1|1,223,481|0.23| |Kabyle|kab_Latn|668,128|0.1|1,346,605|0.25| |Jingpho|kac_Latn|669,464|0.1|1,223,481|0.23| |Lingala|lin_Latn|670,428|0.1|1,323,481|0.25| |Wolof|wol_Latn|670,568|0.1|1,373,481|0.26| |Central Kanuri (Latin script)|knc_Latn|670,800|0.1|1,223,481|0.23| |Kikuyu|kik_Latn|672,096|0.1|1,223,481|0.23| |Tok Pisin|tpi_Latn|672,916|0.1|1,223,481|0.23| |Nuer|nus_Latn|673,632|0.1|1,223,481|0.23| |Tagalog|tgl_Latn|673,684|0.1|1,247,417|0.23| |Tumbuka|tum_Latn|676,948|0.1|1,223,481|0.23| |Plateau Malagasy|plt_Latn|677,852|0.1|1,223,481|0.23| |Afrikaans|afr_Latn|679,164|0.1|1,337,091|0.25| |North Azerbaijani|azj_Latn|679,820|0.1|1,223,481|0.23| |Kabiyè|kbp_Latn|684,880|0.1|1,223,481|0.23| |Modern Standard Arabic (Romanized)|arb_Latn|685,408|0.1|1,223,481|0.23| |Scottish Gaelic|gla_Latn|708,620|0.1|1,243,627|0.23| |Sindhi|snd_Arab|718,680|0.11|1,223,481|0.23| |North Levantine Arabic|apc_Arab|720,048|0.11|1,223,481|0.23| |Tunisian Arabic|aeb_Arab|720,360|0.11|1,223,481|0.23| |South Levantine Arabic|ajp_Arab|720,488|0.11|1,223,481|0.23| |Dari|prs_Arab|720,500|0.11|1,223,481|0.23| |Moroccan Arabic|ary_Arab|722,904|0.11|1,223,481|0.23| |Egyptian Arabic|arz_Arab|723,356|0.11|1,223,481|0.23| |Najdi Arabic|ars_Arab|725,784|0.11|1,223,481|0.23| |Acehnese (Arabic script)|ace_Arab|726,272|0.11|1,223,481|0.23| |Mesopotamian Arabic|acm_Arab|728,472|0.11|1,223,481|0.23| |Ta’izzi-Adeni Arabic|acq_Arab|734,780|0.11|1,223,481|0.23| |South Azerbaijani|azb_Arab|735,728|0.11|1,223,481|0.23| |Central Kanuri (Arabic script)|knc_Arab|746,936|0.11|1,223,481|0.23| |Rundi|run_Latn|749,792|0.11|1,296,111|0.24| |Banjar (Arabic script)|bjn_Arab|751,112|0.11|1,223,481|0.23| |Central Kurdish|ckb_Arab|756,804|0.11|1,223,481|0.23| |Bashkir|bak_Cyrl|758,816|0.11|1,223,481|0.23| |Kashmiri (Arabic script)|kas_Arab|759,140|0.11|1,223,481|0.23| |Tatar|tat_Cyrl|764,212|0.11|1,247,685|0.23| |Minangkabau (Arabic script)|min_Arab|765,384|0.11|1,223,481|0.23| |Kazakh|kaz_Cyrl|766,176|0.11|1,232,697|0.23| |Halh Mongolian|khk_Cyrl|776,384|0.11|1,224,353|0.23| |Tajik|tgk_Cyrl|780,452|0.11|1,223,481|0.23| |Eastern Yiddish|ydd_Hebr|781,452|0.12|1,223,481|0.23| |Uyghur|uig_Arab|785,444|0.12|1,256,999|0.24| |Armenian|hye_Armn|789,952|0.12|1,228,171|0.23| |Hebrew|heb_Hebr|793,144|0.12|1,604,365|0.3| |Belarusian|bel_Cyrl|806,588|0.12|1,261,197|0.24| |Macedonian|mkd_Cyrl|813,436|0.12|1,384,567|0.26| |Welsh|cym_Latn|821,036|0.12|1,321,455|0.25| |Northern Uzbek|uzn_Latn|835,560|0.12|1,273,404|0.24| |Central Atlas Tamazight|tzm_Tfng|843,508|0.12|1,223,481|0.23| |Tamasheq (Tifinagh script)|taq_Tfng|848,104|0.12|1,223,481|0.23| |Magahi|mag_Deva|851,360|0.13|1,223,481|0.23| |Bhojpuri|bho_Deva|854,848|0.13|1,223,481|0.23| |Awadhi|awa_Deva|857,096|0.13|1,224,037|0.23| |Chhattisgarhi|hne_Deva|859,332|0.13|1,223,481|0.23| |Kyrgyz|kir_Cyrl|860,700|0.13|1,250,163|0.23| |Maithili|mai_Deva|863,476|0.13|1,223,481|0.23| |Assamese|asm_Beng|865,904|0.13|1,223,481|0.23| |Kashmiri (Devanagari script)|kas_Deva|867,232|0.13|1,223,481|0.23| |Sanskrit|san_Deva|879,236|0.13|1,223,481|0.23| |Lao|lao_Laoo|888,240|0.13|1,223,481|0.23| |Odia|ory_Orya|890,508|0.13|1,223,481|0.23| |Santali|sat_Olck|902,300|0.13|1,223,481|0.23| |Kannada|kan_Knda|909,260|0.13|1,223,481|0.23| |Meitei (Bengali script)|mni_Beng|917,984|0.14|1,223,481|0.23| |Georgian|kat_Geor|928,712|0.14|1,226,729|0.23| |Kamba|kam_Latn|936,468|0.14|2,136,615|0.4| |Tigrinya|tir_Ethi|949,608|0.14|1,276,536|0.24| |Swati|ssw_Latn|950,564|0.14|2,195,002|0.41| |Malayalam|mal_Mlym|953,984|0.14|1,225,083|0.23| |Nigerian Fulfulde|fuv_Latn|956,328|0.14|2,126,652|0.4| |Umbundu|umb_Latn|974,104|0.14|2,264,553|0.43| |Ganda|lug_Latn|975,780|0.14|2,273,481|0.43| |Northern Sotho|nso_Latn|978,484|0.14|2,250,971|0.42| |Khmer|khm_Khmr|984,756|0.14|1,227,825|0.23| |Luo|luo_Latn|993,068|0.15|2,249,242|0.42| |Standard Tibetan|bod_Tibt|993,732|0.15|1,223,481|0.23| |Tswana|tsn_Latn|1,009,328|0.15|2,323,481|0.44| |Kinyarwanda|kin_Latn|1,010,752|0.15|2,273,481|0.43| |Sinhala|sin_Sinh|1,012,012|0.15|1,256,582|0.24| |Xhosa|xho_Latn|1,019,804|0.15|2,323,481|0.44| |Shona|sna_Latn|1,026,320|0.15|2,273,481|0.43| |Esperanto|epo_Latn|1,029,444|0.15|2,612,083|0.49| |Tsonga|tso_Latn|1,031,856|0.15|2,323,481|0.44| |Dzongkha|dzo_Tibt|1,033,552|0.15|1,223,481|0.23| |Zulu|zul_Latn|1,039,296|0.15|2,323,481|0.44| |Serbian|srp_Cyrl|1,040,024|0.15|1,362,598|0.26| |Nyanja|nya_Latn|1,061,780|0.16|2,323,481|0.44| |Shan|shn_Mymr|1,074,940|0.16|1,223,481|0.23| |Igbo|ibo_Latn|1,095,300|0.16|2,282,301|0.43| |Hausa|hau_Latn|1,112,272|0.16|2,335,738|0.44| |West Central Oromo|gaz_Latn|1,115,600|0.16|2,343,260|0.44| |Nepali|npi_Deva|1,144,676|0.17|1,281,430|0.24| |Yoruba|yor_Latn|1,164,540|0.17|2,334,801|0.44| |Southern Pashto|pbt_Arab|1,170,840|0.17|1,365,533|0.26| |Somali|som_Latn|1,198,320|0.18|2,482,437|0.47| |Burmese|mya_Mymr|1,228,196|0.18|1,279,882|0.24| |Amharic|amh_Ethi|1,261,128|0.19|1,980,215|0.37| |Eastern Panjabi|pan_Guru|1,305,636|0.19|1,307,897|0.25| |Gujarati|guj_Gujr|1,331,780|0.2|1,317,314|0.25| |Marathi|mar_Deva|1,494,024|0.22|1,443,950|0.27| |Bengali|ben_Beng|1,650,272|0.24|1,411,514|0.27| |Chinese (Traditional)|zho_Hant|1,778,736|0.26|1,956,189|0.37| |Tamil|tam_Taml|1,833,328|0.27|1,394,473|0.26| |Swahili|swh_Latn|1,970,784|0.29|4,185,608|0.79| |Telugu|tel_Telu|2,224,480|0.33|1,573,325|0.3| |Ukrainian|ukr_Cyrl|2,227,616|0.33|2,216,119|0.42| |Western Persian|pes_Arab|2,389,340|0.35|1,811,121|0.34| |Turkish|tur_Latn|3,106,600|0.46|4,146,153|0.78| |Urdu|urd_Arab|3,553,960|0.52|3,513,218|0.66| |Korean|kor_Hang|4,642,468|0.68|3,415,920|0.64| |Python|python|4,728,504|0.7|3,142,962|0.59| |Japanese|jpn_Jpan|5,079,788|0.75|4,193,570|0.79| |Thai|tha_Thai|6,860,704|1.01|4,666,299|0.88| |Chinese (Simplified)|zho_Hans|8,063,684|1.19|7,355,509|1.38| |Vietnamese|vie_Latn|8,398,824|1.24|6,194,925|1.16| |Indonesian|ind_Latn|9,380,144|1.38|5,301,812|1.0| |Hindi|hin_Deva|9,914,328|1.46|5,612,176|1.05| |Croatian|hrv_Latn|10,028,028|1.48|5,583,975|1.05| |Modern Standard Arabic|arb_Arab|11,051,064|1.63|7,232,551|1.36| |Romanian|ron_Latn|11,441,636|1.68|5,594,927|1.05| |Maltese|mlt_Latn|11,614,488|1.71|5,513,885|1.04| |Slovenian|slv_Latn|12,014,912|1.77|5,533,689|1.04| |Estonian|est_Latn|12,126,212|1.79|5,584,057|1.05| |Lithuanian|lit_Latn|12,253,976|1.8|5,603,047|1.05| |Slovak|slk_Latn|12,286,300|1.81|5,513,481|1.04| |Standard Latvian|lvs_Latn|12,298,584|1.81|5,517,287|1.04| |Polish|pol_Latn|12,409,684|1.83|5,868,631|1.1| |Hungarian|hun_Latn|12,607,420|1.86|6,086,621|1.14| |Russian|rus_Cyrl|13,110,908|1.93|8,798,927|1.65| |Czech|ces_Latn|14,316,052|2.11|6,418,462|1.21| |Bulgarian|bul_Cyrl|14,615,468|2.15|7,265,885|1.37| |Swedish|swe_Latn|14,646,656|2.16|5,634,363|1.06| |Finnish|fin_Latn|15,011,464|2.21|6,077,501|1.14| |Danish|dan_Latn|16,136,612|2.38|5,831,109|1.1| |Dutch|nld_Latn|22,387,020|3.3|8,992,864|1.69| |Greek|ell_Grek|23,144,296|3.41|7,224,001|1.36| |Italian|ita_Latn|23,952,824|3.53|9,967,738|1.87| |Portuguese|por_Latn|27,297,252|4.02|11,242,808|2.11| |German|deu_Latn|27,909,808|4.11|15,806,969|2.97| |French|fra_Latn|28,428,608|4.18|16,365,984|3.08| |Spanish|spa_Latn|30,969,580|4.56|16,315,928|3.07| |English|eng_Latn|69,530,384|10.24|53,015,690|9.96| |Total|-|679,318,704|100|532,107,156|100| #### Language specifics - `Japanese`: Data in `jpn_Hira`, `jpn_Kana`, `jpn_Hani` is guaranteed to have Hiragana, Katakana or Kanji, respectively in each sample. However, they may still include other styles. So while all samples in `jpn_Kana` are guaranteed to have Katakana, there may still be Hiragana or Kanji. ## Dataset Creation ### Source Data #### Training datasets - Code Miscellaneous - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex) - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus) - [GreatCode](https://huggingface.co/datasets/great_code) - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes) - Closed-book QA - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa) - [Trivia QA](https://huggingface.co/datasets/trivia_qa) - [Web Questions](https://huggingface.co/datasets/web_questions) - [Wiki QA](https://huggingface.co/datasets/wiki_qa) - Extractive QA - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa) - [CMRC2018](https://huggingface.co/datasets/cmrc2018) - [DRCD](https://huggingface.co/datasets/clue) - [DuoRC](https://huggingface.co/datasets/duorc) - [MLQA](https://huggingface.co/datasets/mlqa) - [Quoref](https://huggingface.co/datasets/quoref) - [ReCoRD](https://huggingface.co/datasets/super_glue) - [ROPES](https://huggingface.co/datasets/ropes) - [SQuAD v2](https://huggingface.co/datasets/squad_v2) - [xQuAD](https://huggingface.co/datasets/xquad) - TyDI QA - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary) - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp) - Multiple-Choice QA - [ARC](https://huggingface.co/datasets/ai2_arc) - [C3](https://huggingface.co/datasets/c3) - [CoS-E](https://huggingface.co/datasets/cos_e) - [Cosmos](https://huggingface.co/datasets/cosmos) - [DREAM](https://huggingface.co/datasets/dream) - [MultiRC](https://huggingface.co/datasets/super_glue) - [OpenBookQA](https://huggingface.co/datasets/openbookqa) - [PiQA](https://huggingface.co/datasets/piqa) - [QUAIL](https://huggingface.co/datasets/quail) - [QuaRel](https://huggingface.co/datasets/quarel) - [QuaRTz](https://huggingface.co/datasets/quartz) - [QASC](https://huggingface.co/datasets/qasc) - [RACE](https://huggingface.co/datasets/race) - [SciQ](https://huggingface.co/datasets/sciq) - [Social IQA](https://huggingface.co/datasets/social_i_qa) - [Wiki Hop](https://huggingface.co/datasets/wiki_hop) - [WiQA](https://huggingface.co/datasets/wiqa) - Paraphrase Identification - [MRPC](https://huggingface.co/datasets/super_glue) - [PAWS](https://huggingface.co/datasets/paws) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [QQP](https://huggingface.co/datasets/qqp) - Program Synthesis - [APPS](https://huggingface.co/datasets/codeparrot/apps) - [CodeContests](https://huggingface.co/datasets/teven/code_contests) - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp) - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search) - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) - Structure-to-text - [Common Gen](https://huggingface.co/datasets/common_gen) - [Wiki Bio](https://huggingface.co/datasets/wiki_bio) - Sentiment - [Amazon](https://huggingface.co/datasets/amazon_polarity) - [App Reviews](https://huggingface.co/datasets/app_reviews) - [IMDB](https://huggingface.co/datasets/imdb) - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes) - [Yelp](https://huggingface.co/datasets/yelp_review_full) - Simplification - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT) - Summarization - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) - [Gigaword](https://huggingface.co/datasets/gigaword) - [MultiNews](https://huggingface.co/datasets/multi_news) - [SamSum](https://huggingface.co/datasets/samsum) - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua) - [XLSum](https://huggingface.co/datasets/GEM/xlsum) - [XSum](https://huggingface.co/datasets/xsum) - Topic Classification - [AG News](https://huggingface.co/datasets/ag_news) - [DBPedia](https://huggingface.co/datasets/dbpedia_14) - [TNEWS](https://huggingface.co/datasets/clue) - [TREC](https://huggingface.co/datasets/trec) - [CSL](https://huggingface.co/datasets/clue) - Translation - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200) - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) - [MultiEURLEX](https://huggingface.co/datasets/multi_eurlex) - Word Sense disambiguation - [WiC](https://huggingface.co/datasets/super_glue) - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic) - Natural Language Inference (NLI) - [ANLI](https://huggingface.co/datasets/anli) - [CB](https://huggingface.co/datasets/super_glue) - [RTE](https://huggingface.co/datasets/super_glue) - [XNLI](https://huggingface.co/datasets/xnli) - Coreference Resolution - [Winogrande](https://huggingface.co/datasets/winogrande) - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd) - Sentence Completion - [COPA](https://huggingface.co/datasets/super_glue) - [Story Cloze](https://huggingface.co/datasets/story_cloze) - [XCOPA](https://huggingface.co/datasets/xcopa) - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze) #### Dataset specifics - Flores-200: There are three prompts for Flores: `continuation`, `question`, `command`, which represent three commonly used prompting styles, i.e. making a prompt seem like a natural continuation, turning it into a question or commanding the model to do something. - tatoeba_mt: Contains duplicates. For example, it has data that is both classified as `jpn_Kana` and `jpn_Jpan`, so you may want to deduplicate. ## Additional Information ### Licensing Information The dataset collection is released under Apache 2.0. Note that individual datasets may have different licenses. ### Citation Information ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ``` ### Contributions Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. Thanks to the Aya team @[C4AI](https://cohere.for.ai/) 🧡
false
# Dataset Card for Common Crawl Domain Names ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/google-research-datasets/common-crawl-domain-names - **Repository:** https://github.com/google-research-datasets/common-crawl-domain-names - **Paper:** https://arxiv.org/pdf/2011.03138 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl"). Breaking [domain names](https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_URL) such as "openresearch" into component words "open" and "research" is important for applications such as Text-to-Speech synthesis and web search. [Common Crawl](https://commoncrawl.org/) is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so we had to manually annotate the data. ### Supported Tasks and Leaderboards - Text-to-Speech synthesis - Web search ### Languages en: English ## Dataset Structure ### Data Instances Each sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first. For example: ``` Open B S D NASA ASAP Workouts ``` ### Data Fields - `example`: a `string` feature: space separated segments of a domain name. ### Data Splits | split | size | trivial | avg_input_length | avg_segments | |-------|-------|---------|------------------|--------------| | train | 17572 | 13718 | 12.63 | 2.65 | | eval | 1953 | 1536 | 12.77 | 2.67 | | test | 2170 | 1714 | 12.63 | 2.66 | ## Dataset Creation ### Curation Rationale The dataset was curated by scraping the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so the curators of the dataset had to manually annotate the data. ### Source Data #### Initial Data Collection and Normalization Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The annotators are the curators of this dataset ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The curators of this dataset are [Jae Hun Ro](https://github.com/JaeHunRo) and [mwurts4google](https://github.com/mwurts4google), who are the contributors of the official Github repository for this dataset. Since the account handles of other curators are unknown currently, the authors of the paper linked to this dataset is mentioned here as curators, [Hao Zhang](https://arxiv.org/search/cs?searchtype=author&query=Zhang%2C+H), [Jae Ro](https://arxiv.org/search/cs?searchtype=author&query=Ro%2C+J), and [Richard Sproat](https://arxiv.org/search/cs?searchtype=author&query=Sproat%2C+R). ### Licensing Information [MIT License](https://github.com/google-research-datasets/common-crawl-domain-names/blob/master/LICENSE) ### Citation Information ``` @inproceedings{zrs2020urlsegmentation, title={Semi-supervised URL Segmentation with Recurrent Neural Networks Pre-trained on Knowledge Graph Entities}, author={Hao Zhang and Jae Ro and Richard William Sproat}, booktitle={The 28th International Conference on Computational Linguistics (COLING 2020)}, year={2020} } ``` ### Contributions Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset.
false
# Dataset Card for "lmqg/qg_jaquad" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992). This is [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset compiled for question generation (QG) task. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set. There are no overlap in terms of the paragraph across train, test, and validation split. ### Supported Tasks and Leaderboards * `question-generation`: The dataset is assumed to be used to train a model for question generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail). ### Languages Japanese (ja) ## Dataset Structure An example of 'train' looks as follows. ``` { "question": "新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", "paragraph": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "answer": "保守費用", "sentence": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。", "paragraph_sentence": "<hl>三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。<hl>新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "paragraph_answer": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、<hl>保守費用<hl>を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "sentence_answer": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、<hl>保守費用<hl>を抑えた新型車両として6000系が構想された。" } ``` The data fields are the same among all splits. - `question`: a `string` feature. - `paragraph`: a `string` feature. - `answer`: a `string` feature. - `sentence`: a `string` feature. - `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`. - `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`. - `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`. Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model, but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and `paragraph_sentence` feature is for sentence-aware question generation. ## Data Splits |train|validation|test | |----:|---------:|----:| |27809| 3939| 3939| ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
false