false
# Dataset Card for the args.me corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Usage](#dataset-usage) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://zenodo.org/record/4139439 - **Repository:** https://git.webis.de/code-research/arguana/args/args-framework - **Paper:** [Building an Argument Search Engine for the Web](https://webis.de/downloads/publications/papers/wachsmuth_2017f.pdf) - **Leaderboard:** https://touche.webis.de/ - **Point of Contact:** [Webis Group](https://webis.de/people.html) ### Dataset Summary The args.me corpus (version 1.0, cleaned) comprises 382,545 arguments crawled from four debate portals in mid-2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments were extracted using heuristics tailored to each debate portal. ### Dataset Usage

```python
import datasets

args = datasets.load_dataset('webis/args_me', 'corpus', split='train', streaming=True)
for arg in args:
    print(arg['conclusion'])
    print(arg['id'])
    print(arg['argument'])
    print(arg['stance'])
    break
```

### Supported Tasks and Leaderboards Document Retrieval, Argument Retrieval for Controversial Questions ### Languages The args.me corpus is monolingual; it only includes English (mostly en-US) documents. ## Dataset Structure ### Data Instances #### Corpus ``` {'conclusion': 'Science is the best!', 'id': 'd6517702-2019-04-18T12:36:24Z-00000-000', 'argument': 'Science is aright I guess, but Physical Education (P.E) is better. Think about it, you could sit in a classroom for and hour learning about molecular reconfiguration, or you could play football with your mates. Why would you want to learn about molecular reconfiguration anyway? I think the argument here would be based on, healthy mind or healthy body. With science being the healthy mind and P.E being the healthy body. To work this one out all you got to do is ask Steven Hawkins. Only 500 words', 'stance': 'CON'} ``` ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @dataset{yamen_ajjour_2020_4139439, author = {Yamen Ajjour and Henning Wachsmuth and Johannes Kiesel and Martin Potthast and Matthias Hagen and Benno Stein}, title = {args.me corpus}, month = oct, year = 2020, publisher = {Zenodo}, version = {1.0-cleaned}, doi = {10.5281/zenodo.4139439}, url = {https://doi.org/10.5281/zenodo.4139439} } ```
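Since each corpus entry carries `conclusion`, `id`, `argument`, and `stance` fields, the streaming loader shown above can also be used to filter arguments on the fly. A minimal sketch (the `train` split name and the keyword filter are assumptions for illustration):

```python
import itertools
import datasets

# Stream the corpus and keep only PRO arguments whose conclusion mentions "school".
args = datasets.load_dataset('webis/args_me', 'corpus', split='train', streaming=True)
pro_args = (a for a in args if a['stance'] == 'PRO' and 'school' in a['conclusion'].lower())

for a in itertools.islice(pro_args, 5):
    print(a['id'], '-', a['conclusion'])
```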
false
# Dataset Card for Smithsonian Butterflies ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0) ### Dataset Summary High-resolution images crawled from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections. ### Supported Tasks and Leaderboards Includes metadata about the scientific name of each butterfly, although there may be missing values. The dataset may be suitable for image classification. ### Languages English ## Dataset Structure ### Data Instances An example instance:

```
{'image_url': 'https://ids.si.edu/ids/deliveryService?id=ark:/65665/m3b3132f6666904de396880d9dc811c5cd', 'image_alt': 'view Aholibah Underwing digital asset number 1', 'id': 'ark:/65665/m3b3132f6666904de396880d9dc811c5cd', 'name': 'Aholibah Underwing', 'scientific_name': 'Catocala aholibah', 'gender': None, 'taxonomy': 'Animalia, Arthropoda, Hexapoda, Insecta, Lepidoptera, Noctuidae, Catocalinae', 'region': None, 'locality': None, 'date': None, 'usnm_no': 'EO400317-DSP', 'guid': 'http://n2t.net/ark:/65665/39b506292-715f-45a7-8511-b49bb087c7de', 'edan_url': 'edanmdm:nmnheducation_10866595', 'source': 'Smithsonian Education and Outreach collections', 'stage': None, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2000x1328 at 0x7F57D0504DC0>, 'image_hash': '27a5fe92f72f8b116d3b7d65bac84958', 'sim_score': 0.8440760970115662}
```

### Data Fields `sim_score` indicates the CLIP similarity score for the prompt "pretty butterfly". It is used to filter out non-butterfly images (e.g. ID card images). ### Data Splits No specific split exists. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Crawled from the "Education and Outreach" & "NMNH - Entomology Dept." collections found online [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0) #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Doesn't include all butterfly species ## Additional Information ### Dataset Curators Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections ### Licensing Information Only results marked: CC0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
false
# Dataset Card for "FiNER-ORD" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation and Annotation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contact Information](#contact-information) ## Dataset Description - **Homepage:** [https://github.com/gtfintechlab/FiNER](https://github.com/gtfintechlab/FiNER) - **Repository:** [https://github.com/gtfintechlab/FiNER](https://github.com/gtfintechlab/FiNER) - **Paper:** [Arxiv Link]() - **Point of Contact:** [Agam A. Shah](https://shahagam4.github.io/) - **Size of train dataset file:** 1.08 MB - **Size of validation dataset file:** 135 KB - **Size of test dataset file:** 336 KB ### Dataset Summary The FiNER-Open Research Dataset (FiNER-ORD) consists of a manually annotated dataset of financial news articles (in English) collected from [webz.io] (https://webz.io/free-datasets/financial-news-articles/). In total, there are 47851 news articles available in this data at the point of writing this paper. Each news article is available in the form of a JSON document with various metadata information like the source of the article, publication date, author of the article, and the title of the article. For the manual annotation of named entities in financial news, we randomly sampled 220 documents from the entire set of news articles. We observed that some articles were empty in our sample, so after filtering the empty documents, we were left with a total of 201 articles. We use [Doccano](https://github.com/doccano/doccano), an open-source annotation tool, to ingest the raw dataset and manually label person (PER), location (LOC), and organization (ORG) entities. For our experiments, we use the manually labeled FiNER-ORD to benchmark model performance. Thus, we make a train, validation, and test split of FiNER-ORD. To avoid biased results, manual annotation is performed by annotators who have no knowledge about the labeling functions for the weak supervision framework. The train and validation sets are annotated by two separate annotators and validated by a third annotator. The test dataset is annotated by another annotator. We present a manual annotation guide in the Appendix of the paper detailing the procedures used to create the manually annotated FiNER-ORD. After manual annotation, the news articles are split into sentences. We then tokenize each sentence, employing a script to tokenize multi-token entities into separate tokens (e.g. PER_B denotes the beginning token of a person (PER) entity and PER_I represents intermediate PER tokens). We exclude white spaces when tokenizing multi-token entities. The descriptive statistics on the resulting FiNER-ORD are available in the Table of [Data Splits](#data-splits) section. 
For more details, check the [information in the paper](). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages - It is a monolingual English dataset ## Dataset Structure ### Data Instances #### FiNER-ORD - **Size of train dataset file:** 1.08 MB - **Size of validation dataset file:** 135 KB - **Size of test dataset file:** 336 KB ### Data Fields The data fields are the same among all splits. #### FiNER-ORD - `doc_idx`: Document ID (`int`) - `sent_idx`: Sentence ID within each document (`int`) - `gold_token`: Token (`string`) - `gold_label`: the classification label of the token (`int`). Full tagset with indices:

```python
{'O': 0, 'PER_B': 1, 'PER_I': 2, 'LOC_B': 3, 'LOC_I': 4, 'ORG_B': 5, 'ORG_I': 6}
```

### Data Splits

| **FiNER-ORD** | **Train** | **Validation** | **Test** |
|------------------|----------------|---------------------|---------------|
| # Articles | 135 | 24 | 42 |
| # Tokens | 80,531 | 10,233 | 25,957 |
| # LOC entities | 1,255 | 267 | 428 |
| # ORG entities | 3,440 | 524 | 933 |
| # PER entities | 1,374 | 222 | 466 |

## Dataset Creation and Annotation [Information in paper]() ## Additional Information ### Licensing Information [Information in paper]() ### Citation Information

```
@article{shah2023finer,
  title={FiNER: Financial Named Entity Recognition Dataset and Weak-supervision Model},
  author={Agam Shah and Ruchit Vithani and Abhinav Gullapalli and Sudheer Chava},
  journal={arXiv preprint arXiv:2302.11157},
  year={2023}
}
```

### Contact Information Please contact Agam Shah (ashah482[at]gatech[dot]edu) or Ruchit Vithani (rvithani6[at]gatech[dot]edu) about any FiNER-related issues and questions. GitHub: [@shahagam4](https://github.com/shahagam4), [@ruchit2801](https://github.com/ruchit2801) Website: [https://shahagam4.github.io/](https://shahagam4.github.io/)
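To make the tagset above concrete, here is a small sketch that decodes the integer `gold_label` values back into tag strings; the Hub dataset id is an assumption (the data files can also be taken from the GitHub repository linked above):

```python
from datasets import load_dataset

# Assumed Hub id; adjust it to wherever the FiNER-ORD files are hosted.
ds = load_dataset("gtfintechlab/finer-ord", split="train")

label2id = {'O': 0, 'PER_B': 1, 'PER_I': 2, 'LOC_B': 3, 'LOC_I': 4, 'ORG_B': 5, 'ORG_I': 6}
id2label = {v: k for k, v in label2id.items()}

# Print the first few token/label rows in readable form.
for row in ds.select(range(20)):
    print(row['doc_idx'], row['sent_idx'], row['gold_token'], id2label[row['gold_label']])
```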
false
# Dataset Card for "thainer-corpus-v2" Thai Named Entity Recognition Corpus Home Page: [https://pythainlp.github.io/Thai-NER/version/2](https://pythainlp.github.io/Thai-NER/version/2) Training script and split data: [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354) **You can download .conll to train named entity model in [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354).** **Size** - Train: 3,938 docs - Validation: 1,313 docs - Test: 1,313 Docs Some data come from crowdsourcing between Dec 2018 - Nov 2019. [https://github.com/wannaphong/thai-ner](https://github.com/wannaphong/thai-ner) **Domain** - News (It, politics, economy, social) - PR (KKU news) - general **Source** - I use sone data from Nutcha’s theses (http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) and improve data by rechecking and adding more tagging. - Twitter - Blognone.com - It news - thaigov.go.th - kku.ac.th And more (the lists are lost.) **Tag** - DATA - date - TIME - time - EMAIL - email - LEN - length - LOCATION - Location - ORGANIZATION - Company / Organization - PERSON - Person name - PHONE - phone number - TEMPERATURE - temperature - URL - URL - ZIP - Zip code - MONEY - the amount - LAW - legislation - PERCENT - PERCENT Download: [HuggingFace Hub](https://huggingface.co/datasets/pythainlp/thainer-corpus-v2) ## Cite > Wannaphong Phatthiyaphaibun. (2022). Thai NER 2.0 (2.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7761354 or BibTeX ``` @dataset{wannaphong_phatthiyaphaibun_2022_7761354, author = {Wannaphong Phatthiyaphaibun}, title = {Thai NER 2.0}, month = sep, year = 2022, publisher = {Zenodo}, version = {2.0}, doi = {10.5281/zenodo.7761354}, url = {https://doi.org/10.5281/zenodo.7761354} } ```
false
# Dataset Card for "AMIsum" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) - ## Dataset Description - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary AMIsum is meeting summaryzation dataset based on the AMI Meeting Corpus (https://groups.inf.ed.ac.uk/ami/corpus/). The dataset utilizes the transcripts as the source data and abstract summaries as the target data. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages English ## Dataset Structure ### Data Instances ``` {'transcript': '<PM> Okay. <PM> Right. <PM> Um well this is the kick-off meeting for our our project. <PM> Um and um this is just what we're gonna be doing over the next twenty five minutes. <ME> Mm-hmm. <PM> Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. <PM> Do you want to introduce yourself again? <ME> Great. [...]', 'summary': 'The project manager introduced the upcoming project to the team members and then the team members participated in an exercise in which they drew their favorite animal and discussed what they liked about the animal. The project manager talked about the project finances and selling prices. The team then discussed various features to consider in making the remote.', 'id': 'ES2002a', ``` ### Data Fields ``` transcript: Expert generated transcript. summary: Expert generated summary. id: Meeting id. ``` ### Data Splits |train|validation|test| |:----|:---------|:---| |97|20|20|
false
# Dataset Card for "tner/multinerd" ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Paper:** [https://aclanthology.org/2022.findings-naacl.60/](https://aclanthology.org/2022.findings-naacl.60/) - **Dataset:** MultiNERD - **Domain:** Wikipedia, WikiNews - **Number of Entity:** 18 ### Dataset Summary MultiNERD NER benchmark dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project. - Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`, `SUPER`, `PHY` ## Dataset Structure ### Data Instances An example of `train` of `de` looks as follows. ``` { 'tokens': [ "Die", "Blätter", "des", "Huflattichs", "sind", "leicht", "mit", "den", "sehr", "ähnlichen", "Blättern", "der", "Weißen", "Pestwurz", "(", "\"", "Petasites", "albus", "\"", ")", "zu", "verwechseln", "." ], 'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0 ] } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/multinerd/raw/main/dataset/label.json). ```python { "O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4, "B-ORG": 5, "I-ORG": 6, "B-ANIM": 7, "I-ANIM": 8, "B-BIO": 9, "I-BIO": 10, "B-CEL": 11, "I-CEL": 12, "B-DIS": 13, "I-DIS": 14, "B-EVE": 15, "I-EVE": 16, "B-FOOD": 17, "I-FOOD": 18, "B-INST": 19, "I-INST": 20, "B-MEDIA": 21, "I-MEDIA": 22, "B-PLANT": 23, "I-PLANT": 24, "B-MYTH": 25, "I-MYTH": 26, "B-TIME": 27, "I-TIME": 28, "B-VEHI": 29, "I-VEHI": 30, "B-SUPER": 31, "I-SUPER": 32, "B-PHY": 33, "I-PHY": 34 } ``` ### Data Splits | language | test | |:-----------|-------:| | de | 156792 | | en | 164144 | | es | 173189 | | fr | 176185 | | it | 181927 | | nl | 171711 | | pl | 194965 | | pt | 177565 | | ru | 82858 | ### Citation Information ``` @inproceedings{tedeschi-navigli-2022-multinerd, title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)", author = "Tedeschi, Simone and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.60", doi = "10.18653/v1/2022.findings-naacl.60", pages = "801--812", abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. 
Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems. In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres. We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER. In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems. We release our dataset at https://github.com/Babelscape/multinerd.", } ```
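A short sketch of how the `label2id` mapping above can be inverted to read the integer `tags` of an instance as IOB2 strings (the `de` config and `train` split follow the example shown in the card):

```python
import requests
from datasets import load_dataset

# label2id dictionary linked from the card.
label2id = requests.get(
    "https://huggingface.co/datasets/tner/multinerd/raw/main/dataset/label.json"
).json()
id2label = {v: k for k, v in label2id.items()}

ds = load_dataset("tner/multinerd", "de", split="train")

sample = ds[0]
for token, tag in zip(sample["tokens"], sample["tags"]):
    print(f"{token}\t{id2label[tag]}")
```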
true
https://github.com/Alicia-Parrish/ling_in_loop/ ```bib @inproceedings{parrish-etal-2021-putting-linguist, title = "Does Putting a Linguist in the Loop Improve {NLU} Data Collection?", author = "Parrish, Alicia and Huang, William and Agha, Omar and Lee, Soo-Hwan and Nangia, Nikita and Warstadt, Alexia and Aggarwal, Karmanya and Allaway, Emily and Linzen, Tal and Bowman, Samuel R.", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.421", doi = "10.18653/v1/2021.findings-emnlp.421", pages = "4886--4901", } ```
false
# IVA Swift GitHub Code Dataset ## Dataset Description This is the curated train split of the IVA Swift dataset extracted from GitHub. It contains curated Swift files gathered for the purpose of training a code generation model. The dataset consists of 320000 Swift code files from GitHub. [Here is the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean) and [here is the raw dataset](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint). ### How to use it To download the full dataset:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
```

## Data Structure ### Data Fields

|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in the GitHub repository|
|copies|string|number of occurrences in the dataset|
|content|string|content of the source file|
|size|string|size of the source file in bytes|
|license|string|license of the GitHub repository|
|hash|string|hash of the content field|
|line_mean|number|mean line length of the content|
|line_max|number|max line length of the content|
|alpha_frac|number|fraction of alphanumeric characters in the content|
|ratio|number|character/token ratio of the file with tokenizer|
|autogenerated|boolean|True if the content is autogenerated, detected by keywords in the first few lines of the file|
|config_or_test|boolean|True if the content is a configuration file or a unit test|
|has_no_keywords|boolean|True if the file has none of the keywords for the Swift programming language|
|has_few_assignments|boolean|True if the file uses the symbol '=' fewer than `minimum` times|

### Instance

```json
{
  "repo_name":"...",
  "path":".../BorderedButton.swift",
  "copies":"2",
  "size":"2649",
  "content":"...",
  "license":"mit",
  "hash":"db1587fd117e9a835f58cf8203d8bf05",
  "line_mean":29.1136363636,
  "line_max":87,
  "alpha_frac":0.6700641752,
  "ratio":5.298,
  "autogenerated":false,
  "config_or_test":false,
  "has_no_keywords":false,
  "has_few_assignments":false
}
```

## Languages The dataset contains only Swift files.

```json
{
  "Swift": [".swift"]
}
```

## Licenses Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.

```json
{
  "agpl-3.0":1415,
  "apache-2.0":71451,
  "artistic-2.0":169,
  "bsd-2-clause":2628,
  "bsd-3-clause":5492,
  "cc0-1.0":1176,
  "epl-1.0":498,
  "gpl-2.0":7846,
  "gpl-3.0":15716,
  "isc":676,
  "lgpl-2.1":932,
  "lgpl-3.0":2553,
  "mit":201134,
  "mpl-2.0":6846,
  "unlicense":1468
}
```

## Dataset Statistics

```json
{
  "Total size": "~453 MB",
  "Number of files": 320000,
  "Number of files under 500 bytes": 3116,
  "Average file size in bytes": 5940
}
```

## Curation Process See [the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean) for more details. ## Data Splits The dataset contains only a train split. For validation and unsliced versions, please check the following links: * Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean * Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-valid # Considerations for Using the Data The dataset comprises source code from various repositories, potentially containing harmful or biased code, along with sensitive information such as passwords or usernames.
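Because every entry carries a license plus the boolean quality flags described above, the train split can be narrowed down before training. A sketch (the chosen license set is only an example):

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')

# Keep permissively licensed, human-written, non-test Swift files.
keep_licenses = {"mit", "apache-2.0", "bsd-3-clause"}
filtered = dataset.filter(
    lambda ex: ex["license"] in keep_licenses
    and not ex["autogenerated"]
    and not ex["config_or_test"]
    and not ex["has_no_keywords"]
)
print(f"{len(filtered)} files retained out of {len(dataset)}")
```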
false
# Dataset Card for ADE 20K Tiny This is a tiny subset of the ADE 20K dataset, which you can find [here](https://huggingface.co/datasets/scene_parse_150).
true
# Dataset Card for DocEE Dataset ## Dataset Description - **Homepage:** - **Repository:** [DocEE Dataset repository](https://github.com/tongmeihan1995/docee) - **Paper:** [DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction](https://aclanthology.org/2022.naacl-main.291/) ### Dataset Summary DocEE dataset is an English-language dataset containing more than 27k news and Wikipedia articles. Dataset is primarily annotated and collected for large-scale document-level event extraction. ### Data Fields - `title`: TODO - `text`: TODO - `event_type`: TODO - `date`: TODO - `metadata`: TODO **Note: this repo contains only event detection portion of the dataset.** ### Data Splits The dataset has 2 splits: _train_ and _test_. Train split contains 21949 documents while test split contains 5536 documents. In total, dataset contains 27485 documents classified into 59 event types. #### Differences from the original split(s) Originally, the dataset is split into three splits: train, validation and test. For the purposes of this repository, original splits were joined back together and divided into train and test splits while making sure that splits were stratified across document sources (news and wiki) and event types. Originally, the `title` column additionally contained information from `date` and `metadata` columns. They are now separated into three columns: `date`, `metadata` and `title`.
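Since the card describes an event-detection setup with `event_type` labels over train/test splits, a simple sketch for checking the label distribution might look like this (the dataset id is a placeholder; the field names follow the card):

```python
from collections import Counter
from datasets import load_dataset

# Placeholder id; point it at the repository hosting this event-detection split.
ds = load_dataset("path/to/docee-event-detection")

counts = Counter(ds["train"]["event_type"])
print(len(counts), "event types in the train split")
print(counts.most_common(5))
```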
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12). For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large. You can either load the dataset like this:

```python
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train")
```

Or you can stream it without downloading it first:

```python
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train", streaming=True)

for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

## Search Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12), where we provide the query embeddings for the MIRACL dataset. To search the documents, compare the query embeddings with the document embeddings via the **dot product**, either with a vector database (recommended) or by computing the dot product directly. A full search example:

```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB.

from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-ja-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```

You can get embeddings for new queries using our API:

```python
# Run: pip install cohere
import cohere

co = cohere.Client(f"{api_key}")  # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # Get the embedding for the first text
```

## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking-based measure), as well as hit@3: whether at least one relevant document is among the top-3 results. We find that hit@3 is easier to interpret, as it reflects the number of queries for which a relevant document is found among the top-3 results. Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |

Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
true
# Dataset Card for "fake-news-detection-dataset-english" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
true
# Dataset Card for Criticality Prediction ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Legal Criticality Prediction (LCP) is a multilingual, diachronic dataset of 139K Swiss Federal Supreme Court (FSCS) cases annotated with two criticality labels. The bge_label is a binary label (critical, non-critical), while the citation_label has 5 classes (critical-1, critical-2, critical-3, critical-4, non-critical). The critical classes of the citation_label are distinct subsets of the critical class of the bge_label. This dataset poses a challenging text classification task. We also provide additional metadata, such as the publication year, the law area and the canton of origin of each case, to promote robustness and fairness studies on the critical area of legal NLP. ### Supported Tasks and Leaderboards LCP can be used as a text classification task. ### Languages Switzerland has four official languages, of which three (German, French and Italian) are represented in the dataset. The decisions are written by the judges and clerks in the language of the proceedings. German (91k), French (33k), Italian (15k) ## Dataset Structure ``` { "decision_id": "008d8a52-f0ea-4820-a18c-d06066dbb407", "language": "fr", "year": "2018", "chamber": "CH_BGer_004", "region": "Federation", "origin_chamber": "338.0", "origin_court": "127.0", "origin_canton": "24.0", "law_area": "civil_law", "law_sub_area": , "bge_label": "critical", "citation_label": "critical-1", "facts": "Faits : A. A.a. Le 17 août 2007, C.X._, née le 14 février 1944 et domiciliée...", "considerations": "Considérant en droit : 1. Interjeté en temps utile (art. 100 al. 1 LTF) par les défendeurs qui ont succombé dans leurs conclusions (art. 76 LTF) contre une décision...", "rulings": "Par ces motifs, le Tribunal fédéral prononce : 1. Le recours est rejeté. 2. 
Les frais judiciaires, arrêtés à 10'000 fr., sont mis solidairement à la charge des recourants...", } ``` ### Data Fields ``` decision_id: (str) a unique identifier for the document language: (str) one of (de, fr, it) year: (int) the publication year chamber: (str) the chamber of the case region: (str) the region of the case origin_chamber: (str) the chamber of the origin case origin_court: (str) the court of the origin case origin_canton: (str) the canton of the origin case law_area: (str) the law area of the case law_sub_area: (str) the law sub area of the case bge_label: (str) critical or non-critical citation_label: (str) critical-1, critical-2, critical-3, critical-4, non-critical facts: (str) the facts of the case considerations: (str) the considerations of the case rulings: (str) the rulings of the case ``` ### Data Splits The dataset was split date-stratified: - Train: 2002-2015 - Validation: 2016-2017 - Test: 2018-2022

| Language | Subset | Number of Documents (Training/Validation/Test) |
|------------|------------|--------------------------------------------|
| German | **de** | 81'264 (56592 / 19601 / 5071) |
| French | **fr** | 49'354 (29263 / 11117 / 8974) |
| Italian | **it** | 7913 (5220 / 1901 / 792) |

## Dataset Creation ### Curation Rationale The dataset was created by Stern (2023). ### Source Data #### Initial Data Collection and Normalization The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process bge_label: 1. All bger_references in the bge header were extracted (for bge, see rcds/swiss_rulings). 2. bger file_names were compared with the found references. citation_label: 1. All citations for all bger cases were counted and weighted. 2. The cited cases were divided into four different classes, depending on the number of citations. #### Who are the annotators? Stern processed the data and introduced the bge_label and citation_label. Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We release the data under CC-BY-4.0, which complies with the court's licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2002-2022 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. 
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information *Visu, Ronja, Joel* *Title: Blabliblablu* *Name of conference* ``` cit ``` ### Contributions Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset.
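Given the fields and splits documented above, a small sketch of how one might cross-tabulate the binary `bge_label` against the proceeding `language` (the dataset id is a placeholder, as the card does not name the Hub repository):

```python
from collections import Counter
from datasets import load_dataset

# Placeholder id; replace with the Hub repository hosting this corpus.
ds = load_dataset("path/to/criticality_prediction", split="train")

table = Counter((ex["language"], ex["bge_label"]) for ex in ds)
for (lang, label), n in sorted(table.items()):
    print(f"{lang}\t{label}\t{n}")
```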
true
# ml4pubmed/pubmed-classification-20k - 20k subset of pubmed text classification from course
false
# Dataset Card for "km-speech-corpus" ``` sampling_rate: 16000 mean_seconds: 2.5068187111021882 max_seconds: 19.392 min_seconds: 0.448 total_seconds: 37459.392 total_hrs: 10.405386666666667 ```
true
Mindgame dataset Code: https://github.com/sileod/llm-theory-of-mind Article: https://arxiv.org/abs/2305.03353 ``` @article{sileo2023mindgames, title={MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic}, author={Sileo, Damien and Lernould, Antoine}, journal={arXiv preprint arXiv:2305.03353}, year={2023} } ```
true
``` @inproceedings{van2012designing, title={Designing a scalable crowdsourcing platform}, author={Van Pelt, Chris and Sorokin, Alex}, booktitle={Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data}, pages={765--766}, year={2012} } ```
true
# Dataset Card for TeCla ## Dataset Description - **Website:** [Zenodo](https://zenodo.org/record/7334110) - **Point of Contact:** [Irene Baucells de la Peña](irene.baucells@bsc.es), [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es) ### Dataset Summary TeCla (Text Classification) is a Catalan News corpus for thematic multi-class Text Classification tasks. The present version (2.0) contains 113.376 articles classified under a hierarchical class structure consisting of a coarse-grained and a fine-grained class. Each of the 4 coarse-grained classes accept a subset of fine-grained ones, 53 in total. The previous version (1.0.1) can still be found at https://zenodo.org/record/4761505 This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/). ### Supported Tasks and Leaderboards Text classification, Language Model ### Languages The dataset is in Catalan (`ca-CA`). ## Dataset Structure ### Data Instances Three json files, one for each split. ### Data Fields Each example contains the following 3 fields: * text: the article text (string) * label1: the coarse-grained class * label2: the fine-grained class #### Example: <pre> {"version": "2.0", "data": [ { 'sentence': "La setena edició del Festival Fantàstik inclourà les cintes 'Matar a dios' i 'Mandy' i un homenatge a 'Mi vecino Totoro'. Es projectaran 22 curtmetratges seleccionats d'entre més de 500 presentats a nivell internacional. El Centre Cultural de Granollers acull del 8 a l'11 de novembre la setena edició del Festival Fantàstik. El certamen, que s'allargarà un dia, arrencarà amb la projecció de la cinta de Caye Casas i Albert Pide 'Matar a Dios'. Els dos directors estaran presents en la inauguració de la cita. A més, els asssitents podran gaudir de 'Mandy', el darrer treball de Nicolas Cage. Altres llargmetratges seleccionats per aquest any són 'Aterrados' (2017), 'Revenge' (2017), 'A Mata Negra' (2018), 'Top Knot Detective' (2018) i 'La Gran Desfeta' (2018). A més, amb motiu del trentè aniversari de la pel·lícula 'El meu veí Totoro' es durà a terme l'exposició dedicada a aquest film '30 anys 30 artistes' comissariada per Jordi Pastor i Reinaldo Pereira. La mostra '30 anys 30 artistes' recull els treballs de trenta artistes d'estils diferents al voltant de la figura de Totoro i el seu director. Es podrà veure durant els dies de festival i es complementarà amb la projecció de la pel·lícula el diumenge 11 de novembre. Al llarg del festival també es projectaran els 22 curtmetratges prèviament seleccionats d'entre més de 500 presentats a nivell internacional. El millor tindrà una dotació de 1000 euros fruit de la unió de forces amb el Mercat Audiovisual de Catalunya.", 'label1': 'Cultura', 'label2': 'Cinema' }, ... 
] } </pre> #### Labels * label1: 'Societat', 'Política', 'Economia', 'Cultura' * label2: 'Llengua', 'Infraestructures', 'Arts', 'Parlament', 'Noves tecnologies', 'Castells', 'Successos', 'Empresa', 'Mobilitat', 'Teatre', 'Treball', 'Logística', 'Urbanisme', 'Govern', 'Entitats', 'Finances', 'Govern espanyol', 'Trànsit', 'Indústria', 'Esports', 'Exteriors', 'Medi ambient', 'Habitatge', 'Salut', 'Equipaments i patrimoni', 'Recerca', 'Cooperació', 'Innovació', 'Agroalimentació', 'Policial', 'Serveis Socials', 'Cinema', 'Memòria històrica', 'Turisme', 'Política municipal', 'Comerç', 'Universitats', 'Hisenda', 'Judicial', 'Partits', 'Música', 'Lletres', 'Religió', 'Festa i cultura popular', 'Unió Europea', 'Moda', 'Moviments socials', 'Comptes públics', 'Immigració', 'Educació', 'Gastronomia', 'Meteorologia', 'Energia' ### Data Splits Train, development and test splits were created in a stratified fashion, following a 0.8, 0.05 and 0.15 proportion, respectively. The sizes of each split are the following: * train.json: 90700 examples * dev.json: 5669 examples * test.json: 17007 examples ## Dataset Creation ### Curation Rationale We created this dataset to contribute to the development of language models in Catalan, a low-resource language. ### Source Data #### Initial Data Collection and Normalization The source data are crawled articles from the Catalan News Agency ([Agència Catalana de Notícies, ACN](https://www.acn.cat/)) site. We crawled 219.586 articles from the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) newswire archive, the latest from October 11, 2020. From the crawled data, we selected those articles whose 'section' and 'subsection' categories followed the expected codification combinations included in the ACN's style guide and whose 'section' complied the requirements of containing subsections and being thematically founded (in contrast to geographically defined categories such as 'Món' and 'Unió Europea'). The articles originally belonging to the 'Unió Europea' section, which were related to political organisms from the European Union, were included in the 'Política' coarse-grained category (within a fine-grained category named 'Unió Europea') due to its close proximity between some of the original subsections of 'Política' and those of 'Unió Europea', both defined by the specific political organism dealt with in the article. The text field in each example is a concatenation of the original title, subtitle and body of the article (before the concatenation, both title and subtitle were added a final dot whenever they lacked one). The preprocessing of the texts was minimal and consisted in the removal of the pattern "ACN {location}.-" included before the body in each text as well as newlines originally used to divide the text in paragraphs. #### Who are the source language producers? The Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) is a news agency owned by the Catalan government via the public corporation Intracatalònia, SA. It is one of the first digital news agencies created in Europe and has been operating since 1999 (source: [wikipedia](https://en.wikipedia.org/wiki/Catalan_News_Agency)). ### Annotations #### Annotation process The crawled data contained the categories' annotations, which were then used to create this dataset with the mentioned criteria. #### Who are the annotators? 
Editorial staff classified the articles under the different thematic sections and subsections, and we extracted these from metadata. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset We hope this dataset contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Irene Baucells (irene.baucells@bsc.es), Casimiro Pio Carrino (casimiro.carrino@bsc.es), Carlos Rodríguez (carlos.rodriguez1@bsc.es) and Carme Armentano (carme.armentano@bsc.es), from [BSC-CNS](https://www.bsc.es/). This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>. ### Citation Information [DOI]([https://doi.org/10.5281/zenodo.7334110])
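As a quick illustration of the file layout described above (`{"version": ..., "data": [...]}`), the coarse-grained label distribution of the train split can be computed as follows; note that the example instance uses the key `sentence` for the article text while the field list calls it `text`, so the exact key should be checked against the downloaded files:

```python
import json
from collections import Counter

# train.json as downloaded from the Zenodo record linked above.
with open("train.json", encoding="utf-8") as f:
    train = json.load(f)["data"]

print(Counter(example["label1"] for example in train).most_common())
```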
true
# Dataset This dataset contains positive, negative and neutral sentences from several data sources given in the references. Most sentiment models use only two labels: positive and negative. However, user input can be a completely neutral sentence, and I could not find Turkish data covering such cases. Therefore, I created this dataset with 3 classes. The sources of the positive and negative sentences are listed in the references. Neutral examples are extracted from the Turkish Wikipedia dump. In addition, some random text inputs like "Lorem ipsum dolor sit amet." were added as neutral examples. There are 492,782 labeled sentences; 10% of them are used for testing. # Türkçe Duygu Analizi Veriseti Bu veriseti, farklı kaynaklardan derlenmiş pozitif, negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla, 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: "Lorem ipsum dolor sit amet.". There are 492,782 labeled sentences; 10% of them are used for testing. # References - https://www.kaggle.com/burhanbilenn/duygu-analizi-icin-urun-yorumlari - https://github.com/fthbrmnby/turkish-text-data - https://www.kaggle.com/mustfkeskin/turkish-wikipedia-dump - https://github.com/ezgisubasi/turkish-tweets-sentiment-analysis - http://humirapps.cs.hacettepe.edu.tr/
false
# Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset) - **Repository:** https://github.com/zhouhaoyi/ETDataset - **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) - **Point of Contact:** [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn) ### Dataset Summary The electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false predictions may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on the empirical number, which is much higher than the real-world demands. It causes unnecessary waste of electric and equipment depreciation. On the other hand, the oil temperatures can reflect the condition of the Transformer. One of the most efficient strategies is to predict how the electrical transformers' oil temperature is safe and avoid unnecessary waste. As a result, to address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2-years worth of data. Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset are obtained from 2 Electricity Transformers at 2 stations and come in an `1H` (hourly) or `15T` (15-minute) frequency containing 2 year * 365 days * 24 hours * (4 for 15T) times = 17,520 (70,080 for 15T) data points. 
The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup: * **H**igh **U**se**F**ul **L**oad * **H**igh **U**se**L**ess **L**oad * **M**iddle **U**se**F**ul **L**oad * **M**iddle **U**se**L**ess **L**oad * **L**ow **U**se**F**ul **L**oad * **L**ow **U**se**L**ess **L**oad ### Dataset Usage To load a particular variant of the dataset just specify its name e.g: ```python load_dataset("ett", "m1", multivariate=False) # univariate 15-min frequency dataset from first transformer ``` or to specify a prediction length: ```python load_dataset("ett", "h2", prediction_length=48) # multivariate dataset from second transformer with prediction length of 48 (hours) ``` ### Supported Tasks and Leaderboards The time series data is split into train/val/test set of 12/4/4 months respectively. Given the prediction length (default: 1 day (24 hours or 24*4 15T)) we create rolling windows of this size for the val/test sets. #### `time-series-forecasting` ##### `univariate-time-series-forecasting` The univariate time series forecasting tasks involves learning the future one dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covriates are stored in the `feat_dynamic_real` key of each time series. ##### `multivariate-time-series-forecasting` The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split. ### Languages ## Dataset Structure ### Data Instances A sample from the training set is provided below: ```python { 'start': datetime.datetime(2012, 1, 1, 0, 0), 'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...], 'feat_static_cat': [0], 'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...], 'item_id': 'OT' } ``` ### Data Fields For the univariate regular time series each series has the following keys: * `start`: a datetime of the first entry of each time series in the dataset * `target`: an array[float32] of the actual target values * `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset * `feat_dynamic_real`: optional array of covariate features * `item_id`: a string identifier of each time series in a dataset for reference For the multivariate time series the `target` is a vector of the multivariate dimension for each time point. ### Data Splits The time series data is split into train/val/test set of 12/4/4 months respectively. ## Dataset Creation ### Curation Rationale Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators * [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn) ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ```tex @inproceedings{haoyietal-informer-2021, author = {Haoyi Zhou and Shanghang Zhang and Jieqi Peng and Shuai Zhang and Jianxin Li and Hui Xiong and Wancai Zhang}, title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting}, booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference}, volume = {35}, number = {12}, pages = {11106--11115}, publisher = {{AAAI} Press}, year = {2021}, } ``` ### Contributions Thanks to [@kashif](https://github.com/kashif) for adding this dataset.
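As referenced in the Data Fields section above, here is a minimal sketch of loading one configuration and inspecting the first training series. It assumes the `"h1"` configuration name (only `"m1"` and `"h2"` are shown explicitly in this card) and the field layout from the example instance; it is an illustration, not part of the official release.

```python
from datasets import load_dataset

# Hourly data from the first transformer; the univariate variant keeps the oil
# temperature as `target` and exposes the six load covariates via `feat_dynamic_real`.
ett = load_dataset("ett", "h1", multivariate=False)

series = ett["train"][0]
print(series["item_id"])                 # identifier of the series, e.g. 'OT'
print(series["start"])                   # datetime of the first observation
print(len(series["target"]))             # number of target values in the training window
print(len(series["feat_dynamic_real"]))  # number of covariate series
```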
false
# Dataset Card for "tydiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3726.74 MB - **Size of the generated dataset:** 5812.92 MB - **Total amount of disk used:** 9539.67 MB ### Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### primary_task - **Size of downloaded dataset files:** 1863.37 MB - **Size of the generated dataset:** 5757.59 MB - **Total amount of disk used:** 7620.96 MB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "annotations": { "minimal_answers_end_byte": [-1, -1, -1], "minimal_answers_start_byte": [-1, -1, -1], "passage_answer_candidate_index": [-1, -1, -1], "yes_no_answer": ["NONE", "NONE", "NONE"] }, "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...", "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร", "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...", "language": "thai", "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...", "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..." } ``` ### Data Fields The data fields are the same among all splits. #### primary_task - `passage_answer_candidates`: a dictionary feature containing: - `plaintext_start_byte`: a `int32` feature. - `plaintext_end_byte`: a `int32` feature. - `question_text`: a `string` feature. - `document_title`: a `string` feature. - `language`: a `string` feature. - `annotations`: a dictionary feature containing: - `passage_answer_candidate_index`: a `int32` feature. - `minimal_answers_start_byte`: a `int32` feature. - `minimal_answers_end_byte`: a `int32` feature. - `yes_no_answer`: a `string` feature. - `document_plaintext`: a `string` feature. - `document_url`: a `string` feature. ### Data Splits | name | train | validation | | -------------- | -----: | ---------: | | primary_task | 166916 | 18670 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year = {2020}, journal = {Transactions of the Association for Computational Linguistics} } ``` ``` @inproceedings{ruder-etal-2021-xtreme, title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation", author = "Ruder, Sebastian and Constant, Noah and Botha, Jan and Siddhant, Aditya and Firat, Orhan and Fu, Jinlan and Liu, Pengfei and Hu, Junjie and Garrette, Dan and Neubig, Graham and Johnson, Melvin", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.802", doi = "10.18653/v1/2021.emnlp-main.802", pages = "10215--10245", } ```
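Because the offsets in the `primary_task` annotations are byte offsets into `document_plaintext`, candidate passages and minimal answers are recovered by slicing the UTF-8 encoding of the document rather than the Python string. A minimal sketch, with field accesses following the Data Fields section above (treating the offsets as UTF-8 byte positions is stated here as an assumption):

```python
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "primary_task", split="validation")

example = tydiqa[0]
doc_bytes = example["document_plaintext"].encode("utf-8")

# Slice the first passage candidate using its byte offsets.
start = example["passage_answer_candidates"]["plaintext_start_byte"][0]
end = example["passage_answer_candidates"]["plaintext_end_byte"][0]
first_passage = doc_bytes[start:end].decode("utf-8", errors="replace")

print(example["question_text"])
print(first_passage[:200])
```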
false
# Dataset Card for aeroBERT-NER ## Dataset Description - **Paper:** aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT - **Point of Contact:** archanatikayatray@gmail.com ### Dataset Summary This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme. There are a total of 1432 sentences. The creation of this dataset is aimed at - <br> (1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br> (2) Fine-tuning language models for **token identification** (NER) specific to the aerospace domain <br> This dataset can be used for training or fine-tuning language models for the identification of mentioned Named-Entities in aerospace texts. ## Dataset Structure The dataset is of the format: ``Sentence-Number * WordPiece-Token * NER-tag`` <br> "*" is used as a delimiter to avoid confusion with commas (",") that occur in the text. The following example shows the dataset structure for Sentence #1431. <br> 1431\*the\*O <br> 1431\*airplane\*B-SYS <br> 1431\*takeoff\*O <br> 1431\*performance\*O <br> 1431\*must\*O <br> 1431\*be\*O <br> 1431\*determined\*O <br> 1431\*for\*O <br> 1431\*climb\*O <br> 1431\*gradients\*O <br> 1431\*.\*O <br> ## Dataset Creation ### Source Data Two types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: <br> (1) general aerospace texts such as publications by the National Academy of Space Studies Board, and <br> (2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus. <br> ### Importing dataset into Python environment Use the following code chunk to import the dataset into Python environment as a DataFrame. ``` from datasets import load_dataset import pandas as pd dataset = load_dataset("archanatikayatray/aeroBERT-NER") #Converting the dataset into a pandas DataFrame dataset = pd.DataFrame(dataset["train"]["text"]) dataset = dataset[0].str.split('*', expand = True) #Getting the headers from the first row header = dataset.iloc[0] #Excluding the first row since it contains the headers dataset = dataset[1:] #Assigning the header to the DataFrame dataset.columns = header #Viewing the last 10 rows of the annotated dataset dataset.tail(10) ``` ### Annotations #### Annotation process A Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset. 
**B** - Beginning of entity <br> **I** - Inside an entity <br> **O** - Outside an entity <br> | Category | NER Tags | Example | | :----: | :----: | :----: | | System | B-SYS, I-SYS | exhaust heat exchangers, powerplant, auxiliary power unit | | Value | B-VAL, I-VAL | 1.2 percent, 400 feet, 10 to 19 passengers | | Date time | B-DATETIME, I-DATETIME | 2013, 2019, May 11, 1991 | | Organization | B-ORG, I-ORG | DOD, Ames Research Center, NOAA | | Resource | B-RES, I-RES | Section 25-341, Sections 25-173 through 25-177, Part 23 subpart B | The distribution of the various entities in the corpus is shown below: <br> | NER Tag | Description | Count | | :----: | :----: | :----: | | O | Tokens that are not identified as any NE | 37686 | | B-SYS | Beginning of a system NE | 1915 | | I-SYS | Inside a system NE | 1104 | | B-VAL | Beginning of a value NE | 659 | | I-VAL | Inside a value NE | 507 | | B-DATETIME | Beginning of a date time NE | 147 | | I-DATETIME | Inside a date time NE | 63 | | B-ORG | Beginning of an organization NE | 302 | | I-ORG | Inside an organization NE | 227 | | B-RES | Beginning of a resource NE | 390 | | I-RES | Inside a resource NE | 1033 | ### Limitations (1) The dataset is imbalanced, as natural language is (not every word is a Named-Entity). Hence, using ``Accuracy`` as a metric for model performance is NOT a good idea. The use of Precision, Recall, and F1 scores is suggested for model performance evaluation. (2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment (a minimal splitting sketch is given after the citations below). Please refer to the Appendix of the paper for information on the test set. ### Citation Information ``` @Article{aeroBERT-NER, AUTHOR = {Tikayat Ray, Archana and Pinon Fischer, Olivia J. and Mavris, Dimitri N. and White, Ryan T. and Cole, Bjorn F.}, TITLE = {aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT}, JOURNAL = {AIAA SCITECH 2023 Forum}, YEAR = {2023}, URL = {https://arc.aiaa.org/doi/10.2514/6.2023-2583}, DOI = {10.2514/6.2023-2583} } @phdthesis{tikayatray_thesis, author = {Tikayat Ray, Archana}, title = {Standardization of Engineering Requirements Using Large Language Models}, school = {Georgia Institute of Technology}, year = {2023}, doi = {10.13140/RG.2.2.17792.40961}, URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04} } ```
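As a complement to limitation (2) above, here is one minimal way to create sentence-level train/validation/test splits from the DataFrame produced by the import snippet earlier in this card. The 80/10/10 proportions and the use of the first (sentence-number) column for grouping are illustrative assumptions, not part of the official release.

```python
import numpy as np

# `dataset` is the token-level DataFrame from the import example above,
# with one row per word-piece token and the sentence number in the first column.
sentence_ids = dataset.iloc[:, 0].unique()

rng = np.random.default_rng(42)
rng.shuffle(sentence_ids)

n = len(sentence_ids)
train_ids = set(sentence_ids[: int(0.8 * n)])
val_ids = set(sentence_ids[int(0.8 * n): int(0.9 * n)])
test_ids = set(sentence_ids[int(0.9 * n):])

# Keep whole sentences together so no sentence is split across partitions.
train_df = dataset[dataset.iloc[:, 0].isin(train_ids)]
val_df = dataset[dataset.iloc[:, 0].isin(val_ids)]
test_df = dataset[dataset.iloc[:, 0].isin(test_ids)]
print(len(train_df), len(val_df), len(test_df))
```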
false
# Dataset Card for the High-Level Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description The High-Level (HL) dataset aligns **object-centric descriptions** from [COCO](https://arxiv.org/pdf/1405.0312.pdf) with **high-level descriptions** crowdsourced along 3 axes: **_scene_, _action_, _rationale_**. The HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~75K object-centric captions from COCO. Each axis is collected by asking the following 3 questions: 1) Where is the picture taken? 2) What is the subject doing? 3) Why is the subject doing it? **The high-level descriptions capture the human interpretations of the images**. These interpretations contain abstract concepts not directly linked to physical objects. Each high-level description is provided with a _confidence score_, crowdsourced by an independent worker, measuring the extent to which the high-level description is likely given the corresponding image, question, and caption. The higher the score, the closer the high-level caption is to commonsense (on a Likert scale from 1-5). - **🗃️ Repository:** [github.com/michelecafagna26/HL-dataset](https://github.com/michelecafagna26/HL-dataset) - **📜 Paper:** [HL Dataset: Grounding High-Level Linguistic Concepts in Vision](https://arxiv.org/pdf/2302.12189.pdf) - **🧭 Spaces:** [Dataset explorer](https://huggingface.co/spaces/michelecafagna26/High-Level-Dataset-explorer) - **🖊️ Contact:** michele.cafagna@um.edu.mt ### Supported Tasks - image captioning - visual question answering - multimodal text-scoring - zero-shot evaluation ### Languages English ## Dataset Structure The dataset is provided with images from COCO and two metadata jsonl files containing the annotations. ### Data Instances An instance looks like this: ```json { "file_name": "COCO_train2014_000000138878.jpg", "captions": { "scene": [ "in a car", "the picture is taken in a car", "in an office." ], "action": [ "posing for a photo", "the person is posing for a photo", "he's sitting in an armchair." ], "rationale": [ "to have a picture of himself", "he wants to share it with his friends", "he's working and took a professional photo." ], "object": [ "A man sitting in a car while wearing a shirt and tie.", "A man in a car wearing a dress shirt and tie.", "a man in glasses is wearing a tie", "Man sitting in the car seat with button up and tie", "A man in glasses and a tie is near a window." 
] }, "confidence": { "scene": [ 5, 5, 4 ], "action": [ 5, 5, 4 ], "rationale": [ 5, 5, 4 ] }, "purity": { "scene": [ -1.1760284900665283, -1.0889461040496826, -1.442818284034729 ], "action": [ -1.0115827322006226, -0.5917857885360718, -1.6931917667388916 ], "rationale": [ -1.0546956062316895, -0.9740906357765198, -1.2204363346099854 ] }, "diversity": { "scene": 25.965358893403383, "action": 32.713305568898775, "rationale": 2.658757840479801 } } ``` ### Data Fields - ```file_name```: original COCO filename - ```captions```: Dict containing all the captions for the image. Each axis can be accessed with the axis name and it contains a list of captions. - ```confidence```: Dict containing the captions confidence scores. Each axis can be accessed with the axis name and it contains a list of captions. Confidence scores are not provided for the _object_ axis (COCO captions).t - ```purity score```: Dict containing the captions purity scores. The purity score measures the semantic similarity of the captions within the same axis (Bleurt-based). - ```diversity score```: Dict containing the captions diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based). ### Data Splits There are 14997 images and 134973 high-level captions split into: - Train-val: 13498 images and 121482 high-level captions - Test: 1499 images and 13491 high-level captions ## Dataset Creation The dataset has been crowdsourced on Amazon Mechanical Turk. From the paper: >We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to > ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing > at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease >the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis. ### Curation Rationale From the paper: >In this work, we tackle the issue of **grounding high-level linguistic concepts in the visual modality**, proposing the High-Level (HL) Dataset: a V\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_. The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions >used in current V\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions >from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects. ### Source Data - Images: COCO - object axis annotations: COCO - scene, action, rationale annotations: crowdsourced - confidence scores: crowdsourced - purity score and diversity score: automatically computed #### Annotation process From the paper: >**Pilot:** We run a pilot study with the double goal of collecting feedback and defining the task instructions. >With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform. >We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the >annotation in bulk. 
The final annotation form is shown in Appendix D. >***Procedure:*** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_ > i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use >their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover, >differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities >in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported >in Figure 1. For details regarding the annotation costs see Appendix A. #### Who are the annotators? Turkers from Amazon Mechanical Turk ### Personal and Sensitive Information There is no personal or sensitive information ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations From the paper: >**Quantitying grammatical errors:** We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators. > The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error. >The most common errors reported by the annotators are: >- Misuse of prepositions >- Wrong verb conjugation >- Pronoun omissions >In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them. >We observe that 22.5\% of the sample has been edited and only 5\% with a Levenshtein distance greater than 10. This suggests a reasonable >level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance >distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement >(alpha = 0.507, (Krippendorff, 2018) computed over the shared sample. ### Dataset Curators Michele Cafagna ### Licensing Information The Images and the object-centric captions follow the [COCO terms of Use](https://cocodataset.org/#termsofuse) The remaining annotations are licensed under Apache-2.0 license. ### Citation Information ```BibTeX @inproceedings{Cafagna2023HLDG, title={HL Dataset: Grounding High-Level Linguistic Concepts in Vision}, author={Michele Cafagna and Kees van Deemter and Albert Gatt}, year={2023} } ```
false
# Dataset Card for Multilingual Complex Named Entity Recognition (MultiCoNER) ## Dataset Description - **Homepage:** https://multiconer.github.io - **Repository:** - **Paper:** - **Leaderboard:** https://multiconer.github.io/results, https://codalab.lisn.upsaclay.fr/competitions/10025 - **Point of Contact:** https://multiconer.github.io/organizers ### Dataset Summary The tagset of MultiCoNER is a fine-grained tagset. The fine to coarse level mapping of the tags are as follows: * Location (LOC) : Facility, OtherLOC, HumanSettlement, Station * Creative Work (CW) : VisualWork, MusicalWork, WrittenWork, ArtWork, Software * Group (GRP) : MusicalGRP, PublicCORP, PrivateCORP, AerospaceManufacturer, SportsGRP, CarManufacturer, ORG * Person (PER) : Scientist, Artist, Athlete, Politician, Cleric, SportsManager, OtherPER * Product (PROD) : Clothing, Vehicle, Food, Drink, OtherPROD * Medical (MED) : Medication/Vaccine, MedicalProcedure, AnatomicalStructure, Symptom, Disease ### Supported Tasks and Leaderboards The final leaderboard of the shared task is available <a href="https://multiconer.github.io/results" target="_blank">here</a>. ### Languages Supported languages are Bangla, Chinese, English, Spanish, Farsi, French, German, Hindi, Italian, Portuguese, Swedish, Ukrainian. ## Dataset Structure The dataset follows CoNLL format. ### Data Instances Here are some examples in different languages: * Bangla: [লিটল মিক্স | MusicalGrp] এ যোগদানের আগে তিনি [পিৎজা হাট | ORG] এ ওয়েট্রেস হিসাবে কাজ করেছিলেন। * Chinese: 它的纤维穿过 [锁骨 | AnatomicalStructure] 并沿颈部侧面倾斜向上和内侧. * English: [wes anderson | Artist]'s film [the grand budapest hotel | VisualWork] opened the festival . * Farsi: است] ناگویا |HumanSettlement] مرکزاین استان شهر * French: l [amiral de coligny | Politician] réussit à s y glisser . * German: in [frühgeborenes | Disease] führt dies zu [irds | Symptom] . * Hindi: १७९६ में उन्हें [शाही स्वीडिश विज्ञान अकादमी | Facility] का सदस्य चुना गया। * Italian: è conservato nel [rijksmuseum | Facility] di [amsterdam | HumanSettlement] . * Portuguese: também é utilizado para se fazer [licor | Drink] e [vinhos | Drink]. * Spanish: fue superado por el [aon center | Facility] de [los ángeles | HumanSettlement] . * Swedish: [tom hamilton | Artist] amerikansk musiker basist i [aerosmith | MusicalGRP] . * Ukrainian: назва альбому походить з роману « [кінець дитинства | WrittenWork] » англійського письменника [артура кларка | Artist] . ### Data Fields The data has two fields. One is the token and another is the label. Here is an example from the English data. 
``` # id f5458a3a-cd23-4df4-8384-4e23fe33a66b domain=en doris _ _ B-Artist day _ _ I-Artist included _ _ O in _ _ O the _ _ O album _ _ O billy _ _ B-MusicalWork rose _ _ I-MusicalWork 's _ _ I-MusicalWork jumbo _ _ I-MusicalWork ``` ### Data Splits Train, Dev, and Test splits are provided ## Dataset Creation TBD ### Licensing Information CC BY 4.0 ### Citation Information ``` @inproceedings{multiconer2-report, title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}}, author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin}, booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)}, year={2023}, publisher={Association for Computational Linguistics}, } @article{multiconer2-data, title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}}, author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin}, year={2023}, } ```
false
false
false
# Dataset Card for SAMSum Corpus (ru) ## Dataset Description The [samsum](https://huggingface.co/datasets/samsum) dataset translated into Russian. ### Notes > Row with ID **13828807** was deleted. ### Links - **Homepage:** https://arxiv.org/abs/1911.12237v2 - **Repository:** https://arxiv.org/abs/1911.12237v2 - **Paper:** https://arxiv.org/abs/1911.12237v2 ### Languages Russian (translated from English [samsum](https://huggingface.co/datasets/samsum) using Google Translate) ## Dataset Structure ### Data Fields - dialogue: text of the dialogue. - summary: human-written summary of the dialogue. - id: unique file id of an example. ### Data Splits - train: 14731 - val: 818 - test: 819 ## Licensing Information Non-commercial licence: CC BY-NC-ND 4.0 ## Citation Information ``` @inproceedings{gliwa-etal-2019-samsum, title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization", author = "Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander", booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-5409", doi = "10.18653/v1/D19-5409", pages = "70--79" } ```
false
Dataset Number = 9002 Data Cleaned = Yes Data Size = 77MB Number of Prompts = 69,885 Number of Columns = 2 [Prompt, Answer]
false
# Dataset Card for bc2gm_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/) - **Repository:** [Github](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BC4CHEMD) - **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4331692/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards * Token Classification * Named Entity Recognition ### Languages - English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `id`: Sentence identifier. - `tokens`: Array of tokens composing a sentence. - `ner_tags`: Array of tags, where `0` indicates no disease mentioned, `1` signals the first token of a disease and `2` the subsequent disease tokens. ### Data Splits ```python DatasetDict({ train: Dataset({ features: ['id', 'tokens', 'ner_tags'], num_rows: 30683 }) validation: Dataset({ features: ['id', 'tokens', 'ner_tags'], num_rows: 30640 }) test: Dataset({ features: ['id', 'tokens', 'ner_tags'], num_rows: 26365 }) }) ``` ## Dataset Creation ### Curation Rationale The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] ### Annotations #### Annotation process We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. #### Who are the annotators? Expert chemistry literature curators ### Personal and Sensitive Information It does not contain this kind of information The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. 
The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. ### Licensing Information Unknown ### Citation Information ```latex @article{Krallinger2015TheCC, title={The CHEMDNER corpus of chemicals and drugs and its annotation principles}, author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia}, journal={Journal of Cheminformatics}, year={2015}, volume={7}, pages={S2 - S2} } ``` ### Contributions Thanks to [@GamalC](https://github.com/GamalC) for uploading this dataset to GitHub.
false
# Dataset Card for "PAQ_pairs" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/facebookresearch/PAQ](https://github.com/facebookresearch/PAQ) - **Repository:** [More Information Needed](https://github.com/facebookresearch/PAQ) - **Paper:** [More Information Needed](https://github.com/facebookresearch/PAQ) - **Point of Contact:** [More Information Needed](https://github.com/facebookresearch/PAQ) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 21 Bytes ### Dataset Summary Pairs questions and answers obtained from Wikipedia. Disclaimer: The team releasing PAQ QA pairs did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". The first sentence is a question and the second an answer; thus, both sentences would be similar. ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/PAQ_pairs") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 64371441 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Data Instances [More Information Needed](https://github.com/facebookresearch/PAQ) ### Data Fields [More Information Needed](https://github.com/facebookresearch/PAQ) ### Data Splits [More Information Needed](https://github.com/facebookresearch/PAQ) ## Dataset Creation [More Information Needed](https://github.com/facebookresearch/PAQ) ### Curation Rationale [More Information Needed](https://github.com/facebookresearch/PAQ) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/facebookresearch/PAQ) #### Who are the source language producers? 
[More Information Needed](https://github.com/facebookresearch/PAQ) ### Annotations #### Annotation process [More Information Needed](https://github.com/facebookresearch/PAQ) #### Who are the annotators? [More Information Needed](https://github.com/facebookresearch/PAQ) ### Personal and Sensitive Information [More Information Needed](https://github.com/facebookresearch/PAQ) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/facebookresearch/PAQ) ### Discussion of Biases [More Information Needed](https://github.com/facebookresearch/PAQ) ### Other Known Limitations [More Information Needed](https://github.com/facebookresearch/PAQ) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/facebookresearch/PAQ) ### Licensing Information The PAQ QA-pairs and metadata is licensed under [CC-BY-SA](https://creativecommons.org/licenses/by-sa/3.0/). Other data is licensed according to the accompanying license files. ### Citation Information ``` @article{lewis2021paq, title={PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them}, author={Patrick Lewis and Yuxiang Wu and Linqing Liu and Pasquale Minervini and Heinrich Küttler and Aleksandra Piktus and Pontus Stenetorp and Sebastian Riedel}, year={2021}, eprint={2102.07033}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@patrick-s-h-lewis](https://github.com/patrick-s-h-lewis) for adding this dataset.
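Since the card notes that this dataset is useful for training Sentence Transformers models, here is a rough sketch of how such training on the question-answer pairs might look. The base model name, the small training slice, and the hyperparameters are arbitrary illustrative choices, and streaming or subsampling is advisable given the 64M rows.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from datasets import load_dataset

# Small slice for illustration; the full split has ~64M pairs.
dataset = load_dataset("embedding-data/PAQ_pairs", split="train[:10000]")

# Each row is {"set": [question, answer]}; wrap the pairs as InputExamples.
examples = [InputExample(texts=row["set"]) for row in dataset]

model = SentenceTransformer("distilbert-base-uncased")  # any base encoder works here
loader = DataLoader(examples, batch_size=32, shuffle=True)
loss = losses.MultipleNegativesRankingLoss(model)  # a standard choice for (query, positive) pairs

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```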
false
**NOTE:** This is a machine translated version of the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset. # Dataset Card for Alpaca-Cleaned - **Repository:** https://github.com/gururise/AlpacaDataCleaned ## Dataset Description This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset: 1. **Hallucinations:** Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer. ``` "instruction":"Summarize the given article in 200 Words.", "input": "https://www.bbc.com/news/world-51461830", "output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state." "instruction": "Classify the following shape according to simple geometry.", "input": "<Picture Attached>", "output": "Triangle" "instruction": "Given a recent news article, write a headline for it.", "input": "https://www.news.example.com/article-1", "output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\"" ``` 2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason. ``` "instruction": "Should I invest in stocks?", "input": "", "output": "It depends on your individual situation. Investing in stocks carries a certain amount of risk but can also offer potential rewards. Make sure to research the different types of stocks, understand the risks and rewards, and develop a plan that works for you.\n##\n18. Instruction: Identify the noun in the following sentence:\n18. Input:\nThe large apple was delicious.\n18. Output:\nApple" ``` 3. **Empty outputs:** Some entries in the original dataset had empty outputs. ``` "instruction": "Calculate the area of the following shape in square centimeters.", "input": "rectangle of size 4 cm x 5 cm", "output": "" ``` 4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code. 5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible. ``` "instruction": "Create a graphic or logo that visually represents the word \"courage\".", "input": "", "output": "<No Output>" ``` 6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs. 7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty. ``` "input":"<no input>" "input":"No input" "input":"noinput" "input":"<noinput>" ``` 8. 
**Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers. ``` "instruction": "Calculate the median of the following data set.", "input": "1, 2, 4, 5, 8, 9", "output": "5" "instruction": "Convert 25m to km.", "input": "", "output": "25km" ``` 9. **Non-Sensical/Unclear instructions:** Many instructions are unclear, we try to clarify (or re-write) if instructions are non-sensical. Instructions that are slightly unclear, but where one could deduce the meaning are not altered. ``` "instruction": "Freeze the following sample of yogurt for 10 minutes.", "input": "Yogurt sample", "output": "<noinput>" "instruction": "Increase the font size to 12 points.", "input": "", "output": "The font size has been increased to 12 points." ``` 10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters. ### Original Alpaca Dataset Summary Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better. The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications: - The `text-davinci-003` engine to generate the instruction data instead of `davinci`. - A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`. - Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation. - The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions. - Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct. This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500). In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl). ### Supported Tasks and Leaderboards The Alpaca dataset designed for instruction training pretrained language models. ### Languages The data in Alpaca are in English (BCP-47 en). ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "instruction": "Create a classification task by clustering the given list of items.", "input": "Apples, oranges, bananas, strawberries, pineapples", "output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples", "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples", } ``` ### Data Fields The data fields are as follows: * `instruction`: describes the task the model should perform. Each of the 52K instructions is unique. * `input`: optional context or input for the task. 
For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input. * `output`: the answer to the instruction as generated by `text-davinci-003`. * `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models. ### Data Splits | | train | |---------------|------:| | alpaca | 52002 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset Excerpt the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset: > We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models. 
### Discussion of Biases [More Information Needed] ### Other Known Limitations The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). ### Citation Information ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ``` ### Contributions [More Information Needed]
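To make the relationship between the `instruction`, `input`, `output` and `text` fields concrete, here is a small sketch that rebuilds the `text` field from the other three. The with-input template string is taken from the example instance above; the no-input variant is an assumption based on the Stanford Alpaca release and may differ slightly.

```python
def build_prompt(instruction: str, input_text: str, output: str) -> str:
    """Reconstruct the `text` field from instruction/input/output."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n{output}"
        )
    # Variant without an input (assumed wording, following the Stanford Alpaca template).
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n{output}"
    )
```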
true
# Dataset Card for Swiss Leading Decisions ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Swiss Leading Decisions is a multilingual, diachronic dataset of 21K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text classification task. We also provide additional metadata such as the publication year, the law area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP. ### Supported Tasks and Leaderboards Swiss Leading Decisions is used for a text classification task. ### Languages Switzerland has four official languages, three of which (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings. | Language | Subset | Number of Documents | |------------|------------|----------------------| | German | **de** | 14K | | French | **fr** | 6K | | Italian | **it** | 1K | ## Dataset Structure ### Data Fields ``` decision_id: (str) a unique identifier for the document language: (int64) one of (0,1,2) chamber_id: (int64) id to identify the chamber file_id: (int64) id to identify the file date: (int64) topic: (string) year: (float64) language: (string) facts: (string) text section of the full text facts_num_tokens_bert: (int64) facts_num_tokens_spacy: (int64) considerations: (string) text section of the full text considerations_num_tokens_bert: (int64) considerations_num_tokens_spacy: (int64) rulings: (string) text section of the full text rulings_num_tokens_bert: (int64) rulings_num_tokens_spacy: (int64) chamber: (string) court: (string) canton: (string) region: (string) file_name: (string) html_url: (string) pdf_url: (string) file_number: (string) ``` ### Data Instances [More Information Needed] ### Data Splits ## Dataset Creation ### Curation Rationale The dataset was created by Stern (2023). ### Source Data #### Initial Data Collection and Normalization The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process #### Who are the annotators? 
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2002-2022 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information *Visu, Ronja, Joel* *Title: Blabliblablu* *Name of conference* ``` cit ``` ### Contributions Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset.
false
# Dataset Card for "final_training_set_v1" Finetuning datasets for [WangChanGLM](https://github.com/pythainlp/wangchanglm) sourced from [LAION OIG chip2 and infill_dbpedia](https://huggingface.co/datasets/laion/OIG) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [DataBricks Dolly v2](https://github.com/databrickslabs/dolly) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [OpenAI TL;DR](https://github.com/openai/summarize-from-feedback) ([MIT](https://opensource.org/license/mit/)), and [Hello-SimpleAI HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) ([CC-BY SA](https://creativecommons.org/licenses/by-sa/4.0/))
false
# Dataset Card for "news-unmasked" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# DiffusionDB-Pixelart ## Table of Contents - [DiffusionDB](#diffusiondb) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Subset](#subset) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Metadata](#dataset-metadata) - [Metadata Schema](#metadata-schema) - [Data Splits](#data-splits) - [Loading Data Subsets](#loading-data-subsets) - [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb) - **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb) - **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb) - **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896) ### Dataset Summary **This is a subset of the DiffusionDB 2M dataset which has been turned into pixel-style art.** DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users. DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb). ### Supported Tasks and Leaderboards The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. ### Languages The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian. ### Subset DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. The pixelated version of the data was taken from the DiffusionDB 2M and has 2000 examples only. |Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table| |:--|--:|--:|--:|--:|--:| |DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`| Images in DiffusionDB-pixelart are stored in `png` format. ## Dataset Structure We use a modularized file structure to distribute DiffusionDB. 
The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. ```bash # DiffusionDB 2k ./ ├── images │ ├── part-000001 │ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png │ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png │ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png │ │ ├── [...] │ │ └── part-000001.json │ ├── part-000002 │ ├── part-000003 │ ├── [...] │ └── part-002000 └── metadata.parquet ``` These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB-pixelart). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters. ### Data Instances For example, below is the image of `ec9b5e2c-028e-48ac-8857-a52814fd2a06.png` and its key-value pair in `part-000001.json`. <img width="300" src="https://datasets-server.huggingface.co/assets/jainr3/diffusiondb-pixelart/--/2k_all/train/0/image/image.png"> ```json { "ec9b5e2c-028e-48ac-8857-a52814fd2a06.png": { "p": "doom eternal, game concept art, veins and worms, muscular, crustacean exoskeleton, chiroptera head, chiroptera ears, mecha, ferocious, fierce, hyperrealism, fine details, artstation, cgsociety, zbrush, no background ", "se": 3312523387, "c": 7.0, "st": 50, "sa": "k_euler" }, } ``` ### Data Fields - key: Unique image name - `p`: Text ### Dataset Metadata To help you easily access prompts and other attributes of images without downloading all the Zip files, we include a metadata table `metadata.parquet` for DiffusionDB-pixelart. Two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table. Below are three random rows from `metadata.parquet`. 
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw | |:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:| | 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 | | a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 | | 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 | #### Metadata Schema `metadata.parquet` schema: |Column|Type|Description| |:---|:---|:---| |`image_name`|`string`|Image UUID filename.| |`text`|`string`|The text prompt used to generate this image.| > **Warning** > Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects. <img src="https://i.imgur.com/1RiGAXL.png" width="100%"> ### Data Splits For DiffusionDB-pixelart, we split 2k images into folders where each folder contains 1,000 images and a JSON file. ### Loading Data Subsets DiffusionDB is large! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary. #### Method 1: Using Hugging Face Datasets Loader You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train). 
```python import numpy as np from datasets import load_dataset # Load the dataset with the `2k_random_1k` subset dataset = load_dataset('jainr3/diffusiondb-pixelart', '2k_random_1k') ``` ## Dataset Creation ### Curation Rationale Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos. However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt. Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images. To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs. ### Source Data #### Initial Data Collection and Normalization We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information. #### Who are the source language producers? The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion). ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The authors removed the discord usernames from the dataset. We decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop better understanding of large text-to-image generative models. The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. 
It should be noted that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB. ### Discussion of Biases The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to run Stable Diffusion before its public release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images. ### Other Known Limitations **Generalizability.** Previous research has shown that a prompt that works well on one generative model might not give the optimal result when used in other models. Therefore, different models may require users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models. ## Additional Information ### Dataset Curators DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/). ### Licensing Information The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/). The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE). ### Citation Information ```bibtex @article{wangDiffusionDBLargescalePrompt2022, title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models}, author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng}, year = {2022}, journal = {arXiv:2210.14896 [cs]}, url = {https://arxiv.org/abs/2210.14896} } ``` ### Contributions If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact the original author [Jay Wang](https://zijie.wang).
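As a usage note for the metadata table and the NSFW warning above, here is a minimal filtering sketch. It assumes a local copy of `metadata.parquet`, that `pandas` with a Parquet engine (e.g., `pyarrow`) is installed, and that the column names match the sample rows shown earlier; the threshold value is illustrative, not a recommendation.

```python
import pandas as pd

# Read the prompt metadata table (local copy of metadata.parquet assumed).
metadata = pd.read_parquet("metadata.parquet")

# Keep only rows whose image NSFW score falls below a chosen threshold,
# as suggested in the warning above. 0.2 is an arbitrary example value.
safe = metadata[metadata["image_nsfw"] < 0.2]

print(len(safe), "of", len(metadata), "images pass the filter")
print(safe[["image_name", "prompt"]].head())
```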
true
https://github.com/wangcunxiang/Sen-Making-and-Explanation ``` @inproceedings{wang-etal-2019-make, title = "Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation", author = "Wang, Cunxiang and Liang, Shuailong and Zhang, Yue and Li, Xiaonan and Gao, Tian", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1393", pages = "4020--4026", abstract = "Introducing common sense to natural language understanding systems has received increasing research attention. It remains a fundamental question on how to evaluate whether a system has the sense-making capability. Existing benchmarks measure common sense knowledge indirectly or without reasoning. In this paper, we release a benchmark to directly test whether a system can differentiate natural language statements that make sense from those that do not make sense. In addition, a system is asked to identify the most crucial reason why a statement does not make sense. We evaluate models trained over large-scale language modeling tasks as well as human performance, showing that there are different challenges for system sense-making.", } ```
false
# Dataset Card for all_combined_bengali_252K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/OdiaGenAI** - **Point of Contact: Shantipriya Parida and Sambit Sekhar** ### Dataset Summary This dataset is a mix of Bengali instruction sets translated from open-source instruction sets: * Dolly, * Alpaca, * ChatDoctor, * Roleplay, * GSM. The dataset provides Bengali instruction, input, and output strings. ### Supported Tasks and Leaderboards Large Language Model (LLM) ### Languages Bengali ## Dataset Structure JSON ### Data Fields output (string) data_source (string) instruction (string) input (string) ### Licensing Information This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg ### Citation Information If you find this repository useful, please consider giving 👏 and citing: ``` @misc{OdiaGenAI, author = {Shantipriya Parida and Sambit Sekhar and Guneet Singh Kohli and Arghyadeep Sen and Shashikanta Sahoo}, title = {Bengali Instruction Set}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/OdiaGenAI}}, } ``` ### Contributions - Shantipriya Parida - Sambit Sekhar - Guneet Singh Kohli - Arghyadeep Sen - Shashikanta Sahoo
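To illustrate the fields listed above, here is a minimal loading sketch. The Hugging Face dataset identifier and split name are assumptions (the card only gives the OdiaGenAI organization); substitute the actual repository name when using it.

```python
from datasets import load_dataset

# Hypothetical identifier: replace with the actual dataset name under the
# OdiaGenAI organization on the Hugging Face Hub.
ds = load_dataset("OdiaGenAI/all_combined_bengali_252K", split="train")

example = ds[0]
# Fields listed under "Data Fields" above.
print(example["instruction"])
print(example["input"])
print(example["output"])
print(example["data_source"])
```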
true
# Depression: Reddit Dataset (Cleaned) **~7000 Cleaned Reddit Labelled Dataset on Depression** ### Summary - The dataset provided is a Depression: Reddit Dataset (Cleaned) containing approximately 7,000 labeled instances. It consists of two main features: 'text' and 'label'. The 'text' feature contains the text data from Reddit posts related to depression, while the 'label' feature indicates whether a post is classified as depression or not. - The raw data for this dataset was collected by web scraping Subreddits. To ensure the data's quality and usefulness, multiple natural language processing (NLP) techniques were applied to clean the data. The dataset exclusively consists of English-language posts, and its primary purpose is to facilitate mental health classification tasks. - This dataset can be employed in various natural language processing tasks related to depression, such as sentiment analysis, topic modeling, text classification, or any other NLP task that requires labeled data pertaining to depression from Reddit. - Extracted from Kaggle: https://www.kaggle.com/datasets/infamouscoder/depression-reddit-cleaned
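Since the card describes a binary `text`/`label` layout, a simple classification baseline can be sketched as follows. This is a minimal example under stated assumptions: the CSV file name is hypothetical (use whatever the Kaggle download provides), and `pandas` and `scikit-learn` are assumed to be installed.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical file name for the cleaned Kaggle export with 'text' and 'label' columns.
df = pd.read_csv("depression_reddit_cleaned.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# TF-IDF features plus logistic regression as a quick baseline classifier.
vectorizer = TfidfVectorizer(max_features=20000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```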
false
# Dataset Card for CUAD This is a modified version of original [CUAD](https://huggingface.co/datasets/cuad/blob/main/README.md) which trims the question to its label form. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad) - **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/) - **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268) - **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org) ### Dataset Summary Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [44], "text": ['DISTRIBUTOR AGREEMENT'] }, "context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...', "id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0", "question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract", "title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT" } ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits This dataset is split into train/test set. 
Number of samples in each set is given below: | | Train | Test | | ----- | ------ | ---- | | CUAD | 22450 | 4182 | ## Dataset Creation ### Curation Rationale A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring. Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies. To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD, the Contract Understanding Atticus Dataset, is introduced. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of the 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack. ### Source Data #### Initial Data Collection and Normalization The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet. Type of Contracts: # of Docs Affiliate Agreement: 10 Agency Agreement: 13 Collaboration/Cooperation Agreement: 26 Co-Branding Agreement: 22 Consulting Agreement: 11 Development Agreement: 29 Distributor Agreement: 32 Endorsement Agreement: 24 Franchise Agreement: 15 Hosting Agreement: 20 IP Agreement: 17 Joint Venture Agreement: 23 License Agreement: 33 Maintenance Agreement: 34 Manufacturing Agreement: 17 Marketing Agreement: 17 Non-Compete/No-Solicit/Non-Disparagement Agreement: 3 Outsourcing Agreement: 18 Promotion Agreement: 12 Reseller Agreement: 12 Service Agreement: 28 Sponsorship Agreement: 31 Supply Agreement: 18 Strategic Alliance Agreement: 32 Transportation Agreement: 13 TOTAL: 510 #### Who are the source language producers? The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC).
Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD. ### Annotations #### Annotation process The labeling process included multiple steps to ensure accuracy: 1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours. 2. Law Student Label: law students conducted manual contract review and labeling in eBrevia. 3. Key Word Search: law students conducted keyword searches in eBrevia to capture additional categories that had been missed during the “Student Label” step. 4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category, and highlighted clauses that they believed were mislabeled. 5. Attorney Review: experienced attorneys reviewed the category-by-category reports with the students' comments, provided feedback, and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly. 6. eBrevia Extras Review: attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels. 7. Final Report: the final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer. #### Who are the annotators? Answered in the section above. ### Personal and Sensitive Information Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*), underscores (\_\_\_), or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”. For any categories that require a “Yes/No” answer, annotators included full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators selected text for the full sentence, under the instruction of “from period to period”. For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”. Some sentences in the files include confidential legends that are not part of the contracts. An example of such a confidential legend is as follows: THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category. To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.” Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Attorney Advisors Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu Law Student Leaders John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran Law Student Contributors Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin Technical Advisors & Contributors Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen ### Licensing Information CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use. The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR. Privacy Policy & Disclaimers The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved. The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer. 
### Citation Information ``` @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding the original CUAD dataset.
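To make the span format described in the Data Fields section concrete, here is a minimal access sketch. The identifier `cuad` points at the original CUAD dataset on the Hugging Face Hub; for this trimmed-question variant, substitute this dataset's repository name.

```python
from datasets import load_dataset

# Original CUAD identifier shown for illustration; swap in this variant's repository name.
cuad = load_dataset("cuad", split="train")

example = cuad[0]
answers = example["answers"]

# Recover each labeled clause from the contract text using the answer_start offsets.
for text, start in zip(answers["text"], answers["answer_start"]):
    span = example["context"][start:start + len(text)]
    print(example["question"], "->", span)
```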
false
# Dataset Card for Nouns auto-captioned _Dataset used to train Nouns text to image model_ Automatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated! For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided. ## Citation If you use this dataset, please cite it as: ``` @misc{piedrafita2022nouns, author = {Piedrafita, Miguel}, title = {Nouns auto-captioned}, year={2022}, howpublished= {\url{https://huggingface.co/datasets/m1guelpf/nouns/}} } ```
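To illustrate the `image` and `text` keys described above, here is a minimal loading sketch; it assumes the `datasets` library is installed, and the identifier follows the URL given in the citation.

```python
from datasets import load_dataset

nouns = load_dataset("m1guelpf/nouns", split="train")

example = nouns[0]
print(example["image"].size)  # PIL image of varying size
print(example["text"])        # the auto-generated caption
```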
true
# Dataset Card for JSNLI [![CI](https://github.com/shunk031/huggingface-datasets_jsnli/actions/workflows/ci.yaml/badge.svg)](https://github.com/shunk031/huggingface-datasets_jsnli/actions/workflows/ci.yaml) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - Homepage: https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88 - Repository: https://github.com/shunk031/huggingface-datasets_jsnli ### Dataset Summary From the [Japanese SNLI (JSNLI) Dataset - KUROHASHI-CHU-MURAWAKI LAB](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88 ) page: > This dataset is a Japanese translation of [SNLI](https://nlp.stanford.edu/projects/snli/), a standard benchmark for natural language inference (NLI). ### Dataset Preprocessing ### Supported Tasks and Leaderboards ### Languages The primary language of all annotations is Japanese. ## Dataset Structure > The dataset is distributed in TSV format, where each line represents a triple of label, premise, and hypothesis. The premise and hypothesis are segmented into morphemes by JUMAN++. An example is given below. ``` entailment 自転車 で 2 人 の 男性 が レース で 競い ます 。 人々 は 自転車 に 乗って います 。 ``` ### Data Instances ```python from datasets import load_dataset load_dataset("shunk031/jsnli", "without-filtering") ``` ```json { 'label': 'neutral', 'premise': 'ガレージ で 、 壁 に ナイフ を 投げる 男 。', 'hypothesis': '男 は 魔法 の ショー の ため に ナイフ を 投げる 行為 を 練習 して い ます 。' } ``` ### Data Fields ### Data Splits | name | train | validation | |-------------------|--------:|-----------:| | without-filtering | 548,014 | 3,916 | | with-filtering | 533,005 | 3,916 | ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process > The dataset was built by applying machine translation to SNLI and then filtering: the evaluation data were filtered precisely via crowdsourcing, and the training data were filtered automatically by computer. > Two versions of the dataset are released: one whose training data are not filtered at all, and one with the filtering that achieved the highest accuracy. The unfiltered training data contain 548,014 pairs, the filtered training data contain 533,005 pairs, and the evaluation data contain 3,916 pairs. See the references for details. #### Who are the annotators?
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information > Questions about this dataset should be directed to nl-resource (at) nlp.ist.i.kyoto-u.ac.jp. ### Dataset Curators ### Licensing Information > This dataset is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/), the same license as SNLI. See the references for details on SNLI. ### Citation Information ```bibtex @article{吉越卓見 2020 機械翻訳を用いた自然言語推論データセットの多言語化, title={機械翻訳を用いた自然言語推論データセットの多言語化}, author={吉越卓見 and 河原大輔 and 黒橋禎夫 and others}, journal={研究報告自然言語処理 (NL)}, volume={2020}, number={6}, pages={1--8}, year={2020} } ``` ```bibtex @inproceedings{bowman2015large, title={A large annotated corpus for learning natural language inference}, author={Bowman, Samuel and Angeli, Gabor and Potts, Christopher and Manning, Christopher D}, booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing}, pages={632--642}, year={2015} } ``` ```bibtex @article{young2014image, title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions}, author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia}, journal={Transactions of the Association for Computational Linguistics}, volume={2}, pages={67--78}, year={2014}, publisher={MIT Press} } ``` ### Contributions We sincerely thank 吉越 卓見, 河原 大輔, and 黒橋 禎夫 for making the JSNLI dataset publicly available.
false
# Dataset Card for `BanglaRQA` ## Table of Contents - [Dataset Card for `BanglaRQA`](#dataset-card-for-BanglaRQA) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Usage](#usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** [https://github.com/sartajekram419/BanglaRQA](https://github.com/sartajekram419/BanglaRQA) - **Paper:** [BanglaRQA: A Benchmark Dataset for Under-resourced Bangla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types](https://aclanthology.org/2022.findings-emnlp.186) ### Dataset Summary This is a human-annotated Bangla Question Answering (QA) dataset with diverse question-answer types. ### Languages * `Bangla` ### Usage ```python from datasets import load_dataset dataset = load_dataset("sartajekram/BanglaRQA") ``` ## Dataset Structure ### Data Instances One example from the dataset is given below in JSON format. ``` { 'passage_id': 'bn_wiki_2977', 'title': 'ফাজিল পরীক্ষা', 'context': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।\n\n১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসাসমূহে জাতীয় শিক্ষাক্রম ও বহুমুখী পাঠ্যসূচি প্রবর্তিত করা হয়। ১৯৮০ সালে অনুষ্ঠিত ফাজিল পরীক্ষায় এই পাঠ্যসুচী কার্যকর হয়। এই শিক্ষা কমিশন অনুসারে ফাজিল শ্রেণীতে ইসলামি শিক্ষার পাশাপাশি সাধারণ পাঠ্যসূচী অন্তর্ভুক্ত করে ফাজিল পরীক্ষাকে সাধারণ উচ্চ মাধ্যমিক এইচ এস সির সমমান ঘোষণা করা হয়।\n\n১৯৭৮ সালে অধ্যাপক মুস্তফা বিন কাসিমের নেতৃত্বে সিনিয়র মাদ্রাসা শিক্ষা ব্যবস্থা কমিটি গঠিত হয়। এই কমিটির নির্দেশনায় ১৯৮৪ সালে সাধারণ শিক্ষার স্তরের সঙ্গে বাংলাদেশ মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসা শিক্ষা স্তরের সামঞ্জস্য করা হয়। ফাজিল স্তরকে ২ বছর মেয়াদী কোর্সে উন্নিত করে, মোট ১৬ বছর ব্যাপী আলিয়া মাদ্রাসার পূর্ণাঙ্গ আধুনিক শিক্ষা ব্যবস্থা প্রবর্তন করা হয়। এই কমিশনের মাধ্যমেই সরকার ফাজিল পরীক্ষাকে সাধারণ ডিগ্রি মান ঘোষণা করে।', 'question_id': 'bn_wiki_2977_01', 'question_text': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা ?', 'is_answerable': '1', 'question_type': 'confirmation', 'answers': { 'answer_text': ['হ্যাঁ', 'হ্যাঁ '], 'answer_type': ['yes/no', 'yes/no'] }, } ``` ### Data Splits | split |count | |----------|--------| |`train`| 11,912 | |`validation`| 1,484 | |`test`| 1,493 | 
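As a small follow-up to the Usage snippet and the instance shown above, the sketch below counts answerable questions per question type; the field names are taken from the example instance.

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("sartajekram/BanglaRQA", split="train")

# 'is_answerable' and 'question_type' follow the example instance above.
counts = Counter(
    ex["question_type"] for ex in dataset if ex["is_answerable"] == "1"
)
print(counts)
```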
## Additional Information ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use the dataset, please cite the following paper: ``` @inproceedings{ekram-etal-2022-banglarqa, title = "{B}angla{RQA}: A Benchmark Dataset for Under-resourced {B}angla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types", author = "Ekram, Syed Mohammed Sartaj and Rahman, Adham Arik and Altaf, Md. Sajid and Islam, Mohammed Saidul and Rahman, Mehrab Mustafy and Rahman, Md Mezbaur and Hossain, Md Azam and Kamal, Abu Raihan Mostofa", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-emnlp.186", pages = "2518--2532", abstract = "High-resource languages, such as English, have access to a plethora of datasets with various question-answer types resembling real-world reading comprehension. However, there is a severe lack of diverse and comprehensive question-answering datasets in under-resourced languages like Bangla. The ones available are either translated versions of English datasets with a niche answer format or created by human annotations focusing on a specific domain, question type, or answer type. To address these limitations, this paper introduces BanglaRQA, a reading comprehension-based Bangla question-answering dataset with various question-answer types. BanglaRQA consists of 3,000 context passages and 14,889 question-answer pairs created from those passages. The dataset comprises answerable and unanswerable questions covering four unique categories of questions and three types of answers. In addition, this paper also implemented four different Transformer models for question-answering on the proposed dataset. The best-performing model achieved an overall 62.42{\%} EM and 78.11{\%} F1 score. However, detailed analyses showed that the performance varies across question-answer types, leaving room for substantial improvement of the model performance. Furthermore, we demonstrated the effectiveness of BanglaRQA as a training resource by showing strong results on the bn{\_}squad dataset. Therefore, BanglaRQA has the potential to contribute to the advancement of future research by enhancing the capability of language models. The dataset and codes are available at https://github.com/sartajekram419/BanglaRQA", } ```
true
https://github.com/Dibyakanti/AutoTNLI-code ``` @inproceedings{kumar-etal-2022-autotnli, title = "Realistic Data Augmentation Framework for Enhancing Tabular Reasoning", author = "Kumar, Dibyakanti and Gupta, Vivek and Sharma, Soumya and Zhang, Shuo", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Online and Abu Dhabi", publisher = "Association for Computational Linguistics", url = "https://vgupta123.github.io/docs/autotnli.pdf", pages = "", abstract = "Existing approaches to constructing training data for Natural Language Inference (NLI) tasks, such as for semi-structured table reasoning, are either via crowdsourcing or fully automatic methods. However, the former is expensive and time-consuming and thus limits scale, and the latter often produces naive examples that may lack complex reasoning. This paper develops a realistic semi-automated framework for data augmentation for tabular inference. Instead of manually generating a hypothesis for each table, our methodology generates hypothesis templates transferable to similar tables. In addition, our framework entails the creation of rational counterfactual tables based on human written logical constraints and premise paraphrasing. For our case study, we use the InfoTabS (Gupta et al., 2020), which is an entity-centric tabular inference dataset. We observed that our framework could generate human-like tabular inference examples, which could benefit training data augmentation, especially in the scenario with limited supervision.", } ```
true
# Dataset Card for dynamically generated hate speech dataset ## Dataset Description - **Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset) - **Point of Contact:** [bertievidgen@gmail.com](mailto:bertievidgen@gmail.com) ### Dataset Summary This is a copy of the Dynamically-Generated-Hate-Speech-Dataset, presented in [this paper](https://arxiv.org/abs/2012.15761) by - **Bertie Vidgen**, **Tristan Thrush**, **Zeerak Waseem** and **Douwe Kiela** ## Original README from [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset/blob/main/README.md) ## Dynamically-Generated-Hate-Speech-Dataset ReadMe for v0.2 of the Dynamically Generated Hate Speech Dataset from Vidgen et al. (2021). If you use the dataset, please cite our paper in the Proceedings of ACL 2021, and available on [Arxiv](https://arxiv.org/abs/2012.15761). Contact Dr. Bertie Vidgen if you have feedback or queries: bertievidgen@gmail.com. The full author list is: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research). This paper is an output of the Dynabench project: https://dynabench.org/tasks/5#overall ### Dataset descriptions v0.2.2.csv is the full dataset used in our ACL paper. v0.2.3.csv removes duplicate entries, all of which occurred in round 1. Duplicates come from two sources: (1) annotators entering the same content multiple times and (2) different annotators entering the same content. The duplicates are interesting for understanding the annotation process, and the challenges of dynamically generating datasets. However, they are likely to be less useful for training classifiers and so are removed in v0.2.3. We did not lower case the text before removing duplicates as capitalisations contain potentially useful signals. ### Overview The Dynamically Generated Hate Speech Dataset is provided in one table. 'acl.id' is the unique ID of the entry. 'Text' is the content which has been entered. All content is synthetic. 'Label' is a binary variable, indicating whether or not the content has been identified as hateful. It takes two values: hate, nothate. 'Type' is a categorical variable, providing a secondary label for hateful content. For hate it can take five values: Animosity, Derogation, Dehumanization, Threatening and Support for Hateful Entities. Please see the paper for more detail. For nothate the 'type' is 'none'. In round 1 the 'type' was not given and is marked as 'notgiven'. 'Target' is a categorical variable, providing the group that is attacked by the hate. It can include intersectional characteristics and multiple groups can be identified. For nothate the type is 'none'. Note that in round 1 the 'target' was not given and is marked as 'notgiven'. 'Level' reports whether the entry is original content or a perturbation. 'Round' is a categorical variable. It gives the round of data entry (1, 2, 3 or 4) with a letter for whether the entry is original content ('a') or a perturbation ('b'). Perturbations were not made for round 1. 'Round.base' is a categorical variable. It gives the round of data entry, indicated with just a number (1, 2, 3 or 4). 'Split' is a categorical variable. it gives the data split that the entry has been assigned to. This can take the values 'train', 'dev' and 'test'. The choice of splits is explained in the paper. 'Annotator' is a categorical variable. It gives the annotator who entered the content. 
Annotator IDs are random alphanumeric strings. There are 20 annotators in the dataset. 'acl.id.matched' is the ID of the matched entry, connecting the original (given in 'acl.id') and the perturbed version. For identities (recorded under 'Target') we used shorthand labels to construct the dataset, which can be converted (and grouped) as follows: none -> for non hateful entries NoTargetRecorded -> for hateful entries with no target recorded mixed -> Mixed race background ethnic minority -> Ethnic Minorities indig -> Indigenous people indigwom -> Indigenous Women non-white -> Non-whites (attacked as 'non-whites', rather than specific non-white groups which are generally addressed separately) trav -> Travellers (including Roma, gypsies) bla -> Black people blawom -> Black women blaman -> Black men african -> African (all 'African' attacks will also be an attack against Black people) jew -> Jewish people mus -> Muslims muswom -> Muslim women wom -> Women trans -> Trans people gendermin -> Gender minorities bis -> Bisexual gay -> Gay people (both men and women) gayman -> Gay men gaywom -> Lesbians dis -> People with disabilities working -> Working class people old -> Elderly people asi -> Asians asiwom -> Asian women east -> East Asians south -> South Asians (e.g. Indians) chinese -> Chinese people pak -> Pakistanis arab -> Arabs, including people from the Middle East immig -> Immigrants asylum -> Asylum seekers ref -> Refugees for -> Foreigners eastern european -> Eastern Europeans russian -> Russian people pol -> Polish people hispanic -> Hispanic people, including latinx and Mexicans nazi -> Nazis ('Support' type of hate) hitler -> Hitler ('Support' type of hate) ### Code Code was implemented using the Hugging Face Transformers library. ## Additional Information ### Licensing Information The original repository does not provide any license, but the dataset is free for use with proper citation of the original paper in the Proceedings of ACL 2021, available on [Arxiv](https://arxiv.org/abs/2012.15761). ### Citation Information Cite as [arXiv:2012.15761](https://arxiv.org/abs/2012.15761) or [https://doi.org/10.48550/arXiv.2012.15761](https://doi.org/10.48550/arXiv.2012.15761)
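As a small usage sketch for the fields described above, the snippet below expands a few of the shorthand target labels into their full names. The CSV file name and the lowercase column name are assumptions about the GitHub release; adjust them to match the actual file, and extend the dictionary with the remaining entries from the list above.

```python
import pandas as pd

# Hypothetical file name for the v0.2.3 release from the GitHub repository.
df = pd.read_csv("Dynamically Generated Hate Dataset v0.2.3.csv")

# A few of the shorthand 'target' labels expanded, following the mapping above.
target_names = {
    "bla": "Black people",
    "mus": "Muslims",
    "wom": "Women",
    "immig": "Immigrants",
    "ref": "Refugees",
}
df["target_expanded"] = df["target"].map(target_names).fillna(df["target"])
print(df["target_expanded"].value_counts().head(10))
```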
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is aimed at the Bittensor subnet 1 model. It contains around 3K records, with groups of 3 corresponding questions and answers, in JSONL file format. Most Unicode characters are filtered out, but some are kept to add noise to the training data. ## Dataset Creation ### Source Data [https://huggingface.co/datasets/mrseeker87/bittensor_qa/] ## Contact [https://github.com/Kunj-2206]
false
# midjourney-v5-202304-clean ## 简介 Brief Introduction 非官方的,爬取自midjourney v5的2023年4月的数据,一共1701420条。 Unofficial, crawled from midjourney v5 for April 2023, 1,701,420 pairs in total. ## 数据集信息 Dataset Information 原始项目地址:https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset 我做了一些清洗,清理出了两个文件: - ori_prompts_df.parquet (1,255,812对,midjourney的四格图) ![ori_sample](https://cdn.discordapp.com/attachments/995431387333152778/1098283849076711424/mansonwu_A_charismatic_wealthy_young_man_is_fully_immersed_in_a_9bd4f414-eb40-4642-a381-f5ac56e99ec5.png) - upscaled_prompts_df.parquet (445,608对,使用了高清指令的图,这意味着这个图更受欢迎。) ![upscaled_sample](https://cdn.discordapp.com/attachments/984632520471633920/1105721768422948905/Tomberhood_The_intelligent_rescue_boat_on_the_beach_can_automat_e54faffe-0668-49e4-812d-713038bdc7bc.png) Original project address: https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset I did some cleaning and cleaned out two files: - ori_prompts_df.parquet (1,255,812 pairs, midjourney's four-frame diagrams) - upscaled_prompts_df.parquet (445,608 pairs, graphs that use the Upscale command, which means this one is more popular.)
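As a usage note, the two cleaned files described above can be inspected directly with pandas. This is a minimal sketch assuming local copies of the files and a Parquet engine such as pyarrow; the column names are not documented here, so only row counts and a peek at the schema are shown.

```python
import pandas as pd

# Local copies of the two cleaned files described above are assumed.
ori = pd.read_parquet("ori_prompts_df.parquet")
upscaled = pd.read_parquet("upscaled_prompts_df.parquet")

print(len(ori), "prompt pairs from the four-image grids")
print(len(upscaled), "prompt pairs that were upscaled")
print(ori.columns.tolist())  # inspect the available columns
```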
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains data collected from GenBank. The dataset is organized so that all the genes are separated from a DNA sequence and classified according to region and coding type. In that way, people can get more detailed information about each DNA sequence. The dataset also contains the source, which is the whole DNA sequence, so the user can compare each segment to it to see the exact location. The dataset contains 937 files with about 200 million records and 300-400 GB of storage space. Therefore, users can specify the number of files they are going to use with the code below according to their own needs. To download all of the files, enter 937 as the second argument. ```python datasets.load_dataset('wyxu/Genome_database', num_urls=10)  # replace 10 with the number of files you want, up to 937 ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances ```python {DNA id: AP013063.1 Organism: Serratia marcescens SM39 year: 2017 region type:coding specific_class: Protein Product:thr operon leader peptide sequence: ATGCGCAACATCAGCCTGAAAACCACAATTATTACCACCACCGATACCACAGGTAACGGGGCGGGCTGA gc_content:0.52173913 translation code: MRNISLKTTIITTTDTTGNGAG start_position: 207 end_position: 276} ``` ### Data Fields __DNA id__: ID of the whole DNA sequence; segments with the same DNA id come from the same DNA __Organism__: organism of the DNA __year__: the year of the DNA sequence __region type__: the general type of the sequence. All types that are typically classified as coding regions are named coding, while others, including case-dependent ones, are named according to their own type, such as regulator, repeat_region, gap, intron, exon, etc. (__Note__: when classifying the coding type, all CDS, mRNA, tmRNA, tRNA, rRNA and others such as propeptide, sig_peptide, mat_peptide were classified as coding. To minimize missing coding parts, all other categories with an associated product were also classified as coding.) __specific class__: if the sequence is a coding sequence, it is classified according to its product type, such as RNA or Protein. Regulators are likewise classified by their own class, such as terminator or ribosome __Product__: if the sequence produces a protein, the product name is listed __sequence__: the actual sequence __gc_content__: the GC content of the sequence __translation code__: if the sequence produces a protein, the translation is provided as a reference __start_position__: the start position of the segment __end_position__: the end position of the segment ### Data Splits The first 80% of the files are used as the training set, and the last 20% as the test set. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The data are all from the most recent release of GenBank, release 255. #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
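To make the `gc_content` field concrete, here is a quick check against the example instance above (the sequence and value are copied from that instance):

```python
# Sequence from the example instance above.
seq = (
    "ATGCGCAACATCAGCCTGAAAACCACAATTATTACCACCACC"
    "GATACCACAGGTAACGGGGCGGGCTGA"
)

# GC content = fraction of G and C bases in the sequence.
gc_content = (seq.count("G") + seq.count("C")) / len(seq)
print(round(gc_content, 8))  # 0.52173913, matching the gc_content field
```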
false
# Dataset Card for Contextualized CommonGen (C2Gen) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Licensing Information](#licensing-information) ## Dataset Description - **Repository:** [Non-Residual Prompting](https://github.com/FreddeFrallan/Non-Residual-Prompting) - **Paper:** [Fine-Grained Controllable Text Generation Using Non-Residual Prompting](https://aclanthology.org/2022.acl-long.471) - **Point of Contact:** [Fredrik Carlsson](mailto:Fredrik.Carlsson@ri.se) ### Dataset Summary CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion. But the task does not allow context to be included. Therefore, to complement CommonGen, we provide an extended test set C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471) where an additional context is provided for each set of target words. The task is therefore reformulated to generate commonsensical text which includes the given words and also adheres to the given context. ### Languages English ## Dataset Structure ### Data Instances {"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]} ### Data Fields - context: the text that the model's generated continuation should adhere to - words: the words that should be included in the generated continuation ### Data Splits Test ## Dataset Creation ### Curation Rationale C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers to focus on methods that do not support context. This is orthogonal to their belief that many application areas necessitate the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words. ### Initial Data Collection and Normalization The dataset was constructed with the help of the crowdsourcing platform Mechanical Turk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a high recorded acceptance rate were allowed to participate. Finally, all contexts were manually verified and fixed for typos and poor quality. Furthermore, we want to raise awareness that C2GEN can contain personal data or offensive content. If you encounter such a sample, please reach out to us. ## Additional Information ### Licensing Information license: cc-by-sa-4.0
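For orientation, a minimal iteration sketch over the test split is shown below. The Hugging Face identifier is a placeholder (the card only points to the GitHub repository), and the field names follow the instance shown above.

```python
from datasets import load_dataset

# Placeholder identifier; substitute the actual repository hosting C2Gen.
c2gen = load_dataset("Non-Residual-Prompting/C2Gen", split="test")

example = c2gen[0]
# The generated continuation should adhere to 'Context' and include every word in 'Words'.
print(example["Context"])
print(example["Words"])
```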
false
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang Github repo: https://github.com/vipulraheja/IteraTeR
true
# Dataset Card for OpenQuestionType ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/) - **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology) - **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Question types annotated on open-ended questions. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances An example looks as follows. ``` { "id": "123", "question": "A test question?", "annotator1": ["verification", None], "annotator2": ["concept", None], "resolve_type": "verification" } ``` ### Data Fields - `id`: a `string` feature. - `question`: a `string` feature. - `annotator1`: a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator. - `annotator2`: a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator. - `resolve_type`: a `string` feature which is the final label after resolving disagreement. ### Data Splits - train: 3716 - valid: 580 - test: 660 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Yahoo Answer and Reddit users. ### Personal and Sensitive Information None. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY 4.0 ### Citation Information ``` @inproceedings{cao-wang-2021-controllable, title = "Controllable Open-ended Question Generation with A New Question Type Ontology", author = "Cao, Shuyang and Wang, Lu", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.502", doi = "10.18653/v1/2021.acl-long.502", pages = "6424--6439", abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.", } ```
true
# Dataset Card for environmental_claims ## Dataset Description - **Homepage:** [climatebert.ai](https://climatebert.ai) - **Repository:** - **Paper:** [arxiv.org/abs/2209.00507](https://arxiv.org/abs/2209.00507) - **Leaderboard:** - **Point of Contact:** [Dominik Stammbach](mailto:dominsta@ethz.ch) ### Dataset Summary We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies. ### Supported Tasks and Leaderboards The dataset supports a binary classification task of whether a given sentence is an environmental claim or not. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances ``` { "text": "It will enable E.ON to acquire and leverage a comprehensive understanding of the transfor- mation of the energy system and the interplay between the individual submarkets in regional and local energy supply sys- tems.", "label": 0 } ``` ### Data Fields - text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts - label: the label (0 -> no environmental claim, 1 -> environmental claim) ### Data Splits The dataset is split into: - train: 2,400 - validation: 300 - test: 300 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earning calls transcripts. For more information regarding our sample selection, please refer to Appendix B of our paper, which is provided for [citation](#citation-information). #### Who are the source language producers? Mainly large listed companies. ### Annotations #### Annotation process For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper, which is provided for [citation](#citation-information). #### Who are the annotators? The authors and students at University of Zurich with majors in finance and sustainable finance. ### Personal and Sensitive Information Since our text sources contain public information, no personal and sensitive information should be included. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - Dominik Stammbach - Nicolas Webersinke - Julia Anna Bingler - Mathias Kraus - Markus Leippold ### Licensing Information This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch). ### Citation Information ```bibtex @misc{stammbach2022environmentalclaims, title = {A Dataset for Detecting Real-World Environmental Claims}, author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus}, year = {2022}, doi = {10.48550/ARXIV.2209.00507}, url = {https://arxiv.org/abs/2209.00507}, publisher = {arXiv}, } ``` ### Contributions Thanks to [@webersni](https://github.com/webersni) for adding this dataset.
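For a quick look at the label balance across the splits described above, a small sketch with the `datasets` library; the Hub id `climatebert/environmental_claims` is an assumption based on the homepage, so adjust it if the dataset is hosted under a different namespace:
```python
from collections import Counter
from datasets import load_dataset

# Assumed dataset id; check the homepage for the authoritative location.
dataset = load_dataset("climatebert/environmental_claims")

for split in ("train", "validation", "test"):
    counts = Counter(dataset[split]["label"])
    total = sum(counts.values())
    print(split, total, {label: round(n / total, 3) for label, n in sorted(counts.items())})
```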
false
# Dataset Card for KQA Pro ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Configs](#data-configs) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [How to run SPARQLs and programs](#how-to-run-sparqls-and-programs) - [Knowledge Graph File](#knowledge-graph-file) - [How to Submit to Leaderboard](#how-to-submit-results-of-test-set) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://thukeg.gitee.io/kqa-pro/ - **Repository:** https://github.com/shijx12/KQAPro_Baselines - **Paper:** [KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base](https://aclanthology.org/2022.acl-long.422/) - **Leaderboard:** http://thukeg.gitee.io/kqa-pro/leaderboard.html - **Point of Contact:** shijx12 at gmail dot com ### Dataset Summary KQA Pro is a large-scale dataset for complex question answering over a knowledge base. The questions are very diverse and challenging, requiring multiple reasoning capabilities including compositional reasoning, multi-hop reasoning, quantitative comparison, set operations, etc. Strong supervision in the form of a SPARQL query and a program is provided for each question. ### Supported Tasks and Leaderboards It supports knowledge-graph-based question answering. Specifically, it provides SPARQL and *program* annotations for each question. ### Languages English ## Dataset Structure **train.json/val.json** ``` [ { 'question': str, 'sparql': str, # executable in our virtuoso engine 'program': [ { 'function': str, # function name 'dependencies': [int], # functional inputs, representing indices of the preceding functions 'inputs': [str], # textual inputs } ], 'choices': [str], # 10 answer choices 'answer': str, # golden answer } ] ``` **test.json** ``` [ { 'question': str, 'choices': [str], # 10 answer choices } ] ``` ### Data Configs This dataset has two configs, `train_val` and `test`, because they have different available fields. Please specify the config like `load_dataset('drt/kqa_pro', 'train_val')`. ### Data Splits train, val, test ## Additional Information ### Knowledge Graph File You can find the knowledge graph file `kb.json` in the original github repository. 
It comes with the format: ```json { 'concepts': { '<id>': { 'name': str, 'instanceOf': ['<id>', '<id>'], # ids of parent concept } }, 'entities': # excluding concepts { '<id>': { 'name': str, 'instanceOf': ['<id>', '<id>'], # ids of parent concept 'attributes': [ { 'key': str, # attribute key 'value': # attribute value { 'type': 'string'/'quantity'/'date'/'year', 'value': float/int/str, # float or int for quantity, int for year, 'yyyy/mm/dd' for date 'unit': str, # for quantity }, 'qualifiers': { '<qk>': # qualifier key, one key may have multiple corresponding qualifier values [ { 'type': 'string'/'quantity'/'date'/'year', 'value': float/int/str, 'unit': str, }, # the format of qualifier value is similar to attribute value ] } }, ] 'relations': [ { 'predicate': str, 'object': '<id>', # NOTE: it may be a concept id 'direction': 'forward'/'backward', 'qualifiers': { '<qk>': # qualifier key, one key may have multiple corresponding qualifier values [ { 'type': 'string'/'quantity'/'date'/'year', 'value': float/int/str, 'unit': str, }, # the format of qualifier value is similar to attribute value ] } }, ] } } } ``` ### How to run SPARQLs and programs We implement multiple baselines in our [codebase](https://github.com/shijx12/KQAPro_Baselines), which includes a supervised SPARQL parser and program parser. In the SPARQL parser, we implement a query engine based on [Virtuoso](https://github.com/openlink/virtuoso-opensource.git). You can install the engine based on our [instructions](https://github.com/shijx12/KQAPro_Baselines/blob/master/SPARQL/README.md), and then feed your predicted SPARQL to get the answer. In the program parser, we implement a rule-based program executor, which receives a predicted program and returns the answer. Detailed introductions of our functions can be found in our [paper](https://arxiv.org/abs/2007.03875). ### How to submit results of test set You need to predict answers for all questions of test set and write them in a text file **in order**, one per line. Here is an example: ``` Tron: Legacy Palm Beach County 1937-03-01 The Queen ... ``` Then you need to send the prediction file to us by email <caosl19@mails.tsinghua.edu.cn>, we will reply to you with the performance as soon as possible. To appear in the learderboard, you need to also provide following information: - model name - affiliation - open-ended or multiple-choice - whether use the supervision of SPARQL in your model or not - whether use the supervision of program in your model or not - single model or ensemble model - (optional) paper link - (optional) code link ### Licensing Information MIT License ### Citation Information If you find our dataset is helpful in your work, please cite us by ``` @inproceedings{KQAPro, title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base}, author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang}, booktitle={ACL'22}, year={2022} } ``` ### Contributions Thanks to [@happen2me](https://github.com/happen2me) for adding this dataset.
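As a quick illustration of the fields and configs described above, the snippet below loads the `train_val` configuration and prints one question together with the function chain of its program (it assumes the splits are exposed as `train` and `val`, as listed under Data Splits):
```python
from datasets import load_dataset

# The `train_val` config exposes the fully supervised fields (question, sparql, program, choices, answer).
dataset = load_dataset("drt/kqa_pro", "train_val")

example = dataset["train"][0]
print(example["question"])
print(example["answer"])
# Each program step has a function name, dependency indices, and textual inputs.
print(" -> ".join(step["function"] for step in example["program"]))
```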
false
<div align="center"> <img width="640" alt="keremberke/valorant-object-detection" src="https://huggingface.co/datasets/keremberke/valorant-object-detection/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['dropped spike', 'enemy', 'planted spike', 'teammate'] ``` ### Number of Images ```json {'valid': 1983, 'train': 6927, 'test': 988} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/valorant-object-detection", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3](https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3?ref=roboflow2huggingface) ### Citation ``` @misc{ valorant-9ufcp_dataset, title = { valorant Dataset }, type = { Open Source Dataset }, author = { Daniels Magonis }, howpublished = { \\url{ https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp } }, url = { https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-01-27 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on December 22, 2022 at 5:10 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 9898 images. Planted are annotated in COCO format. The following pre-processing was applied to each image: * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
false
# Dataset Card for feature vector embeddings of the 20newsgroup dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains dimensional reduced vector embeddings of the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/). This dataset contains two dimensions. The dimensional reduced embeddings were created with the [TruncatedSVD function](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD) from the [scikit-learn library](https://scikit-learn.org/stable/index.html). These reduced feature vectors are based on the [fscheffczyk/20newsgroup_embeddings dataset](https://huggingface.co/datasets/fscheffczyk/20newsgroups_embeddings). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for CAMERA 📷 [![CI](https://github.com/shunk031/huggingface-datasets_CAMERA/actions/workflows/ci.yaml/badge.svg)](https://github.com/shunk031/huggingface-datasets_CAMERA/actions/workflows/ci.yaml) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/CyberAgentAILab/camera - **Repository:** https://github.com/shunk031/huggingface-datasets_CAMERA ### Dataset Summary From [the official README.md](https://github.com/CyberAgentAILab/camera#camera-dataset): > CAMERA (CyberAgent Multimodal Evaluation for Ad Text GeneRAtion) is the Japanese ad text generation dataset. We hope that our dataset will be useful in research for realizing more advanced ad text generation models. ### Supported Tasks and Leaderboards [More Information Needed] #### Supported Tasks [More Information Needed] #### Leaderboard [More Information Needed] ### Languages The language data in CAMERA is in Japanese ([BCP-47 ja-JP](https://www.rfc-editor.org/info/bcp47)). 
## Dataset Structure ### Data Instances When loading a specific configuration, users has to append a version dependent suffix: #### without-lp-images ```python from datasets import load_dataset dataset = load_dataset("shunk031/CAMERA", name="without-lp-images") print(dataset) # DatasetDict({ # train: Dataset({ # features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation'], # num_rows: 12395 # }) # validation: Dataset({ # features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation'], # num_rows: 3098 # }) # test: Dataset({ # features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation'], # num_rows: 872 # }) # }) ``` An example of the CAMERA (w/o LP images) dataset looks as follows: ```json { "asset_id": 13861, "kw": "仙台 ホテル", "lp_meta_description": "仙台のホテルや旅館をお探しなら楽天トラベルへ!楽天ポイントが使えて、貯まって、とってもお得な宿泊予約サイトです。さらに割引クーポンも使える!国内ツアー・航空券・レンタカー・バス予約も!", "title_org": "仙台市のホテル", "title_ne1": "", "title_ne2": "", "title_ne3": "", "domain": "", "parsed_full_text_annotation": { "text": [ "trivago", "Oops...AccessDenied 可", "Youarenotallowedtoviewthispage!Ifyouthinkthisisanerror,pleasecontacttrivago.", "Errorcode:0.3c99e86e.1672026945.25ba640YourIP:240d:1a:4d8:2800:b9b0:ea86:2087:d141AffectedURL:https://www.trivago.jp/ja/odr/%E8%BB%92", "%E4%BB%99%E5%8F%B0-%E5%9B%BD%E5%86%85?search=20072325", "Backtotrivago" ], "xmax": [ 653, 838, 765, 773, 815, 649 ], "xmin": [ 547, 357, 433, 420, 378, 550 ], "ymax": [ 47, 390, 475, 558, 598, 663 ], "ymin": [ 18, 198, 439, 504, 566, 651 ] } } ``` #### with-lp-images ```python from datasets import load_dataset dataset = load_dataset("shunk031/CAMERA", name="with-lp-images") print(dataset) # DatasetDict({ # train: Dataset({ # features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation', 'lp_image'], # num_rows: 12395 # }) # validation: Dataset({ # features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation', 'lp_image'], # num_rows: 3098 # }) # test: Dataset({ # features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation', 'lp_image'], # num_rows: 872 # }) # }) ``` An example of the CAMERA (w/ LP images) dataset looks as follows: ```json { "asset_id": 13861, "kw": "仙台 ホテル", "lp_meta_description": "仙台のホテルや旅館をお探しなら楽天トラベルへ!楽天ポイントが使えて、貯まって、とってもお得な宿泊予約サイトです。さらに割引クーポンも使える!国内ツアー・航空券・レンタカー・バス予約も!", "title_org": "仙台市のホテル", "title_ne1": "", "title_ne2": "", "title_ne3": "", "domain": "", "parsed_full_text_annotation": { "text": [ "trivago", "Oops...AccessDenied 可", "Youarenotallowedtoviewthispage!Ifyouthinkthisisanerror,pleasecontacttrivago.", "Errorcode:0.3c99e86e.1672026945.25ba640YourIP:240d:1a:4d8:2800:b9b0:ea86:2087:d141AffectedURL:https://www.trivago.jp/ja/odr/%E8%BB%92", "%E4%BB%99%E5%8F%B0-%E5%9B%BD%E5%86%85?search=20072325", "Backtotrivago" ], "xmax": [ 653, 838, 765, 773, 815, 649 ], "xmin": [ 547, 357, 433, 420, 378, 550 ], "ymax": [ 47, 390, 475, 558, 598, 663 ], "ymin": [ 18, 198, 439, 504, 566, 651 ] }, "lp_image": <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x680 at 0x7F8513446B20> } ``` ### Data Fields #### without-lp-images - `asset_id`: ids (associated with LP 
images) - `kw`: search keyword - `lp_meta_description`: meta description extracted from LP (i.e., LP Text) - `title_org`: ad text (original gold reference) - `title_ne{1-3}`: ad text (additonal gold references for multi-reference evaluation) - `domain`: industry domain (HR, EC, Fin, Edu) for industry-wise evaluation - `parsed_full_text_annotation`: OCR results for LP images #### with-lp-images - `asset_id`: ids (associated with LP images) - `kw`: search keyword - `lp_meta_description`: meta description extracted from LP (i.e., LP Text) - `title_org`: ad text (original gold reference) - `title_ne{1-3}`: ad text (additional gold references for multi-reference evaluation) - `domain`: industry domain (HR, EC, Fin, Edu) for industry-wise evaluation - `parsed_full_text_annotation`: OCR results for LP images - `lp_image`: Landing page (LP) image ### Data Splits From [the official paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/H11-4.pdf): | Split | # of data | # of reference ad text | industry domain label | |-------|----------:|-----------------------:|:---------------------:| | Train | 12,395 | 1 | - | | Valid | 3,098 | 1 | - | | Test | 869 | 4 | ✔ | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information [More Information Needed] ### Dataset Curators [More Information Needed] ### Licensing Information > This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. ### Citation Information ```bibtex @inproceedings{mita-et-al:nlp2023, author = "三田 雅人 and 村上 聡一朗 and 張 培楠", title = "広告文生成タスクの規定とベンチマーク構築", booktitle = "言語処理学会 第 29 回年次大会", year = 2023, } ``` ### Contributions Thanks to [Masato Mita](https://github.com/chemicaltree), [Soichiro Murakami](https://github.com/ichiroex), and [Peinan Zhang](https://github.com/peinan) for creating this dataset.
false
# Dataset Card for "genshin_ch_10npc" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
false
false
# PLOD: An Abbreviation Detection Dataset This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection. ### Dataset We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here. 1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/> 2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/> 3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test). # Dataset Card for PLOD-filtered ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection - **Paper:** https://arxiv.org/abs/2204.12061 - **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-filtered - **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk) ### Dataset Summary This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain. ### Supported Tasks and Leaderboards This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022. ### Languages English ## Dataset Structure ### Data Instances A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via Spacy NER, and a set of `ner_tags` which are limited to `AC` for `Acronym` and `LF` for `long-forms`. 
An example from the dataset: {'id': '1', 'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'], 'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13], 'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ### Data Fields - id: the row identifier for the dataset point. - tokens: The tokens contained in the text. - pos_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER. - ner_tags: The tags for abbreviations and long-forms. ### Data Splits | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Filtered | 112652 | 24140 | 24140| | Unfiltered | 113860 | 24399 | 24399| ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization Extracting the data from PLOS Journals online and then tokenization, normalization. #### Who are the source language producers? PLOS Journal ## Additional Information ### Dataset Curators The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan. ### Licensing Information CC-BY-SA 4.0 ### Citation Information [Needs More Information] ### Installation We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/> Please see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy. OR<br/> However, you can also reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb) which uses HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the models readme cards linked below. Before starting, please perform the following steps: ```bash git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection cd PLOD-AbbreviationDetection pip install -r requirements.txt ``` Now, you can use the notebook to reproduce the experiments. 
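Independently of the spaCy and Trainer pipelines above, the data itself can be inspected directly with the `datasets` library; a minimal sketch, assuming the `surrey-nlp/PLOD-filtered` Hub id linked earlier:
```python
from datasets import load_dataset

# PLOD-unfiltered can be loaded the same way by swapping the dataset id.
plod = load_dataset("surrey-nlp/PLOD-filtered")

example = plod["train"][0]
# Pair each token with its tag and keep only tokens tagged as abbreviations or long forms.
tagged = [(tok, tag) for tok, tag in zip(example["tokens"], example["ner_tags"]) if tag != 0]
print(tagged)
```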
### Model(s) Our best performing models are hosted on the HuggingFace models repository | Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description | | --- | :---: | :---: | --- | | [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model | | [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model | | [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model | On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/> ### Usage You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
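For local inference rather than the Inference API, a hedged sketch using the fine-tuned checkpoint from the table above with the `transformers` token-classification pipeline (the exact label names in the output are whatever the checkpoint was trained with):
```python
from transformers import pipeline

# Model id taken from the table above.
abbr_tagger = pipeline(
    "token-classification",
    model="surrey-nlp/roberta-large-finetuned-abbr",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)

text = "Light dissolved inorganic carbon (DIC) resulted from the oxidation of hydrocarbons."
for span in abbr_tagger(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```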
false
# naab: A ready-to-use plug-and-play corpus in Farsi _[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_ ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL) - **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486) - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com) ### Dataset Summary naab is the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب, which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use pre-processor that can be employed by those who want to build a customized corpus. You can load this corpus with the commands below: ```python from datasets import load_dataset dataset = load_dataset("SLPL/naab") ``` You may need to download only parts/splits of this corpus; if so, use the command below (you can find more ways to use it [here](https://huggingface.co/docs/datasets/loading#slice-splits)): ```python from datasets import load_dataset dataset = load_dataset("SLPL/naab", split="train[:10%]") ``` **Note: be sure that your machine has at least 130 GB of free space; the download may also take a while. If you are facing a disk or internet shortage, you can use the code snippet below to help you download only your custom sections of naab:** ```python from datasets import load_dataset # ========================================================== # You should just change this part in order to download your # parts of corpus. 
indices = { "train": [5, 1, 2], "test": [0, 2] } # ========================================================== N_FILES = { "train": 126, "test": 3 } _BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/" data_url = { "train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])], "test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])], } for index in indices['train']: assert index < N_FILES['train'] for index in indices['test']: assert index < N_FILES['test'] data_files = { "train": [data_url['train'][i] for i in indices['train']], "test": [data_url['test'][i] for i in indices['test']] } print(data_files) dataset = load_dataset('text', data_files=data_files, use_auth_token=True) ``` ### Supported Tasks and Leaderboards This corpus can be used to train any language model that is trained with masked language modeling (MLM) or any other self-supervised objective. - `language-modeling` - `masked-language-modeling` ## Dataset Structure Each row of the dataset looks like the following: ```json { "text": "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است." } ``` + `text`: the textual paragraph. ### Data Splits This dataset includes two splits (`train` and `test`). We created them by dividing a randomly permuted version of the corpus into a (95%, 5%) split for (`train`, `test`). Since validation usually takes place during training on the `train` split, we do not propose a separate `validation` split. | | train | test | |-------------------------|------:|-----:| | Input Sentences | 225892925 | 11083849 | | Average Sentence Length | 61 | 25 | Below you can see the log-scale histogram of words per paragraph over the two splits of the dataset. <div align="center"> <img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-hist.png"> </div> ## Dataset Creation ### Curation Rationale Due to the lack of large amounts of text data in lower-resource languages such as Farsi, researchers working on these languages have always found it hard to start fine-tuning such models. This can lead to a situation in which the golden opportunity for fine-tuning models is in the hands of only a few companies or countries, which contributes to the weakening of open science. The previously largest cleaned and merged textual corpus in Farsi was a 70GB text corpus compiled from 8 big datasets that had been cleaned and could be downloaded directly. Our solution to the discussed issues is called naab. It provides **126GB** (including more than **224 million** sequences and nearly **15 billion** words) as the training corpus and **2.3GB** (including nearly **11 million** sequences and nearly **300 million** words) as the test corpus. ### Source Data The textual corpora that we used as our source data are illustrated in the figure below. It comprises 5 corpora, which are linked in the coming sections. 
<div align="center"> <img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png"> </div> #### Persian NLP [This](https://github.com/persiannlp/persian-raw-text) corpus includes eight corpora that are sorted based on their volume as below: - [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt)) - [MirasText](https://github.com/miras-tech/MirasText): 12G - [W2C – Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9): 1GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/w2c_merged.txt)) - Persian Wikipedia (March 2020 dump): 787MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/fawiki_merged.txt)) - [Leipzig Corpora](https://corpora.uni-leipzig.de/): 424M ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/LeipzigCorpus.txt)) - [VOA corpus](https://jon.dehdari.org/corpora/): 66MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/voa_persian_2003_2008_cleaned.txt)) - [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus): 61MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/poems_merged.txt)) - [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php): 33MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/TEP_fa.txt)) #### AGP This corpus was a formerly private corpus for ASR Gooyesh Pardaz which is now published for all users by this project. This corpus contains more than 140 million paragraphs summed up in 23GB (after cleaning). This corpus is a mixture of both formal and informal paragraphs that are crawled from different websites and/or social media. #### OSCAR-fa [OSCAR](https://oscar-corpus.com/) or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the go classy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa from this corpus, after cleaning there were about 36GB remaining. #### Telegram Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Following this hypothesis, we prepared a list of Telegram channels in Farsi covering various topics including sports, daily news, jokes, movies and entertainment, etc. The text data extracted from mentioned channels mainly contains informal data. #### LSCP [The Large Scale Colloquial Persian Language Understanding dataset](https://iasbs.ac.ir/~ansari/lscp/) has 120M sentences from 27M casual Persian sentences with its derivation tree, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. However, we just used the Farsi part of it and after cleaning we had 2.3GB of it remaining. Since the dataset is casual, it may help our corpus have more informal sentences although its proportion to formal paragraphs is not comparable. #### Initial Data Collection and Normalization The data collection process was separated into two parts. In the first part, we searched for existing corpora. After downloading these corpora we started to crawl data from some social networks. Then thanks to [ASR Gooyesh Pardaz](https://asr-gooyesh.com/en/) we were provided with enough textual data to start the naab journey. 
We used a preprocessor based on some stream-based Linux kernel commands so that this process can be less time/memory-consuming. The code is provided [here](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess). ### Personal and Sensitive Information Since this corpus is briefly a compilation of some former corpora we take no responsibility for personal information included in this corpus. If you detect any of these violations please let us know, we try our best to remove them from the corpus ASAP. We tried our best to provide anonymity while keeping the crucial information. We shuffled some parts of the corpus so the information passing through possible conversations wouldn't be harmful. ## Additional Information ### Dataset Curators + Sadra Sabouri (Sharif University of Technology) + Elnaz Rahmati (Sharif University of Technology) ### Licensing Information mit? ### Citation Information ``` @article{sabouri2022naab, title={naab: A ready-to-use plug-and-play corpus for Farsi}, author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein}, journal={arXiv preprint arXiv:2208.13486}, year={2022} } ``` DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486) ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset. ### Keywords + Farsi + Persian + raw text + پیکره فارسی + پیکره متنی + آموزش مدل زبانی
false
# Dataset Card for captioned Gundam Scraped from mahq.net (https://www.mahq.net/mecha/gundam/index.htm) and manually cleaned to only keep drawings and "Mobile Suits" (i.e., humanoid-looking machines). The captions were automatically generated from a generic hardcoded description plus the dominant colors as described by [BLIP](https://github.com/salesforce/BLIP).
false
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation ## Dataset Description - **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/AbhilashaRavichander/CondaQA) - **Paper:** [https://arxiv.org/abs/2211.00295](https://arxiv.org/abs/2211.00295) - **Point of Contact:** aravicha@andrew.cmu.edu ## Dataset Summary Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation". If you use this dataset, we would appreciate you citing our work: ``` @inproceedings{ravichander-et-al-2022-condaqa, title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation}, author={‪Ravichander‬, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana}, proceedings={EMNLP 2022}, year={2022} } ``` From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues." ### Supported Tasks and Leaderboards The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard. ### Language English ## Dataset Structure ### Data Instances Here's an example instance: ``` {"QuestionID": "q10", "original cue": "rarely", "PassageEditID": 0, "original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.", "SampleID": 5294, "label": "YES", "original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.", "sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?", "PassageID": 444, "sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. 
In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws." } ``` ### Data Fields * `QuestionID`: unique ID for this question (might be asked for multiple passages) * `original cue`: Negation cue that was used to select this passage from Wikipedia * `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage * `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it) * `SampleID`: unique ID for this passage-question pair * `label`: answer * `original sentence`: Sentence that contains the negated statement * `sentence2`: question * `PassageID`: unique ID for the Wikipedia passage * `sentence1`: passage ### Data Splits Data splits can be accessed as: ``` from datasets import load_dataset train_set = load_dataset("condaqa", "train") dev_set = load_dataset("condaqa", "dev") test_set = load_dataset("condaqa", "test") ``` ## Dataset Creation Full details are in the paper. ### Curation Rationale From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset: 1. The dataset should include a wide variety of negation cues, not just negative particles. 2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019). 3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions. 4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope." ### Source Data From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation." "We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend." #### Initial Data Collection and Normalization We show ten passages to crowdworkers and allow them to choose a passage they would like to work on. #### Who are the source language producers? Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers. 
### Annotations #### Annotation process From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages." Full details are in the paper. #### Who are the annotators? From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations. ### Personal and Sensitive Information We expect that such information has already been redacted from Wikipedia. ## Considerations for Using the Data ### Social Impact of Dataset A model that solves this dataset might be (mis-)represented as an evidence that the model understands the entirety of English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders. ### Discussion of Biases We are not aware of societal biases that are exhibited in this dataset. ### Other Known Limitations From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study." ## Additional Information ### Dataset Curators From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months. ### Licensing Information license: apache-2.0 ### Citation Information ``` @inproceedings{ravichander-et-al-2022-condaqa, title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation}, author={‪Ravichander‬, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana}, proceedings={EMNLP 2022}, year={2022} } ```
false
true
**CTMatch Classification Dataset** This is a combined set of 2 labelled datasets of `topic (patient descriptions), doc (clinical trial documents - selected fields), and label ({0, 1, 2})` triples, in jsonl format. (Somewhat of a duplication of some of the `ir_dataset` also available on HF.) These have been processed using ctproc and, in this state, can be used with various tokenizers for fine-tuning (see ctmatch for examples). These 2 datasets contain no patient-identifying information and are openly available in raw form: #### TREC: http://www.trec-cds.org/2021.html #### CSIRO: https://data.csiro.au/collection/csiro:17152 --- **see repo for more information**: https://github.com/semajyllek/ctmatch
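Because the data is stored as JSON Lines with `topic`, `doc`, and `label` fields, it can be consumed without any special tooling. A minimal sketch (the file name is illustrative, and the key names follow the description above — adjust both to the actual files in this repo):
```python
import json
from collections import Counter

# Illustrative path; point this at a jsonl file downloaded from this dataset repo.
path = "ctmatch_classification.jsonl"

label_counts = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # expected keys: "topic", "doc", "label" (0, 1, or 2)
        label_counts[record["label"]] += 1

print(label_counts)
```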
true
https://github.com/SALT-NLP/implicit-hate ``` @inproceedings{elsherief-etal-2021-latent, title = "Latent Hatred: A Benchmark for Understanding Implicit Hate Speech", author = "ElSherief, Mai and Ziems, Caleb and Muchlinski, David and Anupindi, Vaishnavi and Seybolt, Jordyn and De Choudhury, Munmun and Yang, Diyi", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.29", pages = "345--363" } ```
false
# IVA Kotlin GitHub Code Dataset

## Dataset Description

This is the curated train split of the IVA Kotlin dataset extracted from GitHub. It contains curated Kotlin files gathered for the purpose of training a code generation model. The dataset consists of 383380 Kotlin code files from GitHub.

[Here is the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean) and [here is the raw dataset](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint).

### How to use it

To download the full dataset:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-train', split='train')
```

## Data Structure

### Data Fields

|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in the GitHub repository|
|copies|string|number of occurrences in dataset|
|content|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
|hash|string|hash of the content field|
|line_mean|number|mean line length of the content|
|line_max|number|max line length of the content|
|alpha_frac|number|fraction between mean and max line length of content|
|ratio|number|character/token ratio of the file with tokenizer|
|autogenerated|boolean|True if the content is autogenerated by looking for keywords in the first few lines of the file|
|config_or_test|boolean|True if the content is a configuration file or a unit test|
|has_no_keywords|boolean|True if a file has none of the keywords for the Kotlin Programming Language|
|has_few_assignments|boolean|True if file uses symbol '=' less than `minimum` times|

A short filtering sketch based on these boolean fields is given at the end of this card.

### Instance

```json
{
 "repo_name":"oboenikui/UnivCoopFeliCaReader",
 "path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
 "copies":"1",
 "size":"5635",
 "content":"....",
 "license":"apache-2.0",
 "hash":"e88cfd99346cbef640fc540aac3bf20b",
 "line_mean":37.8620689655,
 "line_max":199,
 "alpha_frac":0.5724933452,
 "ratio":5.0222816399,
 "autogenerated":false,
 "config_or_test":false,
 "has_no_keywords":false,
 "has_few_assignments":false
}
```

## Languages

The dataset contains only Kotlin files.

```json
{
 "Kotlin": [".kt"]
}
```

## Licenses

Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.

```json
{
 "agpl-3.0":3209,
 "apache-2.0":90782,
 "artistic-2.0":130,
 "bsd-2-clause":380,
 "bsd-3-clause":3584,
 "cc0-1.0":155,
 "epl-1.0":792,
 "gpl-2.0":4432,
 "gpl-3.0":19816,
 "isc":345,
 "lgpl-2.1":118,
 "lgpl-3.0":2689,
 "mit":31470,
 "mpl-2.0":1444,
 "unlicense":654
}
```

## Dataset Statistics

```json
{
 "Total size": "~207 MB",
 "Number of files": 160000,
 "Number of files under 500 bytes": 2957,
 "Average file size in bytes": 5199
}
```

## Curation Process

See [the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean) for more details.

## Data Splits

The dataset contains only a train split. For the validation and unsliced versions, please check the following links:

* Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid

# Considerations for Using the Data

The dataset comprises source code from various repositories, potentially containing harmful or biased code, along with sensitive information such as passwords or usernames.
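A hedged sketch of filtering the curated files using the boolean quality fields from the Data Fields table above. It uses the standard `datasets.Dataset.filter` API; the particular choice of flags to drop is an illustrative assumption, not a recommendation from the dataset authors:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-train', split='train')

# Keep files that are not autogenerated, are not config/test files,
# and actually contain Kotlin keywords.
filtered = dataset.filter(
    lambda x: not x["autogenerated"]
    and not x["config_or_test"]
    and not x["has_no_keywords"]
)
print(len(dataset), "->", len(filtered))
```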
true
https://github.com/feng-yufei/Neural-Natural-Logic

```bib
@inproceedings{feng2020exploring,
  title={Exploring End-to-End Differentiable Natural Logic Modeling},
  author={Feng, Yufei and Zheng, Ziou and Liu, Quan and Greenspan, Michael and Zhu, Xiaodan},
  booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
  pages={1172--1185},
  year={2020}
}
```
true
https://bitbucket.org/RoxanaSz/puzzte/src/master/ ```bib @article{szomiu2021puzzle, title={A Puzzle-Based Dataset for Natural Language Inference}, author={Szomiu, Roxana and Groza, Adrian}, journal={arXiv preprint arXiv:2112.05742}, year={2021} } ```
true
# Dataset Card for "mr" ## Dataset Description Movie review dataset from SentEval. ## Data Fields - `sentence`: Complete sentence expressing an opinion about a film. - `label`: Sentiment of the opinion, either "negative" (0) or positive (1). [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
## Source

This repository contains 3 datasets created within the POPP project ([Project for the Oceration of the Paris Population Census](https://popp.hypotheses.org/#ancre2)) for the task of handwritten text recognition.

These datasets have been published in [Recognition and information extraction in historical handwritten tables: toward understanding early 20th century Paris census at DAS 2022](https://link.springer.com/chapter/10.1007/978-3-031-06555-2_10).

The 3 datasets are called “Generic dataset”, “Belleville”, and “Chaussée d’Antin” and contain lines made from the extracted rows of census tables from 1926. Each table in the Paris census contains 30 rows, thus each page in these datasets corresponds to 30 lines. We publish here only the lines. If you want the pages, go [here](https://zenodo.org/record/6581158).

This dataset is made of 4800 annotated lines extracted from 80 double pages of the 1926 Paris census.

## Data Info

Since the lines are extracted from table rows, we defined 4 special characters to describe the structure of the text:

- ¤ : indicates an empty cell
- / : indicates the separation into columns
- ? : indicates that the content of the cell following this symbol is written above the regular baseline
- ! : indicates that the content of the cell following this symbol is written below the regular baseline

(A small parsing sketch based on these markers is given at the end of this card.)

There are three splits: train, valid and test.

## How to use it

```python
from datasets import load_dataset
import numpy as np

dataset = load_dataset("agomberto/FrenchCensus-handwritten-texts")

i = np.random.randint(len(dataset['train']))
img = dataset['train']['image'][i]
text = dataset['train']['text'][i]
print(text)
img
```

## BibTeX entry and citation info

```bibtex
@InProceedings{10.1007/978-3-031-06555-2_10,
author="Constum, Thomas and Kempf, Nicolas and Paquet, Thierry and Tranouez, Pierrick and Chatelain, Cl{\'e}ment and Br{\'e}e, Sandra and Merveille, Fran{\c{c}}ois",
editor="Uchida, Seiichi and Barney, Elisa and Eglin, V{\'e}ronique",
title="Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early {\$}{\$}20^{\{}th{\}}{\$}{\$}Century Paris Census",
booktitle="Document Analysis Systems",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="143--157",
abstract="We aim to build a vast database (up to 9 million individuals) from the handwritten tabular nominal census of Paris of 1926, 1931 and 1936, each composed of about 100,000 handwritten simple pages in a tabular format. We created a complete pipeline that goes from the scan of double pages to text prediction while minimizing the need for segmentation labels. We describe how weighted finite state transducers, writer specialization and self-training further improved our results. We also introduce through this communication two annotated datasets for handwriting recognition that are now publicly available, and an open-source toolkit to apply WFST on CTC lattices.",
isbn="978-3-031-06555-2"
}
```
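The sketch referenced above: a minimal helper that splits a transcription line into cells using the four structure markers. The helper name and the exact handling of `?`/`!` (taken as a prefix of the cell they apply to) are illustrative assumptions:

```python
def parse_census_line(text: str):
    """Split a POPP transcription line into cells using the structure markers."""
    cells = []
    for raw_cell in text.split("/"):          # "/" separates columns
        cell = raw_cell.strip()
        position = "baseline"
        if cell.startswith("?"):               # content written above the regular baseline
            position, cell = "above", cell[1:].strip()
        elif cell.startswith("!"):             # content written below the regular baseline
            position, cell = "below", cell[1:].strip()
        if cell == "¤":                        # empty cell marker
            cell = ""
        cells.append({"text": cell, "position": position})
    return cells

# Toy example (made-up content, real marker usage):
print(parse_census_line("Dupont/Jean/¤/?né à Paris/!ouvrier"))
```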
true
# Dataset Card for MultiRC_TH

### Dataset Description

This dataset is a Thai-translated version of [multirc](https://huggingface.co/datasets/super_glue/viewer/multirc), produced with Google Translate, with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) used to calculate a score for the Thai translations.
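A minimal sketch of the kind of scoring described above, comparing a source sentence with its Thai translation via the multilingual Universal Sentence Encoder from TensorFlow Hub. The example sentences are illustrative, and this is an assumption about how the score could be computed, not the authors' exact pipeline:

```python
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 (registers the SentencePiece ops the model needs)

# Multilingual USE maps sentences from different languages into a shared embedding space.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

english = ["Who wrote the report mentioned in the passage?"]
thai = ["ใครเป็นผู้เขียนรายงานที่กล่าวถึงในเนื้อเรื่อง"]

en_vec = embed(english)
th_vec = embed(thai)

# Embeddings are approximately unit-normalized, so the inner product acts as a similarity score.
score = np.inner(en_vec, th_vec)[0][0]
print(f"translation similarity score: {score:.3f}")
```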
true
https://storage.googleapis.com/ai2-mosaic/public/cycic/CycIC-train-dev.zip https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing ``` @article{Kejriwal2020DoFC, title={Do Fine-tuned Commonsense Language Models Really Generalize?}, author={Mayank Kejriwal and Ke Shen}, journal={ArXiv}, year={2020}, volume={abs/2011.09159} } ``` added for ``` @article{sileo2023tasksource, title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation}, author={Sileo, Damien}, url= {https://arxiv.org/abs/2301.05948}, journal={arXiv preprint arXiv:2301.05948}, year={2023} } ```
false
# Russian StackOverflow dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)

## Description

**Summary:** Dataset of questions, answers, and comments from [ru.stackoverflow.com](https://ru.stackoverflow.com/).

**Script:** [create_stackoverflow.py](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py)

**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)

**Languages:** The dataset is in Russian with some programming code.

## Usage

Prerequisites:

```bash
pip install datasets zstandard jsonlines pysimdjson
```

Loading:

```python
from datasets import load_dataset

dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train")
for example in dataset:
    print(example["text_markdown"])
    print()
```

## Data Instances

```
{
  "question_id": 11235,
  "answer_count": 1,
  "url": "https://ru.stackoverflow.com/questions/11235",
  "score": 2,
  "tags": ["c++", "сериализация"],
  "title": "Извлечение из файла, запись в файл",
  "views": 1309,
  "author": "...",
  "timestamp": 1303205289,
  "text_html": "...",
  "text_markdown": "...",
  "comments": {
    "text": ["...", "..."],
    "author": ["...", "..."],
    "comment_id": [11236, 11237],
    "score": [0, 0],
    "timestamp": [1303205411, 1303205678]
  },
  "answers": {
    "answer_id": [11243, 11245],
    "timestamp": [1303207791, 1303207792],
    "is_accepted": [1, 0],
    "text_html": ["...", "..."],
    "text_markdown": ["...", "..."],
    "score": [3, 0],
    "author": ["...", "..."],
    "comments": {
      "text": ["...", "..."],
      "author": ["...", "..."],
      "comment_id": [11246, 11249],
      "score": [0, 0],
      "timestamp": [1303207961, 1303207800]
    }
  }
}
```

You can use this little helper to unflatten sequences:

```python
def revert_flattening(records):
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records
```

The original JSONL is already unflattened. (A short usage sketch of this helper is given at the end of this card.)

## Source Data

* The data source is the [Russian StackOverflow](https://ru.stackoverflow.com/) website.
* Original XMLs: [ru.stackoverflow.com.7z](https://ia600107.us.archive.org/27/items/stackexchange/ru.stackoverflow.com.7z).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py).

## Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.

## Licensing Information

According to the license of the original data, this dataset is distributed under [CC BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/).
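The usage sketch referenced above, continuing from the loading snippet and the `revert_flattening` helper defined in this card. Whether the nested answer comments need a second unflattening pass is an assumption; inspect an example to confirm:

```python
example = dataset[0]

comments = revert_flattening(example["comments"])
answers = revert_flattening(example["answers"])

print(example["title"])
print("comments:", len(comments), "answers:", len(answers))
for answer in answers:
    # Each answer may itself carry flattened comments; unflatten them too if needed.
    if isinstance(answer.get("comments"), dict):
        answer["comments"] = revert_flattening(answer["comments"])
    print("-", answer["score"], answer["is_accepted"])
```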
false
# Dataset Card for "folktables-acs-income" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# Dataset Card for bc2gm_corpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no gene mentioned, `1` signals the first token of a gene mention and `2` the subsequent gene mention tokens.

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset.
true
# WikiCAT_ca: Spanish Text Classification dataset

## Dataset Description

- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
- **Repository:**

### Dataset Summary

WikiCAT_ca is a Spanish corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 8401 articles from the Viquipedia classified under 12 different categories.

This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora.

### Supported Tasks and Leaderboards

Text classification, Language Model

### Languages

ES - Spanish

## Dataset Structure

### Data Instances

Two json files, one for each split.

### Data Fields

We used a simple model with the article text and associated labels, without further metadata.

#### Example:

<pre>
{'sentence': 'La economía de Reunión se ha basado tradicionalmente en la agricultura. La caña de azúcar ha sido el cultivo principal durante más de un siglo, y en algunos años representa el 85% de las exportaciones. El gobierno ha estado impulsando el desarrollo de una industria turística para aliviar el alto desempleo, que representa más del 40% de la fuerza laboral.(...) El PIB total de la isla fue de 18.800 millones de dólares EE.UU. en 2007.', 'label': 'Economía'}
</pre>

#### Labels

'Religión', 'Entretenimiento', 'Música', 'Ciencia_y_Tecnología', 'Política', 'Economía', 'Matemáticas', 'Humanidades', 'Deporte', 'Derecho', 'Historia', 'Filosofía'

### Data Splits

* hfeval_esv5.json: 1681 label-document pairs
* hftrain_esv5.json: 6716 label-document pairs

(A loading sketch for these files is given at the end of this card.)

## Dataset Creation

### Methodology

The "Category" pages represent the topics. For each topic, we extract the pages associated with that first level of the hierarchy, and use the summary ("summary") as the representative text.

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

The source data are thematic categories in the different Wikipedias

#### Who are the source language producers?

### Annotations

#### Annotation process

Automatic annotation

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

No personal or sensitive information included.

## Considerations for Using the Data

### Social Impact of Dataset

We hope this corpus contributes to the development of language models in Spanish.

### Discussion of Biases

We are aware that this data might contain biases. We have not applied any steps to reduce their impact.

### Other Known Limitations

[N/A]

## Additional Information

### Dataset Curators

Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).

For further information, send an email to (plantl-gob-es@bsc.es).

This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).

### Licensing Information

This work is licensed under a [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.

Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

### Contributions

[N/A]
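The loading sketch referenced above, using the generic `json` loader from `datasets` for the two split files named in the Data Splits section. Loading the files locally like this is an assumption about how they are meant to be consumed; adjust the paths to wherever the files live:

```python
from datasets import load_dataset

# File names come from the Data Splits section of this card.
dataset = load_dataset(
    "json",
    data_files={
        "train": "hftrain_esv5.json",
        "validation": "hfeval_esv5.json",
    },
)

example = dataset["train"][0]
print(example["sentence"][:100])
print(example["label"])
```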
true
https://github.com/Aatlantise/syntactic-augmentation-nli/tree/master/datasets ``` @inproceedings{min-etal-2020-syntactic, title = "Syntactic Data Augmentation Increases Robustness to Inference Heuristics", author = "Min, Junghyun and McCoy, R. Thomas and Das, Dipanjan and Pitler, Emily and Linzen, Tal", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.acl-main.212", doi = "10.18653/v1/2020.acl-main.212", pages = "2339--2352", } ```
true
Natural language inference using Attempto Controlled English.

Paper to come.

```
@inproceedings{fuchs2012first,
  title={First-order reasoning for {Attempto Controlled English}},
  author={Fuchs, Norbert E},
  booktitle={Controlled Natural Language: Second International Workshop, CNL 2010, Marettimo Island, Italy, September 13-15, 2010. Revised Papers 2},
  pages={73--94},
  year={2012},
  organization={Springer}
}
```
true
```bibtex @misc{https://doi.org/10.48550/arxiv.2211.05417, doi = {10.48550/ARXIV.2211.05417}, url = {https://arxiv.org/abs/2211.05417}, author = {Schlegel, Viktor and Pavlov, Kamen V. and Pratt-Hartmann, Ian}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Can Transformers Reason in Fragments of Natural Language?}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
false
# Dataset Card for Dataset Name

## Name

ChatGPT Jailbreak Prompts

## Dataset Description

- **Author:** Rubén Darío Jaramillo
- **Email:** rubend18@hotmail.com
- **WhatsApp:** +593 93 979 6676

### Dataset Summary

ChatGPT Jailbreak Prompts is a complete collection of jailbreak-related prompts for ChatGPT. This dataset is intended to provide a valuable resource for understanding and generating text in the context of jailbreaking in ChatGPT.

### Languages

English
true
0 : not clickbait
1 : clickbait

The dataset was cleaned of duplicates, keeping only the first occurrence of each text.

The dataset was first split into train and test sets using a 0.2 split ratio; the test set was then further split into test and validation sets using a 0.2 split ratio.

Size of training set: 43,802
Size of test set: 8,760
Size of validation set: 2,191
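A minimal sketch of reproducing the two-stage split described above with scikit-learn. Only the 0.2 ratios and the order of the two splits come from the description; the placeholder data and the random seed are illustrative assumptions:

```python
from sklearn.model_selection import train_test_split

# Placeholder data; in practice these would be the deduplicated texts and labels.
texts = [f"headline {i}" for i in range(1000)]
labels = [i % 2 for i in range(1000)]

# First split: 80% train, 20% held out.
train_texts, rest_texts, train_labels, rest_labels = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)
# Second split on the held-out part: 80% test, 20% validation.
test_texts, val_texts, test_labels, val_labels = train_test_split(
    rest_texts, rest_labels, test_size=0.2, random_state=42
)
print(len(train_texts), len(test_texts), len(val_texts))  # 800, 160, 40
```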
false
true
```bib @inproceedings{yanaka-etal-2021-exploring, title = "Exploring Transitivity in Neural {NLI} Models through Veridicality", author = "Yanaka, Hitomi and Mineshima, Koji and Inui, Kentaro", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", year = "2021", pages = "920--934", } ```
true
https://github.com/verypluming/HELP ```bib @InProceedings{yanaka-EtAl:2019:starsem, author = {Yanaka, Hitomi and Mineshima, Koji and Bekki, Daisuke and Inui, Kentaro and Sekine, Satoshi and Abzianidze, Lasha and Bos, Johan}, title = {HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning}, booktitle = {Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM2019)}, year = {2019}, } ```
true
tomi dataset (theory of mind question answering) recast as natural language inference

https://colab.research.google.com/drive/1J_RqDSw9iPxJSBvCJu-VRbjXnrEjKVvr?usp=sharing

```
@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url= {https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}

@inproceedings{le-etal-2019-revisiting,
    title = "Revisiting the Evaluation of Theory of Mind through Question Answering",
    author = "Le, Matthew and Boureau, Y-Lan and Nickel, Maximilian",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-1598",
    doi = "10.18653/v1/D19-1598",
    pages = "5872--5877"
}
```
true
```bib @article{kaushik2020learning, title={Learning the Difference that Makes a Difference with Counterfactually Augmented Data}, author={Kaushik, Divyansh and Hovy, Eduard and Lipton, Zachary C}, journal={International Conference on Learning Representations (ICLR)}, year={2020} } ```
false
# Dataset Card for SpeechCommands ## Dataset Description - **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enriched) - **GitHub** [Spotlight](https://github.com/Renumics/spotlight) - **Dataset Homepage** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands) - **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf) - **Leaderboard:** [More Information Needed] ### Dataset Summary 📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases. At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development. 🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways: 1. Enable new researchers to quickly develop a profound understanding of the dataset. 2. Popularize data-centric AI principles and tooling in the ML community. 3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics. 📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands). ### Explore the Dataset ![Analyze SpeechCommands with Spotlight](https://spotlight.renumics.com/resources/hf-speech-commands-enriched.png) The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code: Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip): ```python !pip install renumics-spotlight datasets[audio] ``` > **_Notice:_** On Linux, non-Python dependency on libsndfile package must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information. Load the dataset from huggingface in your notebook: ```python import datasets dataset = datasets.load_dataset("renumics/speech_commands_enriched", "v0.01") ``` [//]: <> (TODO: Update this!) Start exploring with a simple view: ```python from renumics import spotlight df = dataset.to_pandas() df_show = df.drop(columns=['audio']) spotlight.show(df_show, port=8000, dtype={"file": spotlight.Audio}) ``` You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata. ### SpeechCommands Dataset This is a set of one-second .wav audio files, each containing a single spoken English word or background noise. These words are from a small set of commands, and are spoken by a variety of different speakers. This data set is designed to help train simple machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209). Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains 64,727 audio files. Version 0.02 of the data set (configuration `"v0.02"`) was released on April 11th 2018 and contains 105,829 audio files. ### Supported Tasks and Leaderboards * `keyword-spotting`: the dataset can be used to train and evaluate keyword spotting systems. 
The task is to detect preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for the fast response time. Thus, accuracy, model size, and inference time are all crucial. ### Languages The language data in SpeechCommands is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances Example of a core word (`"label"` is a word, `"is_unknown"` is `False`): ```python { "file": "no/7846fd85_nohash_0.wav", "audio": { "path": "no/7846fd85_nohash_0.wav", "array": array([ -0.00021362, -0.00027466, -0.00036621, ..., 0.00079346, 0.00091553, 0.00079346]), "sampling_rate": 16000 }, "label": 1, # "no" "is_unknown": False, "speaker_id": "7846fd85", "utterance_id": 0 } ``` Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`) ```python { "file": "tree/8b775397_nohash_0.wav", "audio": { "path": "tree/8b775397_nohash_0.wav", "array": array([ -0.00854492, -0.01339722, -0.02026367, ..., 0.00274658, 0.00335693, 0.0005188]), "sampling_rate": 16000 }, "label": 28, # "tree" "is_unknown": True, "speaker_id": "1b88bf70", "utterance_id": 0 } ``` Example of background noise (`_silence_`) class: ```python { "file": "_silence_/doing_the_dishes.wav", "audio": { "path": "_silence_/doing_the_dishes.wav", "array": array([ 0. , 0. , 0. , ..., -0.00592041, -0.00405884, -0.00253296]), "sampling_rate": 16000 }, "label": 30, # "_silence_" "is_unknown": False, "speaker_id": "None", "utterance_id": 0 # doesn't make sense here } ``` ### Data Fields * `file`: relative audio filename inside the original archive. * `audio`: dictionary containing a relative audio filename, a decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audios might take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`. * `label`: either word pronounced in an audio sample or background noise (`_silence_`) class. Note that it's an integer value corresponding to the class name. * `is_unknown`: if a word is auxiliary. Equals to `False` if a word is a core word or `_silence_`, `True` if a word is an auxiliary word. * `speaker_id`: unique id of a speaker. Equals to `None` if label is `_silence_`. * `utterance_id`: incremental id of a word utterance within the same speaker. ### Data Splits The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"` contains more words (see section [Source Data](#source-data) for more details). | | train | validation | test | |----- |------:|-----------:|-----:| | v0.01 | 51093 | 6799 | 3081 | | v0.02 | 84848 | 9982 | 4890 | Note that in train and validation sets examples of `_silence_` class are longer than 1 second. You can use the following code to sample 1-second examples from the longer ones: ```python def sample_noise(example): # Use this function to extract random 1 sec slices of each _silence_ utterance, # e.g. 
    # inside `torch.utils.data.Dataset.__getitem__()`
    from random import randint
    if example["label"] == "_silence_":
        random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
        example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
    return example
```

## Dataset Creation

### Curation Rationale

The primary goal of the dataset is to provide a way to build and test small models that can detect a single word from a set of target words and differentiate it from background noise or unrelated speech with as few false positives as possible.

### Source Data

#### Initial Data Collection and Normalization

The audio files were collected using crowdsourcing, see [aiyprojects.withgoogle.com/open_speech_recording](https://github.com/petewarden/extract_loudest_section) for some of the open source audio collection code that was used. The goal was to gather examples of people speaking single-word commands, rather than conversational sentences, so they were prompted for individual words over the course of a five minute session.

In version 0.01 thirty different words were recorded: "Yes", "No", "Up", "Down", "Left", "Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".

In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".

In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left", "Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in the current implementation this is marked by a `True` value of the `"is_unknown"` feature). Their function is to teach a model to distinguish core words from unrecognized ones.

The `_silence_` label contains a set of longer audio clips that are either recordings or a mathematical simulation of noise.

#### Who are the source language producers?

The audio files were collected using crowdsourcing.

### Annotations

#### Annotation process

Labels are the list of words prepared in advance. Speakers were prompted for individual words over the course of a five minute session.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).

### Citation Information

```
@article{speechcommandsv2,
   author = {{Warden}, P.},
    title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1804.03209},
  primaryClass = "cs.CL",
  keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
    year = 2018,
    month = apr,
    url = {https://arxiv.org/abs/1804.03209},
}
```

### Contributions

[More Information Needed]
false
# Dataset Card for ChemSum

## ChemSum Description

<!---- **Homepage:** - **Leaderboard:** ----->

- **Paper:** [What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization](https://arxiv.org/abs/2305.07615)
- **Venue:** ACL 2023
- **Point of Contact:** griffin.adams@columbia.edu
- **Repository:** https://github.com/griff4692/calibrating-summaries

### ChemSum Summary

We introduce a dataset with a pure chemistry focus by compiling a list of chemistry academic journals with Open-Access articles. For each journal, we downloaded full-text article PDFs from the Open-Access portion of the journal using available APIs, or by scraping this content using [Selenium Chrome WebDriver](https://www.selenium.dev/documentation/webdriver/).

Each PDF was processed with Grobid via a locally installed [client](https://pypi.org/project/grobid-client-python/) to extract free-text paragraphs with sections.

The table below shows the journals from which Open Access articles were sourced, as well as the number of papers processed. For all journals, we filtered for papers with the provided topic of Chemistry when papers from other disciplines were also available (e.g. PubMed).

| Source | # of Articles |
| ----------- | ----------- |
| Beilstein | 1,829 |
| Chem Cell | 546 |
| ChemRxiv | 12,231 |
| Chemistry Open | 398 |
| Nature Communications Chemistry | 572 |
| PubMed Author Manuscript | 57,680 |
| PubMed Open Access | 29,540 |
| Royal Society of Chemistry (RSC) | 9,334 |
| Scientific Reports - Nature | 6,826 |

<!--- ### Supported Tasks and Leaderboards [More Information Needed] --->

### Languages

English

## Dataset Structure

<!--- ### Data Instances --->

### Data Fields

| Column | Description |
| ----------- | ----------- |
| `uuid` | Unique Identifier for the Example |
| `title` | Title of the Article |
| `article_source` | Open Source Journal (see above for list) |
| `abstract` | Abstract (summary reference) |
| `sections` | Full-text sections from the main body of paper (<!> indicates section boundaries) |
| `headers` | Corresponding section headers for `sections` field (<!> delimited) |
| `source_toks` | Aggregate number of tokens across `sections` |
| `target_toks` | Number of tokens in the `abstract` |
| `compression` | Ratio of `source_toks` to `target_toks` |

Please refer to `load_chemistry()` in https://github.com/griff4692/calibrating-summaries/blob/master/preprocess/preprocess.py for pre-processing as a summarization dataset. The inputs are `sections` and `headers` and the target is the `abstract`. A small parsing sketch is given at the end of this card.

### Data Splits

| Split | Count |
| ----------- | ----------- |
| `train` | 115,956 |
| `validation` | 1,000 |
| `test` | 2,000 |

### Citation Information

```
@article{adams2023desired,
  title={What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization},
  author={Adams, Griffin and Nguyen, Bichlien H and Smith, Jake and Xia, Yingce and Xie, Shufang and Ostropolets, Anna and Deb, Budhaditya and Chen, Yuan-Jyue and Naumann, Tristan and Elhadad, No{\'e}mie},
  journal={arXiv preprint arXiv:2305.07615},
  year={2023}
}
```

<!---
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Contributions [More Information Needed] --->
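The parsing sketch referenced in the Data Fields section: it splits the `<!>`-delimited `sections` and `headers` fields and pairs each header with its section to form a (source, target) summarization example. The delimiter comes from the card; the exact formatting of the model input and the toy example values are illustrative assumptions:

```python
def build_summarization_example(example, delimiter="<!>"):
    """Pair each section header with its body and return (source, target) for summarization."""
    headers = [h.strip() for h in example["headers"].split(delimiter)]
    sections = [s.strip() for s in example["sections"].split(delimiter)]
    # Source: headers interleaved with their section text; target: the abstract.
    source = "\n\n".join(f"{h}\n{s}" for h, s in zip(headers, sections))
    return source, example["abstract"]

# Illustrative usage with a toy example.
toy = {
    "headers": "Introduction <!> Methods",
    "sections": "Chemistry background ... <!> We synthesized ...",
    "abstract": "We report ...",
}
source, target = build_summarization_example(toy)
print(source)
print(target)
```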
false
# Dataset Card for Huatuo_encyclopedia_qa

## Dataset Description

- **Homepage:** https://www.huatuogpt.cn/
- **Repository:** https://github.com/FreedomIntelligence/HuatuoGPT
- **Paper:** https://arxiv.org/abs/2305.01526
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset has a total of 364,420 pieces of medical QA data; some entries include the same question phrased in multiple ways. We extract medical QA pairs from plain texts (e.g., medical encyclopedias and medical articles). We collected 8,699 encyclopedia entries for diseases and 2,736 encyclopedia entries for medicines on Chinese Wikipedia. Moreover, we crawled 226,432 high-quality medical articles from the Qianwen Health website.

## Dataset Creation

### Source Data

https://zh.wikipedia.org/wiki/

https://51zyzy.com/

## Citation

```
@misc{li2023huatuo26m,
  title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
  author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
  year={2023},
  eprint={2305.01526},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
false
false
# Dataset Card for Chinese Musical Instruments Timbre Evaluation Database

## Dataset Description

- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/CMITE>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A

### Dataset Summary

This database contains subjective timbre evaluation scores for 16 subjective timbre evaluation terms (such as bright, dark, raspy) on 37 Chinese national instruments, given by 14 participants in a subjective evaluation experiment.

### Supported Tasks and Leaderboards

Chinese Musical Instruments Timbre Evaluation

### Languages

Chinese, English

## Dataset Structure

### Data Instances

.wav, .csv

### Data Fields

Traditional Chinese instruments

### Data Splits

trainset

## Dataset Creation

### Curation Rationale

Lack of a dataset for Chinese musical instruments timbre evaluation

### Source Data

#### Initial Data Collection and Normalization

Zhaorui Liu, Monan Zhou

#### Who are the source language producers?

Students from CCMUSIC

### Annotations

#### Annotation process

Subjective timbre evaluation scores for 16 subjective timbre evaluation terms (such as bright, dark, raspy) on 37 Chinese national instruments, given by 14 participants in a subjective evaluation experiment

#### Who are the annotators?

Students from CCMUSIC

### Personal and Sensitive Information

None

## Considerations for Using the Data

### Social Impact of Dataset

Promoting the development of AI in the music industry

### Discussion of Biases

Only for Chinese traditional instruments

### Other Known Limitations

Less data

## Additional Information

### Dataset Curators

Zijin Li

### Licensing Information

```
MIT License

Copyright (c) 2023 CCMUSIC

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```

### Citation Information

```
@dataset{zhaorui_liu_2021_5676893,
  author       = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
  title        = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
  month        = nov,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {1.1},
  doi          = {10.5281/zenodo.5676893},
  url          = {https://doi.org/10.5281/zenodo.5676893}
}
```

### Contributions

Provide a dataset for Chinese musical instruments timbre evaluation
false
# IVA Kotlin GitHub Code Dataset - Curated - Validation

## Dataset Description

This is the curated valid split of the IVA Kotlin dataset extracted from GitHub. It contains curated Kotlin files gathered for the purpose of training and validating a code generation model.

The dataset contains only a valid split. For the train and unsliced versions, please check the following links:

* Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train

Information about the dataset structure, data fields, licenses, and other standard dataset card details is available in the linked datasets and also applies to this dataset.

# Considerations for Using the Data

The dataset comprises source code from various repositories, potentially containing harmful or biased code, along with sensitive information such as passwords or usernames.
false
# Dataset Card for CORE Deduplication ## Dataset Description - **Homepage:** [https://core.ac.uk/about/research-outputs](https://core.ac.uk/about/research-outputs) - **Repository:** [https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip](https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip) - **Paper:** [Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings](http://oro.open.ac.uk/id/eprint/70519) - **Point of Contact:** [CORE Team](https://core.ac.uk/about#contact) - **Size of downloaded dataset files:** 204 MB ### Dataset Summary CORE 2020 Deduplication dataset (https://core.ac.uk/documentation/dataset) contains 100K scholarly documents labeled as duplicates/non-duplicates. ### Languages The dataset language is English (BCP-47 `en`) ### Citation Information ``` @inproceedings{dedup2020, title={Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings}, author={Gyawali, Bikash and Anastasiou, Lucas and Knoth, Petr}, booktitle = {Proceedings of 12th Language Resources and Evaluation Conference}, month = may, year = 2020, publisher = {France European Language Resources Association}, pages = {894-903} } ```
false
Dataset containing video metadata from a few tech channels, i.e. * [James Briggs](https://youtube.com/c/JamesBriggs) * [Yannic Kilcher](https://www.youtube.com/c/YannicKilcher) * [sentdex](https://www.youtube.com/c/sentdex) * [Daniel Bourke](https://www.youtube.com/channel/UCr8O8l5cCX85Oem1d18EezQ) * [AI Coffee Break with Letitia](https://www.youtube.com/c/AICoffeeBreak) * [Alex Ziskind](https://youtube.com/channel/UCajiMK_CY9icRhLepS8_3ug)
false
# Malicious Smart Contract Classification Dataset This dataset includes malicious and benign smart contracts deployed on Ethereum. Code used to collect this data: [data collection notebook](https://github.com/forta-network/starter-kits/blob/main/malicious-smart-contract-ml-py/data_collection.ipynb) For more details on how this dataset can be used, please check out this blog: [How Forta’s Predictive ML Models Detect Attacks Before Exploitation](https://forta.org/blog/how-fortas-predictive-ml-models-detect-attacks-before-exploitation/)
true
```bib @article{kaushik2020learning, title={Learning the Difference that Makes a Difference with Counterfactually Augmented Data}, author={Kaushik, Divyansh and Hovy, Eduard and Lipton, Zachary C}, journal={International Conference on Learning Representations (ICLR)}, year={2020} } ```
true
### Suomi-24-toxicity-annotated This dataset includes comments from Suomi24 sampled using predictions from a toxicity classifier. The comments were taken in intervals for each label. The process of sampling emphasized difficult borderline cases. 500 comments were sampled for each label. The annotation process used the labels from Perspective, used e.g. for `TurkuNLP/wikipedia-toxicity-data-fi`. Instead of multi-label, we annotated each comment only for one label, although a couple comments appear in two labels. Process of annotation included initial annotation of 100-200 comments followed by a discussion and final annotations. Raw data can be found from [here](https://github.com/TurkuNLP/toxicity-classifier/tree/main/annotations/raw_annotations). Examples that made it to the dataset are ones that had unanimous agreement or were resolved through discussion. ### Citing To cite this dataset use the following bibtex. ``` @inproceedings{eskelinen-etal-2023-toxicity, title = "Toxicity Detection in {F}innish Using Machine Translation", author = "Eskelinen, Anni and Silvala, Laura and Ginter, Filip and Pyysalo, Sampo and Laippala, Veronika", booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)", month = may, year = "2023", address = "T{\'o}rshavn, Faroe Islands", publisher = "University of Tartu Library", url = "https://aclanthology.org/2023.nodalida-1.68", pages = "685--697", abstract = "Due to the popularity of social media platforms and the sheer amount of user-generated content online, the automatic detection of toxic language has become crucial in the creation of a friendly and safe digital space. Previous work has been mostly focusing on English leaving many lower-resource languages behind. In this paper, we present novel resources for toxicity detection in Finnish by introducing two new datasets, a machine translated toxicity dataset for Finnish based on the widely used English Jigsaw dataset and a smaller test set of Suomi24 discussion forum comments originally written in Finnish and manually annotated following the definitions of the labels that were used to annotate the Jigsaw dataset. We show that machine translating the training data to Finnish provides better toxicity detection results than using the original English training data and zero-shot cross-lingual transfer with XLM-R, even with our newly annotated dataset from Suomi24.", } ``` ## Label definitions taken from Perspective API THREAT: Describes an intention to inflict pain, injury, or violence against an individual or group. THREATENING: Language that is threatening or encouraging violence or harm, including self-harm. PROFANITY: Swear words, curse words, or other obscene or profane language. INSULT: Insulting, inflammatory, or negative comment towards a person or a group of people. Such comments are not necessarily identity specific. IDENTITY ATTACK: Negative or hateful comments targeting someone because of their identity. TOXICITY: A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion. SEVERE TOXICITY: A very hateful, aggressive, disrespectful comment or otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to more mild forms of toxicity, such as comments that include positive uses of curse words. 
## Guidelines used for annotation:

### Obscene

- swearwords, including mild expletives and misspelled, masked, or other variations
- sexually explicit words/terminology that are not topically or contextually appropriate

### Threat

- suicidal or self-harm comments, incitement to violence or self-harm, hypothetical situations and wishing harm to somebody
- comments that are very unlikely to happen if not marked clearly as sarcasm
- only threats towards people are annotated as threat
- threats made by somebody else other than the writer NOT included
- counterfactual statements NOT included <!--- as in "if I was there I would have..." --->

### Insult

- terms that are insulting towards groups of people (also in identity attack)
- insults against political groups, e.g. "vitun demari/suvakki/persu" -> "fucking liberal/conservative etc." <!--- I made this decision here.. --->
- negative insulting comments towards oneself, things other than people and hypothetical situations NOT included <!--- PROBLEM: use of racist or rapist if true, target not clear --->

### Identity attack

- comments that have no negative language but are still clearly negative
- negative statements towards political groups or groups that nobody self-identifies with are NOT included (unless an insult)

### Toxicity

- unreasonably expressed negative comments, regardless of whether a target is present and whether the target is known or not
- mild or humoristic swearwords are NOT included
- positive or neutral sexually explicit comments are NOT included

### Severe toxicity

- comments that include only sexually explicit content
- only one severely toxic element is needed to have this label, and a comment is severely toxic even if the comment contains substantive content
- target does not need to be present nor does the target matter

## Inter-annotator agreement:

| Label | Initial (unanimous) | After discussion (unanimous) | Initial (at least 2/3) | After discussion (at least 2/3) |
|------ | ------------------- | ---------------------------- | ---------------------- | ------------------------------- |
| identity attack | 54,5 % | 66,6 % | 92 % | 93,6 % |
| insult | 47,5 % | 49,6 % | 94,5 % | 95,6 % |
| severe toxicity | 63 % | 66 % | 92 % | 96,6 % |
| threat | 82 % | 80,3 % | 98 % | 97,3 % |
| toxicity | 58 % | 54 % | 93 % | 89,6 % |
| obscene | 69 % | 62 % | 97 % | 96 % |

## Evaluation results

Evaluation results from using `TurkuNLP/bert-large-finnish-cased-toxicity`.

| Label | Precision | Recall | F1 |
|------ | ------------------- | ---------------------------- | ---------------------- |
| identity attack | 73,2 | 32 | 44,6 |
| insult | 59,4 | 46,8 | 52,4 |
| severe toxicity | 12 | 28,6 | 16,9 |
| threat | 32,4 | 28,6 | 30,4 |
| toxicity | 60,4 | 79,2 | 68,5 |
| obscene | 64,5 | 82,4 | 72,3 |
| OVERALL | 57,4 | 58,9 | 51,1 |
| OVERALL weighted by original sample counts | 55,5 | 65,5 | 60,1 |

## Licensing Information

Contents of this repository are distributed under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
true
BoolQ questions with semantic alteration and human verifications

```bib
@article{khashabi2020naturalperturbations,
  title={Natural Perturbation for Robust Question Answering},
  author={D. Khashabi and T. Khot and A. Sabharwal},
  journal={arXiv preprint},
  year={2020}
}
```
false
# Electricity The [Electricity dataset](https://www.openml.org/search?type=data&sort=runs&id=151&status=active) from the [OpenML repository](https://www.openml.org/). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|-------------------------| | electricity | Binary classification | Has the electricity cost gone up?| # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/electricity", "electricity")["train"] ```
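A possible follow-up for inspecting the binary target after loading. The exact column name for the label is not stated in this card, so the sketch only prints the schema and a row rather than assuming a name:

```python
from datasets import load_dataset

dataset = load_dataset("mstz/electricity", "electricity")["train"]

print(dataset.features)  # inspect column names and types, including the binary target
print(dataset[0])        # first row of the training split
```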