| Column | Type | Values |
|--------------|--------|------------------|
| id | string | lengths 2–115 |
| lastModified | string | lengths 24–24 |
| tags | list | |
| author | string | lengths 2–42 |
| description | string | lengths 0–68.7k |
| citation | string | lengths 0–10.7k |
| cardData | null | |
| likes | int64 | 0–3.55k |
| downloads | int64 | 0–10.1M |
| card | string | lengths 0–1.01M |
FreedomIntelligence/alpaca-gpt4-korean
2023-08-06T08:10:43.000Z
[ "region:us" ]
FreedomIntelligence
null
null
null
1
653
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT).
masakhaner
2023-06-01T14:59:56.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:am", "language:ha", "language:ig", "language:lg", "language:luo", "language:pcm", "language:rw", "language:sw", "language:wo", "language:yo", "license:unknown", "arxiv:2103.11811", "region:us" ]
null
MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages. Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] . MasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages: - Amharic - Hausa - Igbo - Kinyarwanda - Luganda - Luo - Nigerian-Pidgin - Swahili - Wolof - Yoruba The train/validation/test sets are available for all the ten languages. For more details see https://arxiv.org/abs/2103.11811
@article{Adelani2021MasakhaNERNE, title={MasakhaNER: Named Entity Recognition for African Languages}, author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei}, journal={ArXiv}, year={2021}, volume={abs/2103.11811} }
null
4
651
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - am - ha - ig - lg - luo - pcm - rw - sw - wo - yo license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: MasakhaNER dataset_info: - config_name: amh features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 639911 num_examples: 1750 - name: validation num_bytes: 92753 num_examples: 250 - name: test num_bytes: 184271 num_examples: 500 download_size: 571951 dataset_size: 916935 - config_name: hau features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 929848 num_examples: 1912 - name: validation num_bytes: 139503 num_examples: 276 - name: test num_bytes: 282971 num_examples: 552 download_size: 633372 dataset_size: 1352322 - config_name: ibo features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 749196 num_examples: 2235 - name: validation num_bytes: 110572 num_examples: 320 - name: test num_bytes: 222192 num_examples: 638 download_size: 515415 dataset_size: 1081960 - config_name: kin features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 878746 num_examples: 2116 - name: validation num_bytes: 120998 num_examples: 302 - name: test num_bytes: 258638 num_examples: 605 download_size: 633024 dataset_size: 1258382 - config_name: lug features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 611917 num_examples: 1428 - name: validation num_bytes: 70058 num_examples: 200 - name: test num_bytes: 183063 num_examples: 407 download_size: 445755 dataset_size: 865038 - config_name: luo features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 314995 num_examples: 644 - name: validation num_bytes: 43506 num_examples: 92 - name: test num_bytes: 87716 num_examples: 186 download_size: 213281 dataset_size: 446217 - config_name: pcm features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 868229 num_examples: 2124 - name: validation num_bytes: 126829 num_examples: 306 - name: test num_bytes: 262185 num_examples: 600 download_size: 572054 dataset_size: 1257243 - config_name: swa features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: 
'0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 1001120 num_examples: 2109 - name: validation num_bytes: 128563 num_examples: 300 - name: test num_bytes: 272108 num_examples: 604 download_size: 686313 dataset_size: 1401791 - config_name: wol features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 602076 num_examples: 1871 - name: validation num_bytes: 71535 num_examples: 267 - name: test num_bytes: 191484 num_examples: 539 download_size: 364463 dataset_size: 865095 - config_name: yor features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-DATE '8': I-DATE splits: - name: train num_bytes: 1016741 num_examples: 2171 - name: validation num_bytes: 127415 num_examples: 305 - name: test num_bytes: 359519 num_examples: 645 download_size: 751510 dataset_size: 1503675 config_names: - am - ha - ig - lg - luo - pcm - rw - sw - wo - yo --- # Dataset Card for MasakhaNER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-ner) - **Repository:** [github](https://github.com/masakhane-io/masakhane-ner) - **Paper:** [paper](https://arxiv.org/abs/2103.11811) - **Point of Contact:** [Masakhane](https://www.masakhane.io/) or didelani@lsv.uni-saarland.de ### Dataset Summary MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages. Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] . MasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages: - Amharic - Hausa - Igbo - Kinyarwanda - Luganda - Luo - Nigerian-Pidgin - Swahili - Wolof - Yoruba The train/validation/test sets are available for all the ten languages. 
For more details see https://arxiv.org/abs/2103.11811 ### Supported Tasks and Leaderboards [More Information Needed] - `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data. ### Languages There are ten languages available: - Amharic (amh) - Hausa (hau) - Igbo (ibo) - Kinyarwanda (kin) - Luganda (lug) - Luo (luo) - Nigerian-Pidgin (pcm) - Swahili (swa) - Wolof (wol) - Yoruba (yor) ## Dataset Structure ### Data Instances The examples look like this for Yorùbá: ``` from datasets import load_dataset data = load_dataset('masakhaner', 'yor') # Please specify the language code # A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. {'id': '0', 'ner_tags': [B-DATE, I-DATE, O, O, O, O, O, B-PER, I-PER, I-PER, O, O, O, O], 'tokens': ['Wákàtí', 'méje', 'ti', 'ré', 'kọjá', 'lọ', 'tí', 'Luis', 'Carlos', 'Díaz', 'ti', 'di', 'awati', '.'] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE", ``` In the NER tags, a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & time (DATE). It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top-level entity is marked. ### Data Splits For all languages, there are three splits. The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits. The splits have the following sizes: | Language | train | validation | test | |-----------------|------:|-----------:|-----:| | Amharic | 1750 | 250 | 500 | | Hausa | 1903 | 272 | 545 | | Igbo | 2233 | 319 | 638 | | Kinyarwanda | 2110 | 301 | 604 | | Luganda | 2003 | 200 | 401 | | Luo | 644 | 92 | 185 | | Nigerian-Pidgin | 2100 | 300 | 600 | | Swahili | 2104 | 300 | 602 | | Wolof | 1871 | 267 | 536 | | Yoruba | 2124 | 303 | 608 | ## Dataset Creation ### Curation Rationale The dataset was introduced to provide new resources for ten languages that were under-served in natural language processing. [More Information Needed] ### Source Data The source of the data is the news domain; details can be found at https://arxiv.org/abs/2103.11811 #### Initial Data Collection and Normalization The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable. #### Who are the source language producers? The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above. ### Annotations #### Annotation process Details can be found at https://arxiv.org/abs/2103.11811 #### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/). ### Personal and Sensitive Information The data is sourced from newspapers and only contains mentions of public figures or individuals. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains. ## Additional Information ### Dataset Curators ### Licensing Information The licensing status of the data is CC 4.0 Non-Commercial. ### Citation Information ``` @article{Adelani2021MasakhaNERNE, title={MasakhaNER: Named Entity Recognition for African Languages}, author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei}, journal={ArXiv}, year={2021}, volume={abs/2103.11811} } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
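The card's own snippet shows `load_dataset('masakhaner', 'yor')`; as a minimal illustrative sketch (assuming the Hugging Face `datasets` library), the integer `ner_tags` can be mapped back to the label names listed above via the dataset's feature metadata:

```python
from datasets import load_dataset

# Load the Yorùbá configuration; other configs follow the codes above
# (amh, hau, ibo, kin, lug, luo, pcm, swa, wol, yor).
data = load_dataset("masakhaner", "yor")

# `ner_tags` stores class-label ids; the string labels ("O", "B-PER", ...)
# live in the feature metadata of the sequence column.
label_names = data["train"].features["ner_tags"].feature.names

example = data["train"][0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")
```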
regisss/librispeech_asr_for_optimum_habana_ci
2023-09-10T19:40:47.000Z
[ "license:cc-by-4.0", "region:us" ]
regisss
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
@inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} }
null
0
645
--- license: cc-by-4.0 --- This dataset contains the splits `clean.train.100` and `clean.dev` of the [LibriSpeech dataset](https://huggingface.co/datasets/librispeech_asr). It is only meant to be used in Optimum Habana's CI to avoid downloading other splits.
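As a hedged sketch (assuming the Hugging Face `datasets` library and the split names stated in the card above), one of the two splits can be pulled on its own:

```python
from datasets import load_dataset

# Load only the small validation split; `clean.train.100` is the other split named in the card.
dev = load_dataset("regisss/librispeech_asr_for_optimum_habana_ci", split="clean.dev")

print(dev)            # row count and column names
print(dev[0].keys())  # inspect the fields of a single example
```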
result-kand2-sdxl-wuerst-karlo/36e1d427
2023-09-19T14:17:01.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
645
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 232 num_examples: 10 download_size: 1385 dataset_size: 232 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "36e1d427" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
transformersbook/codeparrot-train
2022-02-05T16:23:03.000Z
[ "region:us" ]
transformersbook
null
null
null
3
644
# CodeParrot Dataset This is the train split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [GitHub repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb). See the [full dataset](https://huggingface.co/datasets/transformersbook/codeparrot) for more information.
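For illustration only, a minimal sketch (assuming the Hugging Face `datasets` library) that streams the train split rather than downloading the full corpus of Python files:

```python
from datasets import load_dataset

# Stream instead of downloading: the train split contains many gigabytes of Python source files.
ds = load_dataset("transformersbook/codeparrot-train", split="train", streaming=True)

first = next(iter(ds))
print(first.keys())  # inspect which columns are available (e.g. the raw file content)
```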
KATANABRAVE/stories
2023-08-25T06:37:13.000Z
[ "license:llama2", "region:us" ]
KATANABRAVE
null
null
null
0
643
--- license: llama2 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: title dtype: string - name: article dtype: string - name: text dtype: string - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: labels sequence: int64 splits: - name: train num_bytes: 110879624 num_examples: 8500 - name: validation num_bytes: 3383807 num_examples: 277 download_size: 48437278 dataset_size: 114263431 ---
cuad
2022-11-18T19:50:02.000Z
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2103.06268", "region:us" ]
null
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
@article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} }
null
28
638
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - closed-domain-qa - extractive-qa paperswithcode_id: cuad pretty_name: CUAD train-eval-index: - config: default task: question-answering task_id: extractive_question_answering splits: train_split: train eval_split: test col_mapping: question: question context: context answers: text: text answer_start: answer_start metrics: - type: cuad name: CUAD dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 1466037640 num_examples: 22450 - name: test num_bytes: 198543467 num_examples: 4182 download_size: 18309308 dataset_size: 1664581107 --- # Dataset Card for CUAD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad) - **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/) - **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268) - **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org) ### Dataset Summary Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. 
``` This example was too long and was cropped: { "answers": { "answer_start": [44], "text": ['DISTRIBUTOR AGREEMENT'] }, "context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...', "id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0", "question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract", "title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT" } ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits This dataset is split into train/test set. Number of samples in each set is given below: | | Train | Test | | ----- | ------ | ---- | | CUAD | 22450 | 4182 | ## Dataset Creation ### Curation Rationale A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring. Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies. To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack. ### Source Data #### Initial Data Collection and Normalization The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. 
Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet. Type of Contracts: # of Docs Affiliate Agreement: 10 Agency Agreement: 13 Collaboration/Cooperation Agreement: 26 Co-Branding Agreement: 22 Consulting Agreement: 11 Development Agreement: 29 Distributor Agreement: 32 Endorsement Agreement: 24 Franchise Agreement: 15 Hosting Agreement: 20 IP Agreement: 17 Joint Venture Agreement: 23 License Agreement: 33 Maintenance Agreement: 34 Manufacturing Agreement: 17 Marketing Agreement: 17 Non-Compete/No-Solicit/Non-Disparagement Agreement: 3 Outsourcing Agreement: 18 Promotion Agreement: 12 Reseller Agreement: 12 Service Agreement: 28 Sponsorship Agreement: 31 Supply Agreement: 18 Strategic Alliance Agreement: 32 Transportation Agreement: 13 TOTAL: 510 #### Who are the source language producers? The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD. ### Annotations #### Annotation process The labeling process included multiple steps to ensure accuracy: 1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours. 2. Law Student Label: law students conducted manual contract review and labeling in eBrevia. 3. Key Word Search: law students conducted keyword searches in eBrevia to capture additional categories that had been missed during the “Student Label” step. 4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category and highlighted clauses that they believed were mislabeled. 5. Attorney Review: experienced attorneys reviewed the category-by-category report with students' comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly. 6. eBrevia Extras Review: attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels. 7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer. #### Who are the annotators? Answered in the above section.
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”. For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”. Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows: THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION. Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category. To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.” Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Attorney Advisors Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. 
Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu Law Student Leaders John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran Law Student Contributors Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin Technical Advisors & Contributors Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen ### Licensing Information CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use. The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR. Privacy Policy & Disclaimers The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved. The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer. ### Citation Information ``` @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
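As a minimal, illustrative sketch (assuming the Hugging Face `datasets` library), the SQuAD-style fields described above can be read as follows; treating `answer_start` as a character offset into `context` is an assumption for this example:

```python
from datasets import load_dataset

# Each row pairs a contract (`context`) with one clause-category question and its annotated answer spans.
cuad = load_dataset("cuad", split="train")

example = cuad[0]
print(example["title"])
print(example["question"])
for start, text in zip(example["answers"]["answer_start"], example["answers"]["text"]):
    print(start, text)
    # If offsets are character-based, this slice should reproduce `text`.
    print(example["context"][start:start + len(text)])
```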
argilla/agnews_weak_labeling
2023-07-13T11:46:28.000Z
[ "language:en", "region:us" ]
argilla
null
null
null
0
638
--- language: en dataset_info: features: - name: text dtype: string - name: inputs struct: - name: text dtype: string - name: prediction dtype: 'null' - name: prediction_agent dtype: 'null' - name: annotation dtype: string - name: annotation_agent dtype: 'null' - name: multi_label dtype: bool - name: explanation dtype: 'null' - name: id dtype: 'null' - name: metadata struct: - name: split dtype: string - name: status dtype: string - name: event_timestamp dtype: 'null' - name: metrics dtype: 'null' - name: vectors struct: - name: mini-lm-sentence-transformers sequence: float64 splits: - name: train num_bytes: 25212139 num_examples: 7000 download_size: 20872343 dataset_size: 25212139 --- # Dataset Card for "agnews_weak_labeling" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
conceptofmind/flan2021_submix_original
2023-05-09T23:31:13.000Z
[ "region:us" ]
conceptofmind
null
null
null
33
638
--- dataset_info: features: - name: inputs dtype: string - name: targets dtype: string - name: task_source dtype: string - name: task_name dtype: string - name: template_type dtype: string splits: - name: train num_bytes: 8988026240 num_examples: 5362361 download_size: 5486308797 dataset_size: 8988026240 --- # Dataset Card for "flan2021_submix_original" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/amazon_us_reviews
2023-06-09T17:56:17.000Z
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:en", "license:other", "region:us" ]
polinaeterna
Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazon's iconic products. In a period of over two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the Amazon.com website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews. More than 130 million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in the AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters). Each dataset contains the following columns: - marketplace: 2-letter country code of the marketplace where the review was written. - customer_id: Random identifier that can be used to aggregate reviews written by a single author. - review_id: The unique ID of the review. - product_id: The unique Product ID the review pertains to. In the multilingual dataset the reviews for the same product in different countries can be grouped by the same product_id. - product_parent: Random identifier that can be used to aggregate reviews for the same product. - product_title: Title of the product. - product_category: Broad product category that can be used to group reviews (also used to group the dataset into coherent parts). - star_rating: The 1-5 star rating of the review. - helpful_votes: Number of helpful votes. - total_votes: Number of total votes the review received. - vine: Review was written as part of the Vine program. - verified_purchase: The review is on a verified purchase. - review_headline: The title of the review. - review_body: The review text. - review_date: The date the review was written.
\
null
0
636
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 100M<n<1B source_datasets: - original task_categories: - summarization - text-generation - fill-mask - text-classification task_ids: - text-scoring - language-modeling - masked-language-modeling - sentiment-classification - sentiment-scoring - topic-classification pretty_name: Amazon US Reviews dataset_info: - config_name: Books_v1_01 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 6997552259 num_examples: 6106719 download_size: 2692708591 dataset_size: 6997552259 - config_name: Watches_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 458976082 num_examples: 960872 download_size: 162973819 dataset_size: 458976082 - config_name: Personal_Care_Appliances_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 49036547 num_examples: 85981 download_size: 17634794 dataset_size: 49036547 - config_name: Mobile_Electronics_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 63293377 num_examples: 104975 download_size: 22870508 dataset_size: 63293377 - config_name: Digital_Video_Games_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id 
dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 80176851 num_examples: 145431 download_size: 27442648 dataset_size: 80176851 - config_name: Digital_Software_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 58782931 num_examples: 102084 download_size: 18997559 dataset_size: 58782931 - config_name: Major_Appliances_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 67642424 num_examples: 96901 download_size: 24359816 dataset_size: 67642424 - config_name: Gift_Card_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 47188062 num_examples: 149086 download_size: 12134676 dataset_size: 47188062 - config_name: Video_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 356264426 num_examples: 380604 download_size: 138929896 dataset_size: 356264426 - 
config_name: Luggage_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 167354173 num_examples: 348657 download_size: 60320191 dataset_size: 167354173 - config_name: Software_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 266020595 num_examples: 341931 download_size: 94010685 dataset_size: 266020595 - config_name: Video_Games_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1291054668 num_examples: 1785997 download_size: 475199894 dataset_size: 1291054668 - config_name: Furniture_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 405212374 num_examples: 792113 download_size: 148982796 dataset_size: 405212374 - config_name: Musical_Instruments_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: 
review_date dtype: string splits: - name: train num_bytes: 518908568 num_examples: 904765 download_size: 193389086 dataset_size: 518908568 - config_name: Digital_Music_Purchase_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 710546079 num_examples: 1688884 download_size: 253570168 dataset_size: 710546079 - config_name: Books_v1_02 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 3387034903 num_examples: 3105520 download_size: 1329539135 dataset_size: 3387034903 - config_name: Home_Entertainment_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 534333848 num_examples: 705889 download_size: 193168458 dataset_size: 534333848 - config_name: Grocery_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1072289473 num_examples: 2402458 download_size: 401337166 dataset_size: 1072289473 - config_name: Outdoors_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: 
verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1172986088 num_examples: 2302401 download_size: 448963100 dataset_size: 1172986088 - config_name: Pet_Products_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1355659812 num_examples: 2643619 download_size: 515815253 dataset_size: 1355659812 - config_name: Video_DVD_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 3953234561 num_examples: 5069140 download_size: 1512355451 dataset_size: 3953234561 - config_name: Apparel_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 2256558450 num_examples: 5906333 download_size: 648641286 dataset_size: 2256558450 - config_name: PC_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 3982684438 num_examples: 6908554 download_size: 1512903923 dataset_size: 3982684438 - config_name: Tools_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - 
name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 872273119 num_examples: 1741100 download_size: 333782939 dataset_size: 872273119 - config_name: Jewelry_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 703275869 num_examples: 1767753 download_size: 247022254 dataset_size: 703275869 - config_name: Baby_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 956952590 num_examples: 1752932 download_size: 357392893 dataset_size: 956952590 - config_name: Home_Improvement_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1329688315 num_examples: 2634781 download_size: 503339178 dataset_size: 1329688315 - config_name: Camera_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1187101912 num_examples: 1801974 download_size: 442653086 dataset_size: 1187101912 - config_name: Lawn_and_Garden_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: 
product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1272255987 num_examples: 2557288 download_size: 486772662 dataset_size: 1272255987 - config_name: Office_Products_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1370685534 num_examples: 2642434 download_size: 512323500 dataset_size: 1370685534 - config_name: Electronics_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1875406721 num_examples: 3093869 download_size: 698828243 dataset_size: 1875406721 - config_name: Automotive_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1520191087 num_examples: 3514942 download_size: 582145299 dataset_size: 1520191087 - config_name: Digital_Video_Download_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1484214187 num_examples: 4057147 download_size: 506979922 dataset_size: 1484214187 - config_name: 
Mobile_Apps_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1627857158 num_examples: 5033376 download_size: 557959415 dataset_size: 1627857158 - config_name: Shoes_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 1781283508 num_examples: 4366916 download_size: 642255314 dataset_size: 1781283508 - config_name: Toys_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 2197820069 num_examples: 4864249 download_size: 838451398 dataset_size: 2197820069 - config_name: Sports_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 2241349145 num_examples: 4850360 download_size: 872478735 dataset_size: 2241349145 - config_name: Kitchen_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: 
- name: train num_bytes: 2453735305 num_examples: 4880466 download_size: 930744854 dataset_size: 2453735305 - config_name: Beauty_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 2399292506 num_examples: 5115666 download_size: 914070021 dataset_size: 2399292506 - config_name: Music_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 3900138839 num_examples: 4751577 download_size: 1521994296 dataset_size: 3900138839 - config_name: Health_Personal_Care_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 2679427491 num_examples: 5331449 download_size: 1011180212 dataset_size: 2679427491 - config_name: Digital_Ebook_Purchase_v1_01 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 3470453859 num_examples: 5101693 download_size: 1294879074 dataset_size: 3470453859 - config_name: Home_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: 
class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 2796680249 num_examples: 6221559 download_size: 1081002012 dataset_size: 2796680249 - config_name: Wireless_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 4633213433 num_examples: 9002021 download_size: 1704713674 dataset_size: 4633213433 - config_name: Books_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 7197687124 num_examples: 10319090 download_size: 2740337188 dataset_size: 7197687124 - config_name: Digital_Ebook_Purchase_v1_00 features: - name: marketplace dtype: string - name: customer_id dtype: string - name: review_id dtype: string - name: product_id dtype: string - name: product_parent dtype: string - name: product_title dtype: string - name: product_category dtype: string - name: star_rating dtype: int32 - name: helpful_votes dtype: int32 - name: total_votes dtype: int32 - name: vine dtype: class_label: names: '0': 'N' '1': 'Y' - name: verified_purchase dtype: class_label: names: '0': 'N' '1': 'Y' - name: review_headline dtype: string - name: review_body dtype: string - name: review_date dtype: string splits: - name: train num_bytes: 7302303804 num_examples: 12520722 download_size: 2689739299 dataset_size: 7302303804 duplicated_from: amazon_us_reviews --- # Dataset Card for "amazon_us_reviews" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - 
[Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://s3.amazonaws.com/amazon-reviews-pds/readme.html](https://s3.amazonaws.com/amazon-reviews-pds/readme.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 32377.29 MB
- **Size of the generated dataset:** 82820.19 MB
- **Total amount of disk used:** 115197.49 MB

### Dataset Summary

Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazon's iconic products. In a period of over two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the Amazon.com website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews.

More than 130 million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in the AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters).

Each dataset contains the following columns:

- marketplace - 2-letter country code of the marketplace where the review was written.
- customer_id - Random identifier that can be used to aggregate reviews written by a single author.
- review_id - The unique ID of the review.
- product_id - The unique Product ID the review pertains to. In the multilingual dataset, the reviews for the same product in different countries can be grouped by the same product_id.
- product_parent - Random identifier that can be used to aggregate reviews for the same product.
- product_title - Title of the product.
- product_category - Broad product category that can be used to group reviews (also used to group the dataset into coherent parts).
- star_rating - The 1-5 star rating of the review.
- helpful_votes - Number of helpful votes.
- total_votes - Number of total votes the review received.
- vine - Review was written as part of the Vine program.
- verified_purchase - The review is on a verified purchase.
- review_headline - The title of the review.
- review_body - The review text.
- review_date - The date the review was written.
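Each product-category configuration can be loaded separately with the `datasets` library. The snippet below is a minimal sketch, assuming the `amazon_us_reviews` loading script and the configuration names listed in this card (e.g. `Apparel_v1_00`) are still available on the Hub:

```python
from datasets import load_dataset

# Load a single product-category configuration (assumption: the loading
# script and the "Apparel_v1_00" config name are still hosted on the Hub).
reviews = load_dataset("amazon_us_reviews", "Apparel_v1_00", split="train")

# Each row is one review with the columns described above.
example = reviews[0]
print(example["star_rating"], example["review_headline"])
print(example["review_body"][:200])
```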
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### Apparel_v1_00 - **Size of downloaded dataset files:** 648.64 MB - **Size of the generated dataset:** 2254.36 MB - **Total amount of disk used:** 2903.00 MB An example of 'train' looks as follows. ``` { "customer_id": "45223824", "helpful_votes": 0, "marketplace": "US", "product_category": "Apparel", "product_id": "B016PUU3VO", "product_parent": "893588059", "product_title": "Fruit of the Loom Boys' A-Shirt (Pack of 4)", "review_body": "I ordered the same size as I ordered last time, and these shirts were much larger than the previous order. They were also about 6 inches longer. It was like they sent men's shirts instead of boys' shirts. I'll be returning these...", "review_date": "2015-01-01", "review_headline": "Sizes not correct, too big overall and WAY too long", "review_id": "R1N3Z13931J3O9", "star_rating": 2, "total_votes": 0, "verified_purchase": 1, "vine": 0 } ``` #### Automotive_v1_00 - **Size of downloaded dataset files:** 582.15 MB - **Size of the generated dataset:** 1518.88 MB - **Total amount of disk used:** 2101.03 MB An example of 'train' looks as follows. ``` { "customer_id": "16825098", "helpful_votes": 0, "marketplace": "US", "product_category": "Automotive", "product_id": "B000E4PCGE", "product_parent": "694793259", "product_title": "00-03 NISSAN SENTRA MIRROR RH (PASSENGER SIDE), Power, Non-Heated (2000 00 2001 01 2002 02 2003 03) NS35ER 963015M000", "review_body": "Product was as described, new and a great look. Only bad thing is that one of the screws was stripped so I couldn't tighten all three.", "review_date": "2015-08-31", "review_headline": "new and a great look. Only bad thing is that one of ...", "review_id": "R2RUIDUMDKG7P", "star_rating": 3, "total_votes": 0, "verified_purchase": 1, "vine": 0 } ``` #### Baby_v1_00 - **Size of downloaded dataset files:** 357.40 MB - **Size of the generated dataset:** 956.30 MB - **Total amount of disk used:** 1313.70 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "customer_id": "23299101", "helpful_votes": 2, "marketplace": "US", "product_category": "Baby", "product_id": "B00SN6F9NG", "product_parent": "3470998", "product_title": "Rhoost Nail Clipper for Baby - Ergonomically Designed and Easy to Use Baby Nail Clipper, Natural Wooden Bamboo - Baby Health and Personal Care Kits", "review_body": "\"This is an absolute MUST item to have! I was scared to death to clip my baby's nails. I tried other baby nail clippers and th...", "review_date": "2015-08-31", "review_headline": "If fits so comfortably in my hand and I feel like I have ...", "review_id": "R2DRL5NRODVQ3Z", "star_rating": 5, "total_votes": 2, "verified_purchase": 1, "vine": 0 } ``` #### Beauty_v1_00 - **Size of downloaded dataset files:** 914.08 MB - **Size of the generated dataset:** 2397.39 MB - **Total amount of disk used:** 3311.47 MB An example of 'train' looks as follows. 
``` { "customer_id": "24655453", "helpful_votes": 1, "marketplace": "US", "product_category": "Beauty", "product_id": "B00SAQ9DZY", "product_parent": "292127037", "product_title": "12 New, High Quality, Amber 2 ml (5/8 Dram) Glass Bottles, with Orifice Reducer and Black Cap.", "review_body": "These are great for small mixtures for EO's, especially for traveling. I only gave this 4 stars because of the orifice reducer. The hole is so small it is hard to get the oil out. Just needs to be slightly bigger.", "review_date": "2015-08-31", "review_headline": "Good Product", "review_id": "R2A30ALEGLMCGN", "star_rating": 4, "total_votes": 1, "verified_purchase": 1, "vine": 0 } ``` #### Books_v1_00 - **Size of downloaded dataset files:** 2740.34 MB - **Size of the generated dataset:** 7193.86 MB - **Total amount of disk used:** 9934.20 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "customer_id": "49735028", "helpful_votes": 0, "marketplace": "US", "product_category": "Books", "product_id": "0664254969", "product_parent": "248307276", "product_title": "Presbyterian Creeds: A Guide to the Book of Confessions", "review_body": "\"The Presbyterian Book of Confessions contains multiple Creeds for use by the denomination. This guidebook helps he lay person t...", "review_date": "2015-08-31", "review_headline": "The Presbyterian Book of Confessions contains multiple Creeds for use ...", "review_id": "R2G519UREHRO8M", "star_rating": 3, "total_votes": 1, "verified_purchase": 1, "vine": 0 } ``` ### Data Fields The data fields are the same among all splits. #### Apparel_v1_00 - `marketplace`: a `string` feature. - `customer_id`: a `string` feature. - `review_id`: a `string` feature. - `product_id`: a `string` feature. - `product_parent`: a `string` feature. - `product_title`: a `string` feature. - `product_category`: a `string` feature. - `star_rating`: a `int32` feature. - `helpful_votes`: a `int32` feature. - `total_votes`: a `int32` feature. - `vine`: a classification label, with possible values including `Y` (0), `N` (1). - `verified_purchase`: a classification label, with possible values including `Y` (0), `N` (1). - `review_headline`: a `string` feature. - `review_body`: a `string` feature. - `review_date`: a `string` feature. #### Automotive_v1_00 - `marketplace`: a `string` feature. - `customer_id`: a `string` feature. - `review_id`: a `string` feature. - `product_id`: a `string` feature. - `product_parent`: a `string` feature. - `product_title`: a `string` feature. - `product_category`: a `string` feature. - `star_rating`: a `int32` feature. - `helpful_votes`: a `int32` feature. - `total_votes`: a `int32` feature. - `vine`: a classification label, with possible values including `Y` (0), `N` (1). - `verified_purchase`: a classification label, with possible values including `Y` (0), `N` (1). - `review_headline`: a `string` feature. - `review_body`: a `string` feature. - `review_date`: a `string` feature. #### Baby_v1_00 - `marketplace`: a `string` feature. - `customer_id`: a `string` feature. - `review_id`: a `string` feature. - `product_id`: a `string` feature. - `product_parent`: a `string` feature. - `product_title`: a `string` feature. - `product_category`: a `string` feature. - `star_rating`: a `int32` feature. - `helpful_votes`: a `int32` feature. - `total_votes`: a `int32` feature. - `vine`: a classification label, with possible values including `Y` (0), `N` (1). 
- `verified_purchase`: a classification label, with possible values including `Y` (0), `N` (1). - `review_headline`: a `string` feature. - `review_body`: a `string` feature. - `review_date`: a `string` feature. #### Beauty_v1_00 - `marketplace`: a `string` feature. - `customer_id`: a `string` feature. - `review_id`: a `string` feature. - `product_id`: a `string` feature. - `product_parent`: a `string` feature. - `product_title`: a `string` feature. - `product_category`: a `string` feature. - `star_rating`: a `int32` feature. - `helpful_votes`: a `int32` feature. - `total_votes`: a `int32` feature. - `vine`: a classification label, with possible values including `Y` (0), `N` (1). - `verified_purchase`: a classification label, with possible values including `Y` (0), `N` (1). - `review_headline`: a `string` feature. - `review_body`: a `string` feature. - `review_date`: a `string` feature. #### Books_v1_00 - `marketplace`: a `string` feature. - `customer_id`: a `string` feature. - `review_id`: a `string` feature. - `product_id`: a `string` feature. - `product_parent`: a `string` feature. - `product_title`: a `string` feature. - `product_category`: a `string` feature. - `star_rating`: a `int32` feature. - `helpful_votes`: a `int32` feature. - `total_votes`: a `int32` feature. - `vine`: a classification label, with possible values including `Y` (0), `N` (1). - `verified_purchase`: a classification label, with possible values including `Y` (0), `N` (1). - `review_headline`: a `string` feature. - `review_body`: a `string` feature. - `review_date`: a `string` feature. ### Data Splits | name | train | |----------------|-------:| |Apparel_v1_00 | 5906333| |Automotive_v1_00 | 3514942| |Baby_v1_00 | 1752932| |Beauty_v1_00 | 5115666| |Books_v1_00 | 10319090| |Books_v1_01 | 6106719| |Books_v1_02 | 3105520| |Camera_v1_00 | 1801974| |Digital_Ebook_Purchase_v1_00 | 12520722| |Digital_Ebook_Purchase_v1_01 | 5101693| |Digital_Music_Purchase_v1_00 | 1688884| |Digital_Software_v1_00 | 102084| |Digital_Video_Download_v1_00 | 4057147| |Digital_Video_Games_v1_00 | 145431| |Electronics_v1_00 | 3093869| |Furniture_v1_00 | 792113| |Gift_Card_v1_00 | 149086| |Grocery_v1_00 | 2402458| |Health_Personal_Care_v1_00 | 5331449| |Home_Entertainment_v1_00 | 705889| |Home_Improvement_v1_00 | 2634781| |Home_v1_00 | 6221559| |Jewelry_v1_00 | 1767753| |Kitchen_v1_00 | 4880466| |Lawn_and_Garden_v1_00 | 2557288| |Luggage_v1_00 | 348657| |Major_Appliances_v1_00 | 96901| |Mobile_Apps_v1_00 | 5033376| |Mobile_Electronics_v1_00 | 104975| |Music_v1_00 | 4751577| |Musical_Instruments_v1_00 | 904765| |Office_Products_v1_00 | 2642434| |Outdoors_v1_00 | 2302401| |PC_v1_00 | 6908554| |Personal_Care_Appliances_v1_00 | 85981| |Pet_Products_v1_00 | 2643619| |Shoes_v1_00 | 4366916| |Software_v1_00 | 341931| |Sports_v1_00 | 4850360| |Tools_v1_00 | 1741100| |Toys_v1_00 | 4864249| |Video_DVD_v1_00 | 5069140| |Video_Games_v1_00 | 1785997| |Video_v1_00 | 380604| |Watches_v1_00 | 960872| |Wireless_v1_00 | 9002021| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information https://s3.amazonaws.com/amazon-reviews-pds/LICENSE.txt By accessing the Amazon Customer Reviews Library ("Reviews Library"), you agree that the Reviews Library is an Amazon Service subject to the [Amazon.com Conditions of Use](https://www.amazon.com/gp/help/customer/display.html/ref=footer_cou?ie=UTF8&nodeId=508088) and you agree to be bound by them, with the following additional conditions: In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Library for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Library or its contents, including use of the Reviews Library for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Library with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Library. If you violate any of the foregoing conditions, your license to access and use the Reviews Library will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. ### Citation Information No citation information. ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
cedr
2023-01-25T14:27:50.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ru", "license:apache-2.0", "emotion-classification", "region:us" ]
null
This dataset is designed for the emotion recognition task on Russian-language text. The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9,410 sentences in Russian labeled for 5 emotion categories. The data was collected from different sources: posts from the LiveJournal social network, texts from the online news agency Lenta.ru, and Twitter microblog posts. There are two variants of the corpus: main and enriched. The enriched variant includes tokenization and lemmatization. The dataset comes with predefined train/test splits.
@article{sboev2021data, title={Data-Driven Model for Emotion Detection in Russian Texts}, author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman}, journal={Procedia Computer Science}, volume={190}, pages={637--642}, year={2021}, publisher={Elsevier} }
null
4
635
--- annotations_creators: - crowdsourced language_creators: - found language: - ru license: - apache-2.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification - multi-label-classification pretty_name: The Corpus for Emotions Detecting in Russian-language text sentences (CEDR) tags: - emotion-classification dataset_info: - config_name: main features: - name: text dtype: string - name: labels sequence: class_label: names: '0': joy '1': sadness '2': surprise '3': fear '4': anger - name: source dtype: string splits: - name: train num_bytes: 1418355 num_examples: 7528 - name: test num_bytes: 350275 num_examples: 1882 download_size: 693026 dataset_size: 1768630 - config_name: enriched features: - name: text dtype: string - name: labels sequence: class_label: names: '0': joy '1': sadness '2': surprise '3': fear '4': anger - name: source dtype: string - name: sentences list: list: - name: forma dtype: string - name: lemma dtype: string splits: - name: train num_bytes: 4792366 num_examples: 7528 - name: test num_bytes: 1182343 num_examples: 1882 download_size: 1822522 dataset_size: 5974709 --- # Dataset Card for [cedr] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/sag111/CEDR) - **Repository:** [GitHub](https://github.com/sag111/CEDR) - **Paper:** [ScienceDirect](https://www.sciencedirect.com/science/article/pii/S1877050921013247) - **Leaderboard:** - **Point of Contact:** [@sag111](mailto:sag111@mail.ru) ### Dataset Summary The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). Here are 2 dataset configurations: - "main" - contains "text", "labels", and "source" features; - "enriched" - includes all "main" features and "sentences". Dataset with predefined train/test splits. ### Supported Tasks and Leaderboards This dataset is intended for multi-label emotion classification. ### Languages The data is in Russian. ## Dataset Structure ### Data Instances Each instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all). 
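Both configurations can be loaded with the `datasets` library. The following is a minimal sketch, assuming the dataset id `cedr` and the configuration names `main` and `enriched` described above:

```python
from datasets import load_dataset

# "main" contains text/labels/source; "enriched" additionally contains
# udpipe-tokenized and lemmatized sentences.
cedr = load_dataset("cedr", "main")

sample = cedr["train"][0]
# "labels" is a sequence of class ids; map them back to emotion names.
label_names = cedr["train"].features["labels"].feature.names
print(sample["text"], [label_names[i] for i in sample["labels"]])
```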
An example for an instance from the dataset is shown below:
```
{
  'text': 'Забавно как люди в возрасте удивляются входящим звонкам на мобильник)',
  'labels': [0],
  'source': 'twitter',
  'sentences': [
    [
      {'forma': 'Забавно', 'lemma': 'Забавно'},
      {'forma': 'как', 'lemma': 'как'},
      {'forma': 'люди', 'lemma': 'человек'},
      {'forma': 'в', 'lemma': 'в'},
      {'forma': 'возрасте', 'lemma': 'возраст'},
      {'forma': 'удивляются', 'lemma': 'удивляться'},
      {'forma': 'входящим', 'lemma': 'входить'},
      {'forma': 'звонкам', 'lemma': 'звонок'},
      {'forma': 'на', 'lemma': 'на'},
      {'forma': 'мобильник', 'lemma': 'мобильник'},
      {'forma': ')', 'lemma': ')'}
    ]
  ]
}
```
Emotion label codes: {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"}

### Data Fields

The main configuration includes:
- text: the text of the sentence;
- labels: the emotion annotations;
- source: the tag name of the corresponding source.

In addition to the above, the enriched configuration includes:
- sentences: the text tokenized and lemmatized with [udpipe](https://ufal.mff.cuni.cz/udpipe)
  - 'forma': the original word form;
  - 'lemma': the lemma of this word

### Data Splits

The dataset includes predefined train/test splits, with 7,528 and 1,882 examples respectively.

## Dataset Creation

### Curation Rationale

The dataset consists of sentences in Russian from several sources (blogs, microblogs, news), which makes it possible to develop methods for analysing various types of texts. The crowdsourcing-based methodology used to build the dataset can be reused to expand the number of examples and improve the accuracy of supervised classifiers.

### Source Data

#### Initial Data Collection and Normalization

Data was collected from several sources: posts from the LiveJournal social network, texts from the online news agency Lenta.ru, and Twitter microblog posts. Only those sentences were selected that contained marker words from the dictionary of [the emotive vocabulary of the Russian language](http://lexrus.ru/default.aspx?p=2876). The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary. In total, 3,069 sentences were selected from LiveJournal posts, 2,851 sentences from Lenta.ru, and 3,490 sentences from Twitter. After selection, sentences were offered to annotators for labeling.

#### Who are the source language producers?

Russian-speaking LiveJournal and Twitter users, and authors of news articles on the Lenta.ru site.

### Annotations

#### Annotation process

Annotating sentences with emotion labels was performed with the help of [a crowdsourcing platform](https://yandex.ru/support/toloka/index.html?lang=en). The annotators' task was: "What emotions did the author express in the sentence?". The annotators were allowed to assign an arbitrary number of the following emotion labels: "joy", "sadness", "anger", "fear", and "surprise". If the accuracy of an annotator on the control sentences (including the trial run) fell below 70%, or if the accuracy was less than 66% over the last six control samples, the annotator was dismissed. Sentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A label of a specific emotion was assigned to a sentence if it was put by more than half of the annotators.

#### Who are the annotators?

Only those of the 30% of the best-performing active users (by the platform's internal rating) who spoke Russian and were over 18 years old were allowed into the annotation process.
Moreover, before a platform user could be employed as an annotator, they underwent a training task, after which they were to mark 25 trial samples with more than 80% agreement compared to the annotation that the authors had performed themselves. ### Personal and Sensitive Information The text of the sentences may contain profanity. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Researchers at AI technology lab at NRC "Kurchatov Institute". See the author [list](https://www.sciencedirect.com/science/article/pii/S1877050921013247). ### Licensing Information The GitHub repository which houses this dataset has an Apache License 2.0. ### Citation Information If you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset, the collection and preparation of which is described here: ``` @article{sboev2021data, title={Data-Driven Model for Emotion Detection in Russian Texts}, author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman}, journal={Procedia Computer Science}, volume={190}, pages={637--642}, year={2021}, publisher={Elsevier} } ``` ### Contributions Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset.
seungheondoh/LP-MusicCaps-MTT
2023-08-04T10:39:28.000Z
[ "size_categories:10K<n<100K", "language:en", "license:mit", "art", "music", "text-to-music", "music-to-text", "arxiv:2307.16372", "region:us" ]
seungheondoh
null
null
null
1
635
---
license: mit
language:
- en
tags:
- art
- music
- text-to-music
- music-to-text
pretty_name: LP-MusicCaps-MTT
size_categories:
- 10K<n<100K
---

======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================

# Dataset Card for LP-MusicCaps-MTT

## Dataset Description

- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)

## Dataset Summary

**LP-MusicCaps** is a Large Language Model-based pseudo music caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and the Million Song Dataset ECALS subset.

- **LP-MusicCaps MTT (This Repo)**: 22k audio clips with 88k captions. We utilize the 188 unique tags in [Magnatagtune](https://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset) to perform tag-to-caption generation through an LLM. Magnatagtune consists of 26k music clips from 5,223 unique songs, covering genre, instrument, vocal, mood, perceptual tempo, origin, and sonority tags. We used the full 188-tag vocabulary and did not generate captions for tracks that have no associated tags (which reduced the set to 22k).
- [LP-MusicCaps MSD](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD): 0.5M audio clips with 2.2M captions
- [LP-MusicCaps MC](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MC): 6k audio clips with 22k captions

## Data Instances

Each instance in LP-MusicCaps MTT (This Repo) represents an audio clip paired with multiple pseudo captions and meta-attributes:

```
{
  'track_id': '1541',
  'title': 'Eyes Closed (The Seldon Plan)',
  'artist_name': 'Magnatune.com',
  'release': 'Magnatune At The CC Salon',
  'tag_top50': ['guitar', 'country', 'male', 'singing'],
  'tag_top188': ['guitar', 'male singer', 'country', 'male vocals', 'male', 'singing'],
  'caption_writing': 'This country song features twangy guitar riffs and heartfelt male vocals, with a male singer singing about love and loss.',
  'caption_summary': 'A male singer with a country style voice accompanies his guitar while singing.',
  'caption_paraphrase': 'This male artist croons in a deep, soulful voice over the twangy sounds of his guitar, crafting a classic country tune perfect for fans of male vocals and raw, authentic singing.',
  'caption_attribute_prediction': 'A twangy mix of acoustic guitar and male vocals come together in this heartfelt country song. With lyrics that evoke a sense of nostalgia, the male singer weaves a story of love and loss through his storytelling. His emotive singing grips you from start to finish, as he sings about the trials and tribulations of life. This song is a must-listen for any fan of country.',
  'pseudo_attribute': ['acoustic', 'twangy', 'heartfelt', 'storytelling', 'nostalgic'],
  'path': 'e/magnatune_com-magnatune_at_the_cc_salon-01-eyes_closed_the_seldon_plan-30-59.mp3'
}
```

## Pseudo Caption Example

Input tags: *"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*

Output pseudo caption: *"instrumental track has a joyful and playful vibe, perfect for a video game theme.
With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*

[More information on pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)

## Data Fields

| Name | Type | Description |
|------------------------------|-----------------|----------------------------------------------------------------------|
| track_id | string | Unique identifier for the track |
| title | string | Title of the song |
| artist_name | string | Name of the artist performing the song |
| release | string | Release name or album name of the song |
| tag_top50 | list of strings | List of top 50 tags associated with the song |
| tag_top188 | list of strings | List of top 188 tags associated with the song |
| caption_writing | string | Pseudo caption generated through a writing instruction |
| caption_summary | string | Pseudo caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo caption generated through a paraphrase instruction |
| caption_attribute_prediction | string | Pseudo caption generated through an attribute_prediction instruction |
| pseudo_attribute | list of strings | List of pseudo-attributes used in caption_attribute_prediction |
| path | string | File path or location of the audio clip |

## Data Splits

We used the full 188-tag vocabulary and did not generate captions for tracks that have no associated tags (26k => 22k). 4k examples have empty tags and captions.

- train: 18706
- valid: 1825
- test: 5329

## Considerations for Using the Data

The LP-MusicCaps dataset is intended for research purposes. Due to labeling errors, we recommend not using `caption_attribute_prediction` and `pseudo_attribute` unless it is specifically for large-scale pretraining. Additionally, the field "is_crawled" indicates the samples used in the reference paper mentioned below.

## Discussion of Biases

It will be described in a paper to be released soon.

## Other Known Limitations

It will be described in a paper to be released soon.
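As a usage sketch (assuming the repo id `seungheondoh/LP-MusicCaps-MTT` and the train/valid/test split names listed above), the captions can be loaded like this:

```python
from datasets import load_dataset

# Split names ("train", "valid", "test") are taken from the Data Splits
# section above; adjust if the repo uses different names.
mtt = load_dataset("seungheondoh/LP-MusicCaps-MTT", split="train")

row = mtt[0]
# Four pseudo captions per track; the card advises against relying on
# caption_attribute_prediction because of labeling errors.
print(row["track_id"], row["tag_top50"])
print(row["caption_writing"])
```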
Siddharth63/biological_dataset
2023-09-11T14:01:11.000Z
[ "license:other", "region:us" ]
Siddharth63
null
null
null
0
634
--- license: other dataset_info: features: - name: index dtype: string - name: text dtype: string - name: doi dtype: string splits: - name: train num_bytes: 30985524742.471012 num_examples: 22538431 - name: validation num_bytes: 3442837304.52899 num_examples: 2504271 download_size: 20157724058 dataset_size: 34428362047.0 ---
TREC-AToMiC/AToMiC-Images-v0.2
2023-02-14T21:29:39.000Z
[ "size_categories:100M<n<1B", "license:cc-by-sa-4.0", "arxiv:2103.01913", "region:us" ]
TREC-AToMiC
null
null
null
1
633
--- dataset_info: features: - name: image_url dtype: string - name: image_id dtype: string - name: language sequence: string - name: caption_reference_description sequence: string - name: caption_alt_text_description sequence: string - name: caption_attribution_description sequence: string - name: image dtype: image splits: - name: train num_bytes: 180043531167.75 num_examples: 11019202 download_size: 174258428914 dataset_size: 180043531167.75 license: cc-by-sa-4.0 size_categories: - 100M<n<1B --- # Dataset Card for "AToMiC-All-Images_wi-pixels" ## Dataset Description - **Homepage:** [AToMiC homepage](https://trec-atomic.github.io/) - **Source:** [WIT](https://github.com/google-research-datasets/wit) - **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913) ### Languages The dataset contains 108 languages in Wikipedia. ### Data Instances Each instance is an image, its representation in bytes, and its associated captions. ### Intended Usage 1. Image collection for Text-to-Image retrieval 2. Image--Caption Retrieval/Generation/Translation ### Licensing Information [CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information TBA ### Acknowledgement Thanks to: [img2dataset](https://github.com/rom1504/img2dataset) [Datasets](https://github.com/huggingface/datasets) [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
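Given the size of the collection (roughly 180 GB of images), streaming is usually preferable to a full download. A minimal sketch, assuming the repo id `TREC-AToMiC/AToMiC-Images-v0.2` and the `train` split shown above:

```python
from datasets import load_dataset

# Stream the image collection instead of downloading all ~180 GB up front.
images = load_dataset("TREC-AToMiC/AToMiC-Images-v0.2", split="train", streaming=True)

for example in images.take(3):
    # "image" is decoded to a PIL.Image; the caption fields are lists of strings.
    print(example["image_id"], example["language"])
    print(example["caption_reference_description"])
```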
JonasGeiping/the_pile_WordPiecex32768_2efdb9d060d1ae95faf952ec1a50f020
2023-06-13T16:25:54.000Z
[ "arxiv:2212.14034", "arxiv:2101.00027", "arxiv:2201.07311", "region:us" ]
JonasGeiping
null
null
null
0
631
---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 43860000000
    num_examples: 85000000
  download_size: 24001057282
  dataset_size: 43860000000
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: other
multilinguality:
- monolingual
pretty_name: pretokenized,filtered,sorted subset of the Pile
size_categories:
- 10B<n<100B
source_datasets:
- the-pile
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: the-pile-cramming
---

# Dataset Card for "the_pile_WordPiecex32768_2efdb9d060d1ae95faf952ec1a50f020"

## Dataset Description

- **Repository:** https://github.com/JonasGeiping/cramming
- **Paper:** https://arxiv.org/abs/2212.14034
- **Raw Data Source Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
- **Raw Data Source Datasheet:** [Datasheet for the Pile](https://arxiv.org/abs/2201.07311)

### Dataset Summary

This is a preprocessed, tokenized dataset for the cramming project. Use it only with the tokenizer uploaded here. This version is `2efdb9d060d1ae95faf952ec1a50f020`, which corresponds to a specific dataset construction setup, described below. The raw data source is the Pile, an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality datasets combined together.

### Languages

This dataset is in English (`EN`).

### Data Splits

This preprocessed subset contains only a train split.

## Dataset Creation

The configuration used to create this dataset with the cramming project code (https://github.com/JonasGeiping/cramming) is:

```
# This is a slice of the pile
name: the_pile
defaults:
  - sources:
    - the_pile
#
# Preprocessing
normalizer:
  force_lowercase: True
  strip_accents: True
  force_english_keyboard: True
  whitespace_escape: False
tokenizer: WordPiece
vocab_size: 32768

# Dataset Formation
seq_length: 128
include_cls_token_in_corpus: False
include_sep_token_in_corpus: True
use_type_ids: False
max_entries_in_raw_dataset: 16e6
max_seq_in_tokenized_dataset: 85e6

# Data Cleaning:
named_entity_simplification: False
remove_whitespaces: False
remove_trash: True
trash_cutoff: 0.25
deduplicate_entries: False
deduplication_threshold: 75

# Data Order:
ordering: sentence-length-curriculum
```

## Considerations for Using the Data

Limitations and bias: This training data was further filtered and sorted beyond the normal preprocessing. These modifications were not tested for unintended consequences.

## Additional Information

### Dataset Curators

This dataset is a filtered, sorted and preprocessed subset of the Pile, made by Jonas Geiping. The original dataset was primarily curated by Leo Gao and Stella Biderman, with assistance from other authors of the Pile paper.
### Licensing Information Please refer to the specific license depending on the subset you use at https://huggingface.co/datasets/EleutherAI/pile ### Citation Information Filtered version for the cramming project: ``` @article{geiping_cramming_2022, title = {Cramming: {{Training}} a {{Language Model}} on a {{Single GPU}} in {{One Day}}}, shorttitle = {Cramming}, author = {Geiping, Jonas and Goldstein, Tom}, year = {2022}, month = dec, eprint = {2212.14034}, primaryclass = {cs}, publisher = {{arXiv}}, doi = {10.48550/arXiv.2212.14034}, url = {http://arxiv.org/abs/2212.14034}, urldate = {2023-01-10}, archiveprefix = {arxiv}, keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning}, journal = {arxiv:2212.14034[cs]} } ``` Original Data Curation: ``` @article{gao2020pile, title={The {P}ile: An 800{GB} dataset of diverse text for language modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } @article{biderman2022datasheet, title={Datasheet for the pile}, author={Biderman, Stella and Bicheno, Kieran and Gao, Leo}, journal={arXiv preprint arXiv:2201.07311}, year={2022} } ```
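Since the corpus is stored as fixed-length `input_ids` sequences, text can only be recovered with the matching tokenizer. A minimal sketch, assuming the WordPiece tokenizer is stored as `tokenizer.json` in this same dataset repository (the exact filename is an assumption):

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

repo = "JonasGeiping/the_pile_WordPiecex32768_2efdb9d060d1ae95faf952ec1a50f020"

# Stream to avoid materializing the full ~24 GB train split.
data = load_dataset(repo, split="train", streaming=True)

# Assumption: the matching tokenizer file is named tokenizer.json in this repo.
tok_path = hf_hub_download(repo_id=repo, filename="tokenizer.json", repo_type="dataset")
tokenizer = Tokenizer.from_file(tok_path)

first = next(iter(data))
print(len(first["input_ids"]))           # sequences are 128 tokens long
print(tokenizer.decode(first["input_ids"]))
```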
castorini/mr-tydi
2022-10-12T20:25:19.000Z
[ "task_categories:text-retrieval", "multilinguality:multilingual", "language:ar", "language:bn", "language:en", "language:fi", "language:id", "language:ja", "language:ko", "language:ru", "language:sw", "language:te", "language:th", "license:apache-2.0", "region:us" ]
castorini
null
null
null
9
630
---
language:
- ar
- bn
- en
- fi
- id
- ja
- ko
- ru
- sw
- te
- th
multilinguality:
- multilingual
task_categories:
- text-retrieval
license: apache-2.0
---

# Dataset Summary

Mr. TyDi is a multilingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations.

This dataset stores the queries, judgements, and example training data of Mr. TyDi. To access the corpus, please refer to [castorini/mr-tydi-corpus](https://huggingface.co/datasets/castorini/mr-tydi-corpus).

# Dataset Structure

The only configuration here is the `language`. For each language, there are three splits: `train`, `dev`, and `test`. The negative examples in the training set are sampled from the top-30 BM25 runfiles for each language. In addition, we combine the **training** data for all languages under the `combined` configuration.

An example from the `train` set looks as follows:
```
{
    'query_id': '1',
    'query': 'When was quantum field theory developed?',
    'positive_passages': [
        {
            'docid': '25267#12',
            'title': 'Quantum field theory',
            'text': 'Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.'
        },
        ...
    ],
    'negative_passages': [
        {
            'docid': '346489#8',
            'title': 'Local quantum field theory',
            'text': 'More recently, the approach has been further implemented to include an algebraic version of quantum field ...'
        },
        ...
    ],
}
```

An example from the `dev` and `test` sets looks as follows. We only provide the docids of positive passages here to save space, and no candidate passages are provided at this point. Note that to perform retrieval, this dataset needs to be used together with [castorini/mr-tydi-corpus](https://huggingface.co/datasets/castorini/mr-tydi-corpus).
```
{
    'query_id': '0',
    'query': 'Is Creole a pidgin of French?',
    'positive_passages': [
        {
            'docid': '3716905#1',
            'title': '',
            'text': ''
        },
        ...
    ]
}
```

# Load Dataset

An example of loading the dataset:
```
language = 'english'

# to load all train, dev and test sets
dataset = load_dataset('castorini/mr-tydi', language)

# or to load a specific set:
set_name = 'train'
dataset = load_dataset('castorini/mr-tydi', language, split=set_name)
```
Note that the 'combined' option has only the 'train' set.

# Citation Information
```
@article{mrtydi,
    title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
    author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
    year={2021},
    journal={arXiv:2108.08787},
}
```
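Since the `dev` and `test` splits only carry query-side information, the passage texts have to come from the corpus repo. A minimal sketch of pairing the two, assuming `castorini/mr-tydi-corpus` exposes the same per-language configuration names and `docid`/`title`/`text` fields:

```python
from datasets import load_dataset

language = 'english'

# Queries and relevance judgements (this repo).
queries = load_dataset('castorini/mr-tydi', language, split='test')

# Passage collection (separate repo); assumption: same per-language configs
# and a single split containing docid/title/text records.
corpus = load_dataset('castorini/mr-tydi-corpus', language, split='train')

print(queries[0]['query'], queries[0]['positive_passages'][0]['docid'])
print(corpus[0]['docid'], corpus[0]['text'][:100])
```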
conceptofmind/niv2_submix_original
2023-04-29T00:58:20.000Z
[ "region:us" ]
conceptofmind
null
null
null
18
630
--- dataset_info: features: - name: inputs dtype: string - name: targets dtype: string - name: task_source dtype: string - name: task_name dtype: string - name: template_type dtype: string splits: - name: train num_bytes: 13104211362 num_examples: 10066896 download_size: 7612522941 dataset_size: 13104211362 --- # Dataset Card for "niv2_submix_original" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_electricity_sgosdt_l256_dim7_d3_sd0
2023-09-07T02:45:46.000Z
[ "region:us" ]
yzhuang
null
null
null
0
630
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 205720000 num_examples: 10000 - name: validation num_bytes: 205720000 num_examples: 10000 download_size: 102866704 dataset_size: 411440000 --- # Dataset Card for "autotree_automl_10000_electricity_sgosdt_l256_dim7_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hyperpartisan_news_detection
2023-06-13T07:46:19.000Z
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "bias-classification", "region:us" ]
null
Hyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4. Given a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person. There are 2 parts: - byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed. - bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or MediaBiasFactCheck.com.
@inproceedings{kiesel-etal-2019-semeval, title = "{S}em{E}val-2019 Task 4: Hyperpartisan News Detection", author = "Kiesel, Johannes and Mestre, Maria and Shukla, Rishabh and Vincent, Emmanuel and Adineh, Payam and Corney, David and Stein, Benno and Potthast, Martin", booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation", month = jun, year = "2019", address = "Minneapolis, Minnesota, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S19-2145", doi = "10.18653/v1/S19-2145", pages = "829--839", abstract = "Hyperpartisan news is news that takes an extreme left-wing or right-wing standpoint. If one is able to reliably compute this meta information, news articles may be automatically tagged, this way encouraging or discouraging readers to consume the text. It is an open question how successfully hyperpartisan news detection can be automated, and the goal of this SemEval task was to shed light on the state of the art. We developed new resources for this purpose, including a manually labeled dataset with 1,273 articles, and a second dataset with 754,000 articles, labeled via distant supervision. The interest of the research community in our task exceeded all our expectations: The datasets were downloaded about 1,000 times, 322 teams registered, of which 184 configured a virtual machine on our shared task cloud service TIRA, of which in turn 42 teams submitted a valid run. The best team achieved an accuracy of 0.822 on a balanced sample (yes : no hyperpartisan) drawn from the manually tagged corpus; an ensemble of the submitted systems increased the accuracy by 0.048.", }
null
8
629
--- annotations_creators: - crowdsourced - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: HyperpartisanNewsDetection tags: - bias-classification dataset_info: - config_name: byarticle features: - name: text dtype: string - name: title dtype: string - name: hyperpartisan dtype: bool - name: url dtype: string - name: published_at dtype: string splits: - name: train num_bytes: 2803943 num_examples: 645 download_size: 1000352 dataset_size: 2803943 - config_name: bypublisher features: - name: text dtype: string - name: title dtype: string - name: hyperpartisan dtype: bool - name: url dtype: string - name: published_at dtype: string - name: bias dtype: class_label: names: '0': right '1': right-center '2': least '3': left-center '4': left splits: - name: train num_bytes: 2805711609 num_examples: 600000 - name: validation num_bytes: 960356598 num_examples: 150000 download_size: 1003195420 dataset_size: 5611423218 --- # Dataset Card for "hyperpartisan_news_detection" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://pan.webis.de/semeval19/semeval19-web/](https://pan.webis.de/semeval19/semeval19-web/) - **Repository:** https://github.com/pan-webis-de/pan-code/tree/master/semeval19 - **Paper:** https://aclanthology.org/S19-2145 - **Data:** https://doi.org/10.5281/zenodo.1489920 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.00 GB - **Size of the generated dataset:** 5.61 GB - **Total amount of disk used:** 6.62 GB ### Dataset Summary Hyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4. Given a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person. There are 2 parts: - byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed. - bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or MediaBiasFactCheck.com. 
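A minimal loading sketch (the configuration names `byarticle` and `bypublisher` and the splits below are taken from the YAML metadata and tables of this card):
```python
from datasets import load_dataset

# "byarticle": a single train split of 645 articles labelled by crowdsourcing
byarticle = load_dataset("hyperpartisan_news_detection", "byarticle", split="train")

# "bypublisher": train and validation splits labelled by the publisher's overall bias
bypublisher = load_dataset("hyperpartisan_news_detection", "bypublisher")

print(byarticle[0]["title"], byarticle[0]["hyperpartisan"])
```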
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### byarticle - **Size of downloaded dataset files:** 1.00 MB - **Size of the generated dataset:** 2.80 MB - **Total amount of disk used:** 3.81 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "hyperpartisan": true, "published_at": "2020-01-01", "text": "\"<p>This is a sample article which will contain lots of text</p>\\n \\n<p>Lorem ipsum dolor sit amet, consectetur adipiscing el...", "title": "Example article 1", "url": "http://www.example.com/example1" } ``` #### bypublisher - **Size of downloaded dataset files:** 1.00 GB - **Size of the generated dataset:** 5.61 GB - **Total amount of disk used:** 6.61 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "bias": 3, "hyperpartisan": false, "published_at": "2020-01-01", "text": "\"<p>This is a sample article which will contain lots of text</p>\\n \\n<p>Phasellus bibendum porta nunc, id venenatis tortor fi...", "title": "Example article 4", "url": "https://example.com/example4" } ``` ### Data Fields The data fields are the same among all splits. #### byarticle - `text`: a `string` feature. - `title`: a `string` feature. - `hyperpartisan`: a `bool` feature. - `url`: a `string` feature. - `published_at`: a `string` feature. #### bypublisher - `text`: a `string` feature. - `title`: a `string` feature. - `hyperpartisan`: a `bool` feature. - `url`: a `string` feature. - `published_at`: a `string` feature. - `bias`: a classification label, with possible values including `right` (0), `right-center` (1), `least` (2), `left-center` (3), `left` (4). ### Data Splits #### byarticle | |train| |---------|----:| |byarticle| 645| #### bypublisher | |train |validation| |-----------|-----:|---------:| |bypublisher|600000| 150000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The collection (including labels) are licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @inproceedings{kiesel-etal-2019-semeval, title = "{S}em{E}val-2019 Task 4: Hyperpartisan News Detection", author = "Kiesel, Johannes and Mestre, Maria and Shukla, Rishabh and Vincent, Emmanuel and Adineh, Payam and Corney, David and Stein, Benno and Potthast, Martin", booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation", month = jun, year = "2019", address = "Minneapolis, Minnesota, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S19-2145", doi = "10.18653/v1/S19-2145", pages = "829--839", abstract = "Hyperpartisan news is news that takes an extreme left-wing or right-wing standpoint. If one is able to reliably compute this meta information, news articles may be automatically tagged, this way encouraging or discouraging readers to consume the text. It is an open question how successfully hyperpartisan news detection can be automated, and the goal of this SemEval task was to shed light on the state of the art. We developed new resources for this purpose, including a manually labeled dataset with 1,273 articles, and a second dataset with 754,000 articles, labeled via distant supervision. The interest of the research community in our task exceeded all our expectations: The datasets were downloaded about 1,000 times, 322 teams registered, of which 184 configured a virtual machine on our shared task cloud service TIRA, of which in turn 42 teams submitted a valid run. The best team achieved an accuracy of 0.822 on a balanced sample (yes : no hyperpartisan) drawn from the manually tagged corpus; an ensemble of the submitted systems increased the accuracy by 0.048.", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
lamini/lamini_docs_evaluation
2023-07-24T03:08:13.000Z
[ "region:us" ]
lamini
null
null
null
0
629
--- dataset_info: features: - name: predicted_answer dtype: string - name: target_answer dtype: string splits: - name: train num_bytes: 744520 num_examples: 139 download_size: 86086 dataset_size: 744520 --- # Dataset Card for "lamini_docs_evaluation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceM4/SEED
2023-08-23T13:32:09.000Z
[ "region:us" ]
HuggingFaceM4
null
null
null
2
626
--- configs: - config_name: Instance_Attributes data_files: - split: test path: Instance_Attributes/test-* - config_name: Instance_Identity data_files: - split: test path: Instance_Identity/test-* - config_name: Instance_Interaction data_files: - split: test path: Instance_Interaction/test-* - config_name: Instance_Location data_files: - split: test path: Instance_Location/test-* - config_name: Instances_Counting data_files: - split: test path: Instances_Counting/test-* - config_name: Scene_Understanding data_files: - split: test path: Scene_Understanding/test-* - config_name: Spatial_Relation data_files: - split: test path: Spatial_Relation/test-* - config_name: Text_Understanding data_files: - split: test path: Text_Understanding/test-* - config_name: Visual_Reasoning data_files: - split: test path: Visual_Reasoning/test-* - config_name: default data_files: - split: test path: data/test-* dataset_info: - config_name: Instance_Attributes features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 1334222748.4732733 num_examples: 4649 download_size: 0 dataset_size: 1334222748.4732733 - config_name: Instance_Identity features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 584470534.4340912 num_examples: 1831 download_size: 0 dataset_size: 584470534.4340912 - config_name: Instance_Interaction features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 30580182.345886324 num_examples: 97 download_size: 29830492 dataset_size: 30580182.345886324 - config_name: Instance_Location features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 309244446.6420291 num_examples: 978 download_size: 0 dataset_size: 309244446.6420291 - config_name: Instances_Counting features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 659598672.0028641 num_examples: 2447 download_size: 712591981 dataset_size: 659598672.0028641 - config_name: Scene_Understanding features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: 
test num_bytes: 967763011.0467318 num_examples: 3158 download_size: 960725386 dataset_size: 967763011.0467318 - config_name: Spatial_Relation features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 197810012.16749808 num_examples: 657 download_size: 185916519 dataset_size: 197810012.16749808 - config_name: Text_Understanding features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 16869944.571137495 num_examples: 85 download_size: 15415331 dataset_size: 16869944.571137495 - config_name: Visual_Reasoning features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 114655703.95348836 num_examples: 331 download_size: 111131917 dataset_size: 114655703.95348836 - config_name: default features: - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: choice_a dtype: string - name: choice_b dtype: string - name: choice_c dtype: string - name: choice_d dtype: string - name: question dtype: string - name: question_type_id dtype: string - name: image dtype: image splits: - name: test num_bytes: 3877231682.444 num_examples: 14233 download_size: 4251234968 dataset_size: 3877231682.444 --- # Dataset Card for "SEED" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pg19
2023-07-28T09:21:25.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1911.05507", "region:us" ]
null
This repository contains the PG-19 language modeling benchmark. It includes a set of books extracted from the Project Gutenberg books library, that were published before 1919. It also contains metadata of book titles and publication dates. PG-19 is over double the size of the Billion Word benchmark and contains documents that are 20X longer, on average, than the WikiText long-range language modelling benchmark. Books are partitioned into a train, validation, and test set. Book metadata is stored in metadata.csv which contains (book_id, short_book_title, publication_date). Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing of the text that has been applied is the removal of boilerplate license text, and the mapping of offensive discriminatory words as specified by Ofcom to placeholder tokens. Users are free to model the data at the character-level, subword-level, or via any mechanism that can model an arbitrary string of text. To compare models we propose to continue measuring the word-level perplexity, by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table. One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
@article{raecompressive2019, author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and Hillier, Chloe and Lillicrap, Timothy P}, title = {Compressive Transformers for Long-Range Sequence Modelling}, journal = {arXiv preprint}, url = {https://arxiv.org/abs/1911.05507}, year = {2019}, }
null
23
625
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation task_ids: - language-modeling paperswithcode_id: pg-19 pretty_name: PG-19 dataset_info: features: - name: short_book_title dtype: string - name: publication_date dtype: int32 - name: url dtype: string - name: text dtype: string splits: - name: train num_bytes: 11453688452 num_examples: 28602 - name: validation num_bytes: 17402295 num_examples: 50 - name: test num_bytes: 40482852 num_examples: 100 download_size: 11740397875 dataset_size: 11511573599 --- # Dataset Card for "pg19" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/deepmind/pg19](https://github.com/deepmind/pg19) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 11.74 GB - **Size of the generated dataset:** 11.51 GB - **Total amount of disk used:** 23.25 GB ### Dataset Summary This repository contains the PG-19 language modeling benchmark. It includes a set of books extracted from the Project Gutenberg books library, that were published before 1919. It also contains metadata of book titles and publication dates. PG-19 is over double the size of the Billion Word benchmark and contains documents that are 20X longer, on average, than the WikiText long-range language modelling benchmark. Books are partitioned into a train, validation, and test set. Book metadata is stored in metadata.csv which contains (book_id, short_book_title, publication_date). Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing of the text that has been applied is the removal of boilerplate license text, and the mapping of offensive discriminatory words as specified by Ofcom to placeholder tokens. Users are free to model the data at the character-level, subword-level, or via any mechanism that can model an arbitrary string of text. 
To compare models we propose to continue measuring the word-level perplexity, by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table. One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 11.74 GB - **Size of the generated dataset:** 11.51 GB - **Total amount of disk used:** 23.25 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "publication_date": 1907, "short_book_title": "La Fiammetta by Giovanni Boccaccio", "text": "\"\\n\\n\\n\\nProduced by Ted Garvin, Dave Morgan and PG Distributed Proofreaders\\n\\n\\n\\n\\nLA FIAMMETTA\\n\\nBY\\n\\nGIOVANNI BOCCACCIO\\n...", "url": "http://www.gutenberg.org/ebooks/10006" } ``` ### Data Fields The data fields are the same among all splits. #### default - `short_book_title`: a `string` feature. - `publication_date`: a `int32` feature. - `url`: a `string` feature. - `text`: a `string` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|28602| 50| 100| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html). ### Citation Information ``` @article{raecompressive2019, author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and Hillier, Chloe and Lillicrap, Timothy P}, title = {Compressive Transformers for Long-Range Sequence Modelling}, journal = {arXiv preprint}, url = {https://arxiv.org/abs/1911.05507}, year = {2019}, } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
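The word-level perplexity measure described in the Dataset Summary above can be made concrete with a small sketch (the likelihood and word-count numbers below are purely hypothetical placeholders):
```python
import math

def word_level_perplexity(total_log_likelihood: float, num_words: int) -> float:
    """Word-level perplexity: the total (natural) log-likelihood of a split under any
    chosen tokenization, normalised by the number of *words* in that split."""
    return math.exp(-total_log_likelihood / num_words)

# purely hypothetical numbers for illustration
print(word_level_perplexity(total_log_likelihood=-3.2e9, num_words=6.97e8))
```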
phiyodr/coco2017
2023-06-26T11:40:47.000Z
[ "task_categories:image-to-text", "task_ids:image-captioning", "size_categories:100K<n<1M", "language:en", "coco", "image-captioning", "region:us" ]
phiyodr
null
null
null
0
619
---
language:
- en
pretty_name: COCO2017
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
task_ids:
- image-captioning
tags:
- coco
- image-captioning
dataset_info:
  features:
  - name: license
    dtype: int64
  - name: file_name
    dtype: string
  - name: coco_url
    dtype: string
  - name: height
    dtype: int64
  - name: width
    dtype: int64
  - name: date_captured
    dtype: string
  - name: flickr_url
    dtype: string
  - name: image_id
    dtype: int64
  - name: ids
    sequence: int64
  - name: captions
    sequence: string
  splits:
  - name: train
    num_bytes: 64026361
    num_examples: 118287
  - name: validation
    num_bytes: 2684731
    num_examples: 5000
  download_size: 30170127
  dataset_size: 66711092
---

# coco2017

Image-text pairs from [MS COCO2017](https://cocodataset.org/#download).

## Data origin

* Data originates from [cocodataset.org](http://images.cocodataset.org/annotations/annotations_trainval2017.zip)
* `phiyodr/coco2017`: a dense format, where one row corresponds to one image with several captions and caption ids.
* `phiyodr/coco2017-long`: a long format, where one row corresponds to one caption. Since each image has five (sometimes more) captions, there are five or more rows with the same image details, and the long version is roughly five times as long as this one.

## Format

```python
DatasetDict({
    train: Dataset({
        features: ['license', 'file_name', 'coco_url', 'height', 'width', 'date_captured', 'flickr_url', 'image_id', 'ids', 'captions'],
        num_rows: 118287
    })
    validation: Dataset({
        features: ['license', 'file_name', 'coco_url', 'height', 'width', 'date_captured', 'flickr_url', 'image_id', 'ids', 'captions'],
        num_rows: 5000
    })
})
```

## Usage

* Download image data and unzip

```bash
cd PATH_TO_IMAGE_FOLDER
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
#wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip # zip not needed: everything you need is in load_dataset("phiyodr/coco2017")
unzip train2017.zip
unzip val2017.zip
```

* Load dataset in Python

```python
import os
from datasets import load_dataset

PATH_TO_IMAGE_FOLDER = "COCO2017"

def create_full_path(example, folder):
    """Create the full image path from the image `folder` ("train2017" or "val2017") and `file_name`."""
    example["image_path"] = os.path.join(PATH_TO_IMAGE_FOLDER, folder, example["file_name"])
    return example

dataset = load_dataset("phiyodr/coco2017")
# the zips above unpack into train2017/ and val2017/, so map each split with its folder name
dataset["train"] = dataset["train"].map(create_full_path, fn_kwargs={"folder": "train2017"})
dataset["validation"] = dataset["validation"].map(create_full_path, fn_kwargs={"folder": "val2017"})
```
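To sanity-check the resulting paths, a small follow-up sketch (this assumes the images were downloaded and unzipped as shown above and that Pillow is installed):
```python
from PIL import Image

sample = dataset["train"][0]
image = Image.open(sample["image_path"])   # opens e.g. COCO2017/train2017/<file_name>
print(sample["captions"][0], image.size)
```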
nahyeon00/mixsnips_clean
2023-07-19T08:38:38.000Z
[ "region:us" ]
nahyeon00
null
null
null
0
619
--- dataset_info: features: - name: token sequence: string - name: tag sequence: string - name: intent sequence: string splits: - name: train num_bytes: 16319528 num_examples: 39776 - name: validation num_bytes: 915087 num_examples: 2198 - name: test num_bytes: 902367 num_examples: 2199 download_size: 3076227 dataset_size: 18136982 --- # Dataset Card for "mixsnips_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BeIR/dbpedia-entity
2022-10-23T06:03:56.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
2
617
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity 
Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)

All these datasets have been preprocessed and can be used for your experiments; a short loading sketch is included at the end of this card.

### Supported Tasks and Leaderboards

The benchmark supports zero-shot evaluation of retrieval models, reported with ranking metrics such as nDCG@10 and Recall@100. The current best performing models can be found on the official BEIR leaderboard linked in the Dataset Description above.

### Languages

All tasks are in English (`en`).

## Dataset Structure

All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:

- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the 1st row as a header. For example: `q1 doc1 1`

### Data Instances

A high level example of any beir dataset:

```python
corpus = {
    "doc1" : {
        "title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
                 one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
                 its influence on the philosophy of science. He is best known to the general public for his mass–energy \
                 equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
                 Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
                 of the photoelectric effect', a pivotal step in the development of quantum theory."
    },
    "doc2" : {
        "title": "", # Keep title an empty string if not present
        "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
                 malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
                 with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
    },
}

queries = {
    "q1" : "Who developed the mass-energy equivalence formula?",
    "q2" : "Which beer is brewed with a large proportion of wheat?"
}

qrels = {
    "q1" : {"doc1": 1},
    "q2" : {"doc2": 1},
}
```

### Data Fields

Examples from all configurations have the following features:

### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id
  - `title`: a `string` feature, denoting the title of the document.
  - `text`: a `string` feature, denoting the text of the document.

### Queries
- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id
  - `text`: a `string` feature, denoting the text of the query.
### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. ### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| 
[Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
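A minimal loading sketch for this dataset with the `beir` toolkit (this assumes the standard `GenericDataLoader` interface of the `beir` package; the download URL is the `dbpedia-entity` link from the table above):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# download and unzip the preprocessed dbpedia-entity archive listed in the table above
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: docid -> {"title", "text"}, queries: qid -> text, qrels: qid -> {docid: score}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
print(len(corpus), len(queries))
```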
facat/sci-llm-new-512
2023-09-15T06:31:11.000Z
[ "region:us" ]
facat
null
null
null
0
617
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: train_attack path: data/train_attack-* - split: train_old path: data/train_old-* - split: train_new path: data/train_new-* - split: test path: data/test-* - split: test2 path: data/test2-* dataset_info: features: - name: prompt dtype: string - name: context dtype: string - name: chosen dtype: string - name: A dtype: string - name: B dtype: string - name: C dtype: string - name: D dtype: string splits: - name: train num_bytes: 281782031 num_examples: 95835 - name: train_attack num_bytes: 281782031 num_examples: 95835 - name: train_old num_bytes: 169071782 num_examples: 40859 - name: train_new num_bytes: 112703377 num_examples: 54976 - name: test num_bytes: 917099 num_examples: 200 - name: test2 num_bytes: 1111116 num_examples: 200 download_size: 423207291 dataset_size: 847367436 --- # Dataset Card for "sci-llm-new-512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/a77d2949
2023-09-20T09:17:02.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
616
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 168 num_examples: 10 download_size: 1322 dataset_size: 168 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "a77d2949" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spacemanidol/dset-corpus
2023-09-27T19:17:42.000Z
[ "region:us" ]
spacemanidol
null
null
0
616
Entry not found
huggan/pokemon
2022-04-01T11:50:45.000Z
[ "region:us" ]
huggan
null
null
null
13
615
Source: https://www.kaggle.com/datasets/djilax/pkmn-image-dataset
WizardLM/WizardLM_evol_instruct_V2_196k
2023-08-24T03:55:18.000Z
[ "arxiv:2308.09583", "arxiv:2304.12244", "arxiv:2306.08568", "region:us" ]
WizardLM
null
null
null
141
615
## News - 🔥 🔥 🔥 [08/11/2023] We release **WizardMath** Models. - 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**. - 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM. - 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM. | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>| | <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" 
target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | || |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> | </font> **Repository**: https://github.com/nlpxucan/WizardLM **Twitter**: https://twitter.com/WizardLM_AI/status/1669364947606982656 This datasets contains 143K mixture evolved data of Alpaca and ShareGPT. This is the latest optimized version of Evol-Instruct training data of WizardLM model. Due to the data usage license, please **merge** the original [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) with this one to get the **final full-dataset**, which would consist of around 196k rows of data.
cyrilzhang/TinyStories2-ascii-val-1k
2023-09-27T12:44:27.000Z
[ "region:us" ]
cyrilzhang
null
null
null
0
613
--- configs: - config_name: default data_files: - split: validation path: data/validation-* dataset_info: features: - name: text dtype: string splits: - name: validation num_bytes: 793968 num_examples: 1000 download_size: 410730 dataset_size: 793968 --- # Dataset Card for "TinyStories2-ascii-val-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nightingal3/fig-qa
2023-06-10T18:13:33.000Z
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:2204.12632", "region:us" ]
nightingal3
null
null
null
2
609
--- annotations_creators: - expert-generated - crowdsourced language_creators: - crowdsourced language: - en license: - mit multilinguality: - monolingual pretty_name: Fig-QA size_categories: - 10K<n<100K source_datasets: - original task_categories: - multiple-choice task_ids: - multiple-choice-qa --- # Dataset Card for Fig-QA ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Splits](#data-splits) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/nightingal3/Fig-QA - **Paper:** https://arxiv.org/abs/2204.12632 - **Leaderboard:** https://explainaboard.inspiredco.ai/leaderboards?dataset=fig_qa - **Point of Contact:** emmy@cmu.edu ### Dataset Summary This is the dataset for the paper [Testing the Ability of Language Models to Interpret Figurative Language](https://arxiv.org/abs/2204.12632). Fig-QA consists of 10256 examples of human-written creative metaphors that are paired as a Winograd schema. It can be used to evaluate the commonsense reasoning of models. The metaphors themselves can also be used as training data for other tasks, such as metaphor detection or generation. ### Supported Tasks and Leaderboards You can evaluate your models on the test set by submitting to the [leaderboard](https://explainaboard.inspiredco.ai/leaderboards?dataset=fig_qa) on Explainaboard. Click on "New" and select `qa-multiple-choice` for the task field. Select `accuracy` for the metric. You should upload results in the form of a system output file in JSON or JSONL format. ### Languages This is the English version. Multilingual version can be found [here](https://huggingface.co/datasets/cmu-lti/multi-figqa). ### Data Splits Train-{S, M(no suffix), XL}: different training set sizes Dev Test (labels not provided for test set) ## Considerations for Using the Data ### Discussion of Biases These metaphors are human-generated and may contain insults or other explicit content. Authors of the paper manually removed offensive content, but users should keep in mind that some potentially offensive content may remain in the dataset. ## Additional Information ### Licensing Information MIT License ### Citation Information If you found the dataset useful, please cite this paper: @misc{https://doi.org/10.48550/arxiv.2204.12632, doi = {10.48550/ARXIV.2204.12632}, url = {https://arxiv.org/abs/2204.12632}, author = {Liu, Emmy and Cui, Chen and Zheng, Kenneth and Neubig, Graham}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Testing the Ability of Language Models to Interpret Figurative Language}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} }
mrqa
2022-11-18T21:30:01.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|drop", "source_datasets:extended|hotpot_qa", "source_datasets:extended|natural_questions", "source_datasets:extended|race", "source_datasets:extended|search_qa", "source_datasets:extended|squad", "source_datasets:extended|trivia_qa", "language:en", "license:unknown", "arxiv:1910.09753", "arxiv:1606.05250", "arxiv:1611.09830", "arxiv:1705.03551", "arxiv:1704.05179", "arxiv:1809.09600", "arxiv:1903.00161", "arxiv:1804.07927", "arxiv:1704.04683", "arxiv:1706.04115", "region:us" ]
null
The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge. The dataset is a collection of 18 existing QA datasets (carefully selected subsets of them), all converted to the same format (SQuAD format). Among these 18 datasets, six were made available for training, six for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task.
@inproceedings{fisch2019mrqa, title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension}, author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen}, booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP}, year={2019}, }
null
8
607
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|drop - extended|hotpot_qa - extended|natural_questions - extended|race - extended|search_qa - extended|squad - extended|trivia_qa task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: mrqa-2019 pretty_name: MRQA 2019 dataset_info: features: - name: subset dtype: string - name: context dtype: string - name: context_tokens sequence: - name: tokens dtype: string - name: offsets dtype: int32 - name: qid dtype: string - name: question dtype: string - name: question_tokens sequence: - name: tokens dtype: string - name: offsets dtype: int32 - name: detected_answers sequence: - name: text dtype: string - name: char_spans sequence: - name: start dtype: int32 - name: end dtype: int32 - name: token_spans sequence: - name: start dtype: int32 - name: end dtype: int32 - name: answers sequence: string config_name: plain_text splits: - name: train num_bytes: 4090681873 num_examples: 516819 - name: test num_bytes: 57712177 num_examples: 9633 - name: validation num_bytes: 484107026 num_examples: 58221 download_size: 1479518355 dataset_size: 4632501076 --- # Dataset Card for MRQA 2019 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [MRQA 2019 Shared Task](https://mrqa.github.io/2019/shared.html) - **Repository:** [MRQA 2019 Github repository](https://github.com/mrqa/MRQA-Shared-Task-2019) - **Paper:** [MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension](https://arxiv.org/abs/1910.09753) - **Leaderboard:** [Shared task](https://mrqa.github.io/2019/shared.html) - **Point of Contact:** [mrforqa@gmail.com](mrforqa@gmail.com) ### Dataset Summary The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge. The dataset is a collection of 18 existing QA datasets (carefully selected subsets of them) converted to the same format (the SQuAD format). Among these 18 datasets, six were made available for training, six for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task.
### Supported Tasks and Leaderboards From the official repository: *The format of the task is extractive question answering. Given a question and context passage, systems must find the word or phrase in the document that best answers the question. While this format is somewhat restrictive, it allows us to leverage many existing datasets, and its simplicity helps us focus on out-of-domain generalization, instead of other important but orthogonal challenges.* *We have adapted several existing datasets from their original formats and settings to conform to our unified extractive setting. Most notably:* - *We provide only a single, length-limited context.* - *There are no unanswerable or non-span answer questions.* - *All questions have at least one accepted answer that is found exactly in the context.* *A span is judged to be an exact match if it matches the answer string after performing normalization consistent with the SQuAD dataset. Specifically:* - *The text is uncased.* - *All punctuation is stripped.* - *All articles `{a, an, the}` are removed.* - *All consecutive whitespace markers are compressed to just a single normal space `' '`.* Answers are evaluated using exact match and token-level F1 metrics. One can refer to the [mrqa_official_eval.py](https://github.com/mrqa/MRQA-Shared-Task-2019/blob/master/mrqa_official_eval.py) for evaluation. (A minimal sketch of this normalization and the exact-match check is included at the end of this card.) ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances An example looks like this: ``` { 'qid': 'f43c83e38d1e424ea00f8ad3c77ec999', 'subset': 'SQuAD', 'context': 'CBS broadcast Super Bowl 50 in the U.S., and charged an average of $5 million for a 30-second commercial during the game. The Super Bowl 50 halftime show was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively. It was the third-most watched U.S.
broadcast ever.', 'context_tokens': { 'offsets': [0, 4, 14, 20, 25, 28, 31, 35, 39, 41, 45, 53, 56, 64, 67, 68, 70, 78, 82, 84, 94, 105, 112, 116, 120, 122, 126, 132, 137, 140, 149, 154, 158, 168, 171, 175, 183, 188, 194, 203, 208, 216, 222, 233, 241, 245, 251, 255, 257, 261, 271, 275, 281, 286, 292, 296, 302, 307, 314, 323, 328, 330, 342, 344, 347, 351, 355, 360, 361, 366, 374, 379, 389, 393], 'tokens': ['CBS', 'broadcast', 'Super', 'Bowl', '50', 'in', 'the', 'U.S.', ',', 'and', 'charged', 'an', 'average', 'of', '$', '5', 'million', 'for', 'a', '30-second', 'commercial', 'during', 'the', 'game', '.', 'The', 'Super', 'Bowl', '50', 'halftime', 'show', 'was', 'headlined', 'by', 'the', 'British', 'rock', 'group', 'Coldplay', 'with', 'special', 'guest', 'performers', 'Beyoncé', 'and', 'Bruno', 'Mars', ',', 'who', 'headlined', 'the', 'Super', 'Bowl', 'XLVII', 'and', 'Super', 'Bowl', 'XLVIII', 'halftime', 'shows', ',', 'respectively', '.', 'It', 'was', 'the', 'third', '-', 'most', 'watched', 'U.S.', 'broadcast', 'ever', '.'] }, 'question': "Who was the main performer at this year's halftime show?", 'question_tokens': { 'offsets': [0, 4, 8, 12, 17, 27, 30, 35, 39, 42, 51, 55], 'tokens': ['Who', 'was', 'the', 'main', 'performer', 'at', 'this', 'year', "'s", 'halftime', 'show', '?'] }, 'detected_answers': { 'char_spans': [ { 'end': [201], 'start': [194] }, { 'end': [201], 'start': [194] }, { 'end': [201], 'start': [194] } ], 'text': ['Coldplay', 'Coldplay', 'Coldplay'], 'token_spans': [ { 'end': [38], 'start': [38] }, { 'end': [38], 'start': [38] }, { 'end': [38], 'start': [38] } ] }, 'answers': ['Coldplay', 'Coldplay', 'Coldplay'], } ``` ### Data Fields - `subset`: which of the datasets does this example come from? - `context`: This is the raw text of the supporting passage. Three special token types have been inserted: `[TLE]` precedes document titles, `[DOC]` denotes document breaks, and `[PAR]` denotes paragraph breaks. The maximum length of the context is 800 tokens. - `context_tokens`: A tokenized version of the supporting passage, using spaCy. Each token is a tuple of the token string and token character offset. The maximum number of tokens is 800. - `tokens`: list of tokens. - `offsets`: list of offsets. - `qas`: A list of questions for the given context. - `qid`: A unique identifier for the question. The `qid` is unique across all datasets. - `question`: The raw text of the question. - `question_tokens`: A tokenized version of the question. The tokenizer and token format are the same as for the context. - `tokens`: list of tokens. - `offsets`: list of offsets. - `detected_answers`: A list of answer spans for the given question that index into the context. For some datasets these spans have been automatically detected using searching heuristics. The same answer may appear multiple times in the text --- each of these occurrences is recorded. For example, if `42` is the answer, the context `"The answer is 42. 42 is the answer."` has two occurrences marked. - `text`: The raw text of the detected answer. - `char_spans`: Inclusive (start, end) character spans (indexing into the raw context). - `start`: start (single element) - `end`: end (single element) - `token_spans`: Inclusive (start, end) token spans (indexing into the tokenized context).
- `start`: start (single element) - `end`: end (single element) ### Data Splits **Training data** | Dataset | Number of Examples | | :-----: | :------: | | [SQuAD](https://arxiv.org/abs/1606.05250) | 86,588 | | [NewsQA](https://arxiv.org/abs/1611.09830) | 74,160 | | [TriviaQA](https://arxiv.org/abs/1705.03551)| 61,688 | | [SearchQA](https://arxiv.org/abs/1704.05179)| 117,384 | | [HotpotQA](https://arxiv.org/abs/1809.09600)| 72,928 | | [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 104,071 | **Development data** This in-domain data may be used for helping develop models. | Dataset | Examples | | :-----: | :------: | | [SQuAD](https://arxiv.org/abs/1606.05250) | 10,507 | | [NewsQA](https://arxiv.org/abs/1611.09830) | 4,212 | | [TriviaQA](https://arxiv.org/abs/1705.03551)| 7,785| | [SearchQA](https://arxiv.org/abs/1704.05179)| 16,980 | | [HotpotQA](https://arxiv.org/abs/1809.09600)| 5,904 | | [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 12,836 | **Test data** The final testing data only contain out-of-domain data. | Dataset | Examples | | :-----: | :------: | | [BioASQ](http://bioasq.org/) | 1,504 | | [DROP](https://arxiv.org/abs/1903.00161) | 1,503 | | [DuoRC](https://arxiv.org/abs/1804.07927)| 1,501 | | [RACE](https://arxiv.org/abs/1704.04683) | 674 | | [RelationExtraction](https://arxiv.org/abs/1706.04115) | 2,948| | [TextbookQA](http://ai2-website.s3.amazonaws.com/publications/CVPR17_TQA.pdf)| 1,503 | From the official repository: ***Note:** As previously mentioned, the out-of-domain dataset have been modified from their original settings to fit the unified MRQA Shared Task paradigm. At a high level, the following two major modifications have been made:* *1. All QA-context pairs are extractive. That is, the answer is selected from the context and not via, e.g., multiple-choice.* *2. All contexts are capped at a maximum of `800` tokens. As a result, for longer contexts like Wikipedia articles, we only consider examples where the answer appears in the first `800` tokens.* *As a result, some splits are harder than the original datasets (e.g., removal of multiple-choice in RACE), while some are easier (e.g., restricted context length in NaturalQuestions --- we use the short answer selection). Thus one should expect different performance ranges if comparing to previous work on these datasets.* ## Dataset Creation ### Curation Rationale From the official repository: *Both train and test datasets have the same format described above, but may differ in some of the following ways:* - *Passage distribution: Test examples may involve passages from different sources (e.g., science, news, novels, medical abstracts, etc) with pronounced syntactic and lexical differences.* - *Question distribution: Test examples may emphasize different styles of questions (e.g., entity-centric, relational, other tasks reformulated as QA, etc) which may come from different sources (e.g., crowdworkers, domain experts, exam writers, etc.)* - *Joint distribution: Test examples may vary according to the relationship of the question to the passage (e.g., collected independent vs. dependent of evidence, multi-hop, etc)* ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` @inproceedings{fisch2019mrqa, title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension}, author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen}, booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP}, year={2019}, } ``` ### Contributions Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
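The SQuAD-style answer normalization and exact-match check described in the Supported Tasks section above (lowercasing, punctuation stripping, article removal, whitespace collapsing) can be sketched as follows. This is a minimal re-implementation for illustration, not the official `mrqa_official_eval.py` script:

```python
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation, remove articles, and collapse whitespace."""
    s = s.lower()                                                  # the text is uncased
    s = "".join(ch for ch in s if ch not in string.punctuation)   # all punctuation is stripped
    s = re.sub(r"\b(a|an|the)\b", " ", s)                          # articles {a, an, the} are removed
    return " ".join(s.split())                                     # whitespace compressed to single spaces


def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """A predicted span is an exact match if it equals any gold answer after normalization."""
    return normalize_answer(prediction) in {normalize_answer(g) for g in gold_answers}


print(exact_match("The Coldplay!", ["Coldplay"]))  # True
```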
augtoma/medqa_usmle
2023-08-11T20:50:07.000Z
[ "region:us" ]
augtoma
null
null
null
0
607
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: options struct: - name: A dtype: string - name: B dtype: string - name: C dtype: string - name: D dtype: string - name: meta_info dtype: string - name: answer_idx dtype: string - name: metamap_phrases sequence: string splits: - name: train num_bytes: 15175834 num_examples: 10178 - name: test num_bytes: 1946030 num_examples: 1273 download_size: 8869925 dataset_size: 17121864 --- # Dataset Card for "medqa_usmle" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
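A minimal sketch of loading the dataset and inspecting one question with its answer options, assuming the feature layout listed in the config above (`question`, `options` as a struct with keys A-D, `answer`, and `answer_idx`):

```python
from datasets import load_dataset

ds = load_dataset("augtoma/medqa_usmle", split="train")
example = ds[0]

print(example["question"])
for letter, choice in example["options"].items():  # options is a struct keyed A-D
    print(f"{letter}. {choice}")
print("Correct answer:", example["answer_idx"], "-", example["answer"])
```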
brwac
2022-11-03T16:16:00.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:pt", "license:unknown", "region:us" ]
null
The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and you agree not to use it for any commercial applications. The data must be downloaded manually at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC
@inproceedings{wagner2018brwac, title={The brwac corpus: A new open resource for brazilian portuguese}, author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline}, booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} }
null
7
604
--- annotations_creators: - no-annotation language_creators: - found language: - pt license: - unknown multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: brwac pretty_name: BrWaC dataset_info: features: - name: doc_id dtype: string - name: title dtype: string - name: uri dtype: string - name: text sequence: - name: paragraphs sequence: string splits: - name: train num_bytes: 18828421452 num_examples: 3530796 download_size: 0 dataset_size: 18828421452 --- # Dataset Card for BrWaC ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) - **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) - **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/) - **Point of Contact:** [Jorge A. Wagner Filho](mailto:jawfilho@inf.ufrgs.br) ### Dataset Summary The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and you agree not to use it for any commercial applications. The data must be downloaded manually at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Portuguese ## Dataset Structure ### Data Instances An example from the BrWaC dataset looks as follows: ``` { "doc_id": "netg-1afc73", "text": { "paragraphs": [ [ "Conteúdo recente" ], [ "ESPUMA MARROM CHAMADA \"NINGUÉM MERECE\"" ], [ "31 de Agosto de 2015, 7:07 , por paulo soavinski - | No one following this article yet."
], [ "Visualizado 202 vezes" ], [ "JORNAL ELETRÔNICO DA ILHA DO MEL" ], [ "Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.", "Na faixa de areia ela aparece disseminada e não chama muito a atenção.", "No Buraco do Aipo, com muitas pedras, ela aparece concentrada.", "É fácil saber que esta espuma estranha está lá, quando venta.", "Pequenos algodões de espuma começam a flutuar no espaço, pertinho da Praia do Saquinho.", "Quem pode ajudar na coleta deste material, envio a laboratório renomado e pagamento de análises, favor entrar em contato com o site." ] ] }, "title": "ESPUMA MARROM CHAMADA ‟NINGUÉM MERECE‟ - paulo soavinski", "uri": "http://blogoosfero.cc/ilhadomel/pousadasilhadomel.com.br/espuma-marrom-chamada-ninguem-merece" } ``` ### Data Fields - `doc_id`: The document ID - `title`: The document title - `uri`: URI where the document was extracted from - `text`: A list of document paragraphs (with a list of sentences in it as a list of strings) ### Data Splits The data is only split into train set with size of 3530796 samples. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{wagner2018brwac, title={The brwac corpus: A new open resource for brazilian portuguese}, author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline}, booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` ### Contributions Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
emo
2023-04-05T10:05:14.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
null
In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others.
@inproceedings{chatterjee-etal-2019-semeval, title={SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text}, author={Ankush Chatterjee and Kedhar Nath Narahari and Meghana Joshi and Puneet Agrawal}, booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation}, year={2019}, address={Minneapolis, Minnesota, USA}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/S19-2005}, doi={10.18653/v1/S19-2005}, pages={39--48}, abstract={In this paper, we present the SemEval-2019 Task 3 - EmoContext: Contextual Emotion Detection in Text. Lack of facial expressions and voice modulations make detecting emotions in text a challenging problem. For instance, as humans, on reading ''Why don't you ever text me!'' we can either interpret it as a sad or angry emotion and the same ambiguity exists for machines. However, the context of dialogue can prove helpful in detection of the emotion. In this task, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. To facilitate the participation in this task, textual dialogues from user interaction with a conversational agent were taken and annotated for emotion classes after several data processing steps. A training data set of 30160 dialogues, and two evaluation data sets, Test1 and Test2, containing 2755 and 5509 dialogues respectively were released to the participants. A total of 311 teams made submissions to this task. The final leader-board was evaluated on Test2 data set, and the highest ranked submission achieved 79.59 micro-averaged F1 score. Our analysis of systems submitted to the task indicate that Bi-directional LSTM was the most common choice of neural architecture used, and most of the systems had the best performance for the Sad emotion class, and the worst for the Happy emotion class} }
null
3
603
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: emocontext pretty_name: EmoContext dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': others '1': happy '2': sad '3': angry config_name: emo2019 splits: - name: train num_bytes: 2433205 num_examples: 30160 - name: test num_bytes: 421555 num_examples: 5509 download_size: 3362556 dataset_size: 2854760 --- # Dataset Card for "emo" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.aclweb.org/anthology/S19-2005/](https://www.aclweb.org/anthology/S19-2005/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.37 MB - **Size of the generated dataset:** 2.85 MB - **Total amount of disk used:** 6.22 MB ### Dataset Summary In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### emo2019 - **Size of downloaded dataset files:** 3.37 MB - **Size of the generated dataset:** 2.85 MB - **Total amount of disk used:** 6.22 MB An example of 'train' looks as follows. ``` { "label": 0, "text": "don't worry i'm girl hmm how do i know if you are what's ur name" } ``` ### Data Fields The data fields are the same among all splits. #### emo2019 - `text`: a `string` feature. - `label`: a classification label, with possible values including `others` (0), `happy` (1), `sad` (2), `angry` (3). 
### Data Splits | name |train|test| |-------|----:|---:| |emo2019|30160|5509| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{chatterjee-etal-2019-semeval, title={SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text}, author={Ankush Chatterjee and Kedhar Nath Narahari and Meghana Joshi and Puneet Agrawal}, booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation}, year={2019}, address={Minneapolis, Minnesota, USA}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/S19-2005}, doi={10.18653/v1/S19-2005}, pages={39--48}, abstract={In this paper, we present the SemEval-2019 Task 3 - EmoContext: Contextual Emotion Detection in Text. Lack of facial expressions and voice modulations make detecting emotions in text a challenging problem. For instance, as humans, on reading ''Why don't you ever text me!'' we can either interpret it as a sad or angry emotion and the same ambiguity exists for machines. However, the context of dialogue can prove helpful in detection of the emotion. In this task, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. To facilitate the participation in this task, textual dialogues from user interaction with a conversational agent were taken and annotated for emotion classes after several data processing steps. 
A training data set of 30160 dialogues, and two evaluation data sets, Test1 and Test2, containing 2755 and 5509 dialogues respectively were released to the participants. A total of 311 teams made submissions to this task. The final leader-board was evaluated on Test2 data set, and the highest ranked submission achieved 79.59 micro-averaged F1 score. Our analysis of systems submitted to the task indicate that Bi-directional LSTM was the most common choice of neural architecture used, and most of the systems had the best performance for the Sad emotion class, and the worst for the Happy emotion class} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lordtt13](https://github.com/lordtt13), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
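A minimal sketch of loading the data and mapping the integer labels back to their class names (`others`, `happy`, `sad`, `angry`) through the `ClassLabel` feature, assuming a `datasets` version that still resolves this loading script:

```python
from datasets import load_dataset

ds = load_dataset("emo", split="train")
label_feature = ds.features["label"]  # ClassLabel with names others/happy/sad/angry

example = ds[0]
print(example["text"])
print("label:", example["label"], "->", label_feature.int2str(example["label"]))
```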
lewtun/github-issues
2021-10-04T15:49:55.000Z
[ "arxiv:2005.00614", "region:us" ]
lewtun
null
null
null
4
602
# Dataset Card for GitHub Issues ## Dataset Description - **Point of Contact:** [Lewis Tunstall](lewis@huggingface.co) ### Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. ### Supported Tasks and Leaderboards For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`). - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name). ### Languages Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,... When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available. ## Dataset Structure ### Data Instances Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples. ``` { 'example_field': ..., ... } ``` Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit. ### Data Fields List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points. - `example_field`: description of `example_field` Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging); you will then only need to refine the generated descriptions. ### Data Splits Describe and name the splits in the dataset if there are more than one. Describe any criteria for splitting the data, if used.
If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example: | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Input Sentences | | | | | Average Sentence Length | | | | ## Dataset Creation ### Curation Rationale What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together? ### Source Data This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...) #### Initial Data Collection and Normalization Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process. If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name). If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used. #### Who are the source language producers? State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data. If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. Describe other people represented or mentioned in the data. Where possible, link to references for the information. ### Annotations If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs. #### Annotation process If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes. #### Who are the annotators? If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated. Describe the people or systems who originally created the annotations and their selection criteria if applicable. If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on).
If compensation was provided, include that information here. ### Personal and Sensitive Information State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data). State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). If efforts were made to anonymize the data, describe the anonymization process. ## Considerations for Using the Data ### Social Impact of Dataset Please discuss some of the ways you believe the use of this dataset will impact society. The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations. Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here. ### Discussion of Biases Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact. For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic. If analyses have been run quantifying these biases, please add brief summaries and links to the studies here. ### Other Known Limitations If studies of the dataset have outlined other limitations, such as annotation artifacts, please outline and cite them here. ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information Provide the license and link to the license webpage if available. ### Citation Information Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example: ``` @article{article_id, author = {Author List}, title = {Dataset Paper Title}, journal = {Publication Venue}, year = {2525} } ``` If the dataset has a [DOI](https://www.doi.org/), please provide it here. ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
squad_kor_v2
2023-02-07T14:40:49.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|squad_kor_v1", "source_datasets:original", "language:ko", "license:cc-by-nd-4.0", "region:us" ]
null
KorQuAD 2.0 is a Korean question answering dataset consisting of a total of 100,000+ pairs. There are three major differences from KorQuAD 1.0, which is the standard Korean Q&A dataset. The first is that a given document is a whole Wikipedia page, not just one or two paragraphs. Second, because the document also contains tables and lists, it is necessary to understand the document structured with HTML tags. Finally, the answer can be a long text covering not only word or phrase units, but paragraphs, tables, and lists. As a baseline model, BERT Multilingual, released by Google as open source, is used. It achieves a 46.0% F1 score, which is very low compared to the human F1 score of 85.7%. This indicates that the data poses a challenging task. Additionally, we increased the performance through no-answer data augmentation. Through the distribution of this data, we intend to extend the limits of MRC, previously restricted to plain text, to real-world tasks of various lengths and formats.
@article{NODE09353166, author={Youngmin Kim,Seungyoung Lim;Hyunjeong Lee;Soyoon Park;Myungji Kim}, title={{KorQuAD 2.0: Korean QA Dataset for Web Document Machine Comprehension}}, booktitle={{Journal of KIISE 제47권 제6호}}, journal={{Journal of KIISE}}, volume={{47}}, issue={{6}}, publisher={The Korean Institute of Information Scientists and Engineers}, year={2020}, ISSN={{2383-630X}}, pages={577-586}, url={http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE09353166}}
null
2
601
--- annotations_creators: - crowdsourced language_creators: - found language: - ko license: - cc-by-nd-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|squad_kor_v1 - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: null pretty_name: KorQuAD v2.1 dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answer struct: - name: text dtype: string - name: answer_start dtype: int32 - name: html_answer_start dtype: int32 - name: url dtype: string - name: raw_html dtype: string config_name: squad_kor_v2 splits: - name: train num_bytes: 17983434492 num_examples: 83486 - name: validation num_bytes: 2230543100 num_examples: 10165 download_size: 1373763305 dataset_size: 20213977592 --- # Dataset Card for KorQuAD v2.1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - [**Homepage**](https://korquad.github.io/) - [**Repository**](https://github.com/korquad/korquad.github.io/tree/master/dataset) - [**Paper**](https://korquad.github.io/dataset/KorQuAD_2.0/KorQuAD_2.0_paper.pdf) ### Dataset Summary KorQuAD 2.0 is a Korean question answering dataset consisting of a total of 100,000+ pairs. There are three major differences from KorQuAD 1.0, which is the standard Korean Q&A dataset. The first is that a given document is a whole Wikipedia page, not just one or two paragraphs. Second, because the document also contains tables and lists, it is necessary to understand the document structured with HTML tags. Finally, the answer can be a long text covering not only word or phrase units, but paragraphs, tables, and lists. ### Supported Tasks and Leaderboards `question-answering` ### Languages Korean ## Dataset Structure Follows the standard SQuAD format.
There is only 1 answer per question. ### Data Instances An example from the dataset looks as follows: ```py {'answer': {'answer_start': 3873, 'html_answer_start': 16093, 'text': '20,890 표'}, 'context': '<!DOCTYPE html>\n<html>\n<head>\n<meta>\n<title>심규언 - 위키백과, 우리 모두의 백과사전</title>\n\n\n<link>\n.....[omitted]', 'id': '36615', 'question': '심규언은 17대 지방 선거에서 몇 표를 득표하였는가?', 'raw_html': '<!DOCTYPE html>\n<html c ...[omitted]', 'title': '심규언', 'url': 'https://ko.wikipedia.org/wiki/심규언'} ``` ### Data Fields ```py {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)} ``` ### Data Splits - Train : 83486 - Validation: 10165 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Wikipedia #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [CC BY-ND 2.0 KR](https://creativecommons.org/licenses/by-nd/2.0/kr/deed.en) ### Citation Information ``` @article{NODE09353166, author={Youngmin Kim,Seungyoung Lim;Hyunjeong Lee;Soyoon Park;Myungji Kim}, title={{KorQuAD 2.0: Korean QA Dataset for Web Document Machine Comprehension}}, booktitle={{Journal of KIISE 제47권 제6호}}, journal={{Journal of KIISE}}, volume={{47}}, issue={{6}}, publisher={The Korean Institute of Information Scientists and Engineers}, year={2020}, ISSN={{2383-630X}}, pages={577-586}, url={http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE09353166}} ``` ### Contributions Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
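Since each answer carries both a character offset into the context and one into the raw HTML, a quick sanity check is to verify that the answer text can be recovered from `context` at `answer_start`; a minimal sketch, assuming `answer_start` indexes into the `context` field (and `html_answer_start` into `raw_html`):

```python
from datasets import load_dataset

ds = load_dataset("squad_kor_v2", split="validation")

example = ds[0]
answer = example["answer"]
start = answer["answer_start"]

# The span at answer_start should reproduce the answer text.
span = example["context"][start:start + len(answer["text"])]
print(answer["text"])
print(span == answer["text"])
```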
mteb/emotion
2022-09-27T19:14:18.000Z
[ "language:en", "region:us" ]
mteb
null
null
null
5
601
--- language: - en --- **Attention: There appears to be an overlap between train and test. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy).**
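The reported overlap can be checked directly by intersecting the train and test texts; a minimal sketch, assuming the dataset exposes a `text` column:

```python
from datasets import load_dataset

ds = load_dataset("mteb/emotion")

train_texts = set(ds["train"]["text"])
test_texts = set(ds["test"]["text"])
overlap = train_texts & test_texts

print(f"{len(overlap)} of {len(test_texts)} unique test texts also appear in train")
```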
flaviagiammarino/vqa-rad
2023-06-03T18:38:48.000Z
[ "task_categories:visual-question-answering", "size_categories:1K<n<10K", "language:en", "license:cc0-1.0", "medical", "region:us" ]
flaviagiammarino
null
null
null
5
600
--- license: cc0-1.0 task_categories: - visual-question-answering language: - en paperswithcode_id: vqa-rad tags: - medical pretty_name: VQA-RAD size_categories: - 1K<n<10K dataset_info: features: - name: image dtype: image - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 95883938.139 num_examples: 1793 - name: test num_bytes: 23818877.0 num_examples: 451 download_size: 34496718 dataset_size: 119702815.139 --- # Dataset Card for VQA-RAD ## Dataset Description VQA-RAD is a dataset of question-answer pairs on radiology images. The dataset is intended to be used for training and testing Medical Visual Question Answering (VQA) systems. The dataset includes both open-ended questions and binary "yes/no" questions. The dataset is built from [MedPix](https://medpix.nlm.nih.gov/), which is a free open-access online database of medical images. The question-answer pairs were manually generated by a team of clinicians. **Homepage:** [Open Science Framework Homepage](https://osf.io/89kps/)<br> **Paper:** [A dataset of clinically generated visual questions and answers about radiology images](https://www.nature.com/articles/sdata2018251)<br> **Leaderboard:** [Papers with Code Leaderboard](https://paperswithcode.com/sota/medical-visual-question-answering-on-vqa-rad) ### Dataset Summary The dataset was downloaded from the [Open Science Framework Homepage](https://osf.io/89kps/) on June 3, 2023. The dataset contains 2,248 question-answer pairs and 315 images. Out of the 315 images, 314 images are referenced by a question-answer pair, while 1 image is not used. The training set contains 3 duplicate image-question-answer triplets. The training set also has 1 image-question-answer triplet in common with the test set. After dropping these 4 image-question-answer triplets from the training set, the dataset contains 2,244 question-answer pairs on 314 images. #### Supported Tasks and Leaderboards This dataset has an active leaderboard on [Papers with Code](https://paperswithcode.com/sota/medical-visual-question-answering-on-vqa-rad) where models are ranked based on three metrics: "Close-ended Accuracy", "Open-ended accuracy" and "Overall accuracy". "Close-ended Accuracy" is the accuracy of a model's generated answers for the subset of binary "yes/no" questions. "Open-ended accuracy" is the accuracy of a model's generated answers for the subset of open-ended questions. "Overall accuracy" is the accuracy of a model's generated answers across all questions. #### Languages The question-answer pairs are in English. ## Dataset Structure ### Data Instances Each instance consists of an image-question-answer triplet. ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=566x555>, 'question': 'are regions of the brain infarcted?', 'answer': 'yes' } ``` ### Data Fields - `'image'`: the image referenced by the question-answer pair. - `'question'`: the question about the image. - `'answer'`: the expected answer. ### Data Splits The dataset is split into training and test. The split is provided directly by the authors. | | Training Set | Test Set | |-------------------------|:------------:|:---------:| | QAs |1,793 |451 | | Images |313 |203 | ## Additional Information ### Licensing Information The authors have released the dataset under the CC0 1.0 Universal License. 
### Citation Information ``` @article{lau2018dataset, title={A dataset of clinically generated visual questions and answers about radiology images}, author={Lau, Jason J and Gayen, Soumya and Ben Abacha, Asma and Demner-Fushman, Dina}, journal={Scientific data}, volume={5}, number={1}, pages={1--10}, year={2018}, publisher={Nature Publishing Group} } ```
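A minimal sketch of how the three leaderboard metrics can be computed from gold/predicted answer pairs, treating answers of "yes"/"no" as the closed-ended subset. This mirrors the metric descriptions above and is not an official evaluation script:

```python
def vqa_rad_accuracies(golds, preds):
    """Compute closed-ended, open-ended and overall accuracy for VQA-RAD-style answers."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    closed, open_ended, overall = [], [], []
    for gold, pred in zip(golds, preds):
        correct = gold.strip().lower() == pred.strip().lower()
        overall.append(correct)
        if gold.strip().lower() in {"yes", "no"}:   # binary "yes/no" questions
            closed.append(correct)
        else:                                       # open-ended questions
            open_ended.append(correct)
    return {"closed": mean(closed), "open": mean(open_ended), "overall": mean(overall)}


print(vqa_rad_accuracies(["yes", "left lung"], ["yes", "right lung"]))
```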
SetFit/CR
2022-06-21T09:04:33.000Z
[ "region:us" ]
SetFit
null
null
null
0
598
# Customer Reviews This dataset is a port of the official [`CR` dataset](https://github.com/hiyouga/Dual-Contrastive-Learning/tree/main/data) from [this paper](https://www.cs.uic.edu/~liub/FBS/opinion-mining-final-WSDM.pdf). There is no validation split.
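Since the dataset ships without a validation split, one can be carved out of the train split with `train_test_split`; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("SetFit/CR")

# Hold out 10% of train as a validation set (the dataset has no official one).
split = ds["train"].train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = split["train"], split["test"]

print(len(train_ds), len(val_ds), len(ds["test"]))
```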
result-kand2-sdxl-wuerst-karlo/df2d5286
2023-09-20T21:15:39.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
598
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 215 num_examples: 10 download_size: 1374 dataset_size: 215 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "df2d5286" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/9cc99eaf
2023-09-20T21:15:42.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
597
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 215 num_examples: 10 download_size: 1374 dataset_size: 215 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "9cc99eaf" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LeoCordoba/CC-NEWS-ES
2023-02-23T21:53:55.000Z
[ "task_categories:summarization", "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:cc-news", "language:es", "license:mit", "conditional-text-generation", "region:us" ]
LeoCordoba
null
null
6
596
--- annotations_creators: - no-annotation language_creators: - found language: - es license: - mit multilinguality: - monolingual size_categories: - n<1K - 1K<n<10K - 10K<n<100K - 100K<n<1M - 1M<n<10M source_datasets: - cc-news task_categories: - summarization - text-generation task_ids: [] tags: - conditional-text-generation --- # Dataset Card for CC-NEWS-ES ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [CC-NEWS-ES dataset repository](https://huggingface.co/datasets/LeoCordoba/CC-NEWS-ES) - **Paper:** - **Leaderboard:** - **Point of Contact:** [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/) ### Dataset Summary CC-NEWS-ES is a Spanish-language dataset of news. The corpus was generated by extracting the Spanish articles from CC-NEWS (the news index of Common Crawl) of 2019. To do so, a FastText model was used for language prediction. It contains a total of 7,473,286 texts and 1,812,009,283 words distributed as follows: |domain | texts | words | |:----|-----------------:|-----------------:| | ar | 532703 | 1.45127e+08 | | bo | 29557 | 7.28996e+06 | | br | 107 | 14207 | | cl | 116661 | 3.34633e+07 | | co | 78662 | 1.92649e+07 | | com | 3650950 | 8.44094e+08 | | cr | 16542 | 3.82075e+06 | | es |1838790 | 4.82943e+08 | | gt | 4833 | 838121 | | hn | 36559 | 5.49933e+06 | | mx | 724908 | 1.62198e+08 | | ni | 40643 | 1.08501e+07 | | pa | 18447 | 4.34724e+06 | | pe | 230962 | 3.52123e+07 | | pr | 7756 | 1.6633e+06 | | py | 30651 | 2.08077e+07 | | sv | 454 | 353145 | | uy | 80948 | 2.72562e+07 | | ve | 33148 | 6.96578e+06 | ### Supported Tasks and Leaderboards TODO - ### Languages The text is in Spanish. The BCP-47 code for Spanish is es. ## Dataset Structure ### Data Instances Each data instance contains the following features: ... - country: top level domain, usually refers to a country (except in the case of .com). - text: body of the news - id: internal id An example from CC-NEWS-ES looks like the following: ``` {'country': 'py', 'text': '“La que asumió es una mujer que está en línea de sucesión. La policía, ni los militares están en el Palacio, lo que ella dijo fue que no se podía seguir reprimiendo al pueblo", manifestó este jueves el senador colorado, Enrique Riera, sobre la asunción presidencial en Bolivia de la senadora opositora, Jeanine Áñez,Riera agregó que Evo Morales el que "escapó y abandonó" a su pueblo al ir como asilado a México.
En ese sentido, dijo que irónicamente, el expresidente boliviano no eligió como destino a Venezuela, Nicaragua ni a Cuba.Sostuvo que nos de debe utilizar a las instituciones democráticas y republicanas para llegar al poder, cambiando Constituciones y prorrogando mandatos una y otra vez. “El amigo Morales no respetó absolutamente nada”, subrayó.Por otra parte, el senador colorado mencionó que los fiscales y jueces bolivianos deberían tener el "coraje" de investigar el origen de la riqueza de Morales.Habló también sobre la situación en Venezuela y mencionó que Nicolás Maduro no cae, porque "toda la FFAA está contaminada de narcotráfico". El hombre cuenta con orden de prisión en su país por los ilícitos de Tráfico de Drogas y Asociación Criminal, según el Consejo Nacional de Justicia del Brasil.La agente fiscal Liliana Denice Duarte, titular de la Unidad Fiscal Nº 1 de Presidente Franco, requirió la expulsión del extranjero y la jueza Carina Frutos Recalde, mediante Auto Interlocutorio (A.I.) N° 2.153, dio curso favorable al pedido del Ministerio Público. Esto considerando la alta expectativa de pena que tiene el supuesto delincuente en su país.La detención ...', 'id': 7328086} Note: the text is shortened for simplicity. ``` ### Data Fields - ... - ... ### Data Splits ... ## Dataset Creation ### Curation Rationale [N/A] ### Source Data #### Initial Data Collection and Normalization TODO #### Who are the source language producers? Common Crawl: https://commoncrawl.org/ ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ### Social Impact of Dataset ... ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators This dataset is maintained by [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/) and was built with the help of [María Gaska](https://www.linkedin.com/in/mfgaska/). ### Licensing Information [N/A] ### Citation Information TODO ### Contributions [N/A]
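As a quick orientation for the `country`, `text`, and `id` fields described above, here is a minimal sketch using the standard `datasets` library; the repository id comes from this card, while the split name and the `ar` filter are only illustrative assumptions.

```python
from datasets import load_dataset

# Assumption: the corpus exposes a "train" split and loads directly by its Hub id.
ds = load_dataset("LeoCordoba/CC-NEWS-ES", split="train")

# `country` holds the top-level domain of the source site; keep only ".ar" articles.
ar_news = ds.filter(lambda example: example["country"] == "ar")

print(ar_news.num_rows)
print(ar_news[0]["text"][:200])  # first 200 characters of one article body
```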
Multimodal-Fatima/FGVC_Aircraft_train
2023-05-04T05:30:31.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
null
0
596
--- dataset_info: features: - name: image dtype: image - name: family dtype: class_label: names: '0': A300 '1': A310 '2': A320 '3': A330 '4': A340 '5': A380 '6': ATR-42 '7': ATR-72 '8': An-12 '9': BAE 146 '10': BAE-125 '11': Beechcraft 1900 '12': Boeing 707 '13': Boeing 717 '14': Boeing 727 '15': Boeing 737 '16': Boeing 747 '17': Boeing 757 '18': Boeing 767 '19': Boeing 777 '20': C-130 '21': C-47 '22': CRJ-200 '23': CRJ-700 '24': Cessna 172 '25': Cessna 208 '26': Cessna Citation '27': Challenger 600 '28': DC-10 '29': DC-3 '30': DC-6 '31': DC-8 '32': DC-9 '33': DH-82 '34': DHC-1 '35': DHC-6 '36': DR-400 '37': Dash 8 '38': Dornier 328 '39': EMB-120 '40': Embraer E-Jet '41': Embraer ERJ 145 '42': Embraer Legacy 600 '43': Eurofighter Typhoon '44': F-16 '45': F/A-18 '46': Falcon 2000 '47': Falcon 900 '48': Fokker 100 '49': Fokker 50 '50': Fokker 70 '51': Global Express '52': Gulfstream '53': Hawk T1 '54': Il-76 '55': King Air '56': L-1011 '57': MD-11 '58': MD-80 '59': MD-90 '60': Metroliner '61': PA-28 '62': SR-20 '63': Saab 2000 '64': Saab 340 '65': Spitfire '66': Tornado '67': Tu-134 '68': Tu-154 '69': Yak-42 - name: manufacturer dtype: class_label: names: '0': ATR '1': Airbus '2': Antonov '3': Beechcraft '4': Boeing '5': Bombardier Aerospace '6': British Aerospace '7': Canadair '8': Cessna '9': Cirrus Aircraft '10': Dassault Aviation '11': Dornier '12': Douglas Aircraft Company '13': Embraer '14': Eurofighter '15': Fairchild '16': Fokker '17': Gulfstream Aerospace '18': Ilyushin '19': Lockheed Corporation '20': Lockheed Martin '21': McDonnell Douglas '22': Panavia '23': Piper '24': Robin '25': Saab '26': Supermarine '27': Tupolev '28': Yakovlev '29': de Havilland - name: label dtype: class_label: names: '0': 707-320 '1': 727-200 '2': 737-200 '3': 737-300 '4': 737-400 '5': 737-500 '6': 737-600 '7': 737-700 '8': 737-800 '9': 737-900 '10': 747-100 '11': 747-200 '12': 747-300 '13': 747-400 '14': 757-200 '15': 757-300 '16': 767-200 '17': 767-300 '18': 767-400 '19': 777-200 '20': 777-300 '21': A300B4 '22': A310 '23': A318 '24': A319 '25': A320 '26': A321 '27': A330-200 '28': A330-300 '29': A340-200 '30': A340-300 '31': A340-500 '32': A340-600 '33': A380 '34': ATR-42 '35': ATR-72 '36': An-12 '37': BAE 146-200 '38': BAE 146-300 '39': BAE-125 '40': Beechcraft 1900 '41': Boeing 717 '42': C-130 '43': C-47 '44': CRJ-200 '45': CRJ-700 '46': CRJ-900 '47': Cessna 172 '48': Cessna 208 '49': Cessna 525 '50': Cessna 560 '51': Challenger 600 '52': DC-10 '53': DC-3 '54': DC-6 '55': DC-8 '56': DC-9-30 '57': DH-82 '58': DHC-1 '59': DHC-6 '60': DHC-8-100 '61': DHC-8-300 '62': DR-400 '63': Dornier 328 '64': E-170 '65': E-190 '66': E-195 '67': EMB-120 '68': ERJ 135 '69': ERJ 145 '70': Embraer Legacy 600 '71': Eurofighter Typhoon '72': F-16A/B '73': F/A-18 '74': Falcon 2000 '75': Falcon 900 '76': Fokker 100 '77': Fokker 50 '78': Fokker 70 '79': Global Express '80': Gulfstream IV '81': Gulfstream V '82': Hawk T1 '83': Il-76 '84': L-1011 '85': MD-11 '86': MD-80 '87': MD-87 '88': MD-90 '89': Metroliner '90': Model B200 '91': PA-28 '92': SR-20 '93': Saab 2000 '94': Saab 340 '95': Spitfire '96': Tornado '97': Tu-134 '98': Tu-154 '99': Yak-42 - name: id dtype: int64 - name: clip_tags_ViT_L_14 sequence: string - name: LLM_Description_gpt3_downstream_tasks_ViT_L_14 sequence: string - name: blip_caption dtype: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14 sequence: string - name: Attributes_ViT_L_14_text_davinci_003_full sequence: string - name: Attributes_ViT_L_14_text_davinci_003_fgvc sequence: 
string - name: clip_tags_ViT_L_14_with_openai_classes sequence: string - name: clip_tags_ViT_L_14_wo_openai_classes sequence: string - name: clip_tags_ViT_L_14_simple_specific dtype: string - name: clip_tags_ViT_L_14_ensemble_specific dtype: string - name: clip_tags_ViT_B_16_simple_specific dtype: string - name: clip_tags_ViT_B_16_ensemble_specific dtype: string - name: clip_tags_ViT_B_32_simple_specific dtype: string - name: clip_tags_ViT_B_32_ensemble_specific dtype: string - name: Attributes_ViT_B_16_descriptors_text_davinci_003_full sequence: string - name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full sequence: string - name: clip_tags_LAION_ViT_H_14_2B_simple_specific dtype: string - name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific dtype: string splits: - name: train num_bytes: 931613762.0 num_examples: 3334 download_size: 925638163 dataset_size: 931613762.0 --- # Dataset Card for "FGVC_Aircraft_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
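Since `family`, `manufacturer`, and `label` are all ClassLabel features in the schema above, the stored values are integers; below is a minimal sketch of decoding them back to the names listed in this card, assuming the repository loads directly with the standard `datasets` library.

```python
from datasets import load_dataset

# Assumption: the repository resolves by its Hub id and exposes the "train" split shown above.
ds = load_dataset("Multimodal-Fatima/FGVC_Aircraft_train", split="train")

example = ds[0]
features = ds.features
# int2str maps the stored integer back to the class name (e.g. a manufacturer such as "Boeing").
print(features["manufacturer"].int2str(example["manufacturer"]))
print(features["family"].int2str(example["family"]))
print(features["label"].int2str(example["label"]))
```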
nampdn-ai/tiny-codes
2023-09-30T04:14:36.000Z
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:en", "license:mit", "arxiv:2306.11644", "arxiv:2305.07759", "doi:10.57967/hf/0937", "region:us" ]
nampdn-ai
null
null
null
125
596
--- license: mit task_categories: - text-generation language: - en pretty_name: Tiny Codes size_categories: - 1M<n<10M --- # Reasoning with Language and Code This synthetic dataset is a collection of **1.6 million short and clear code snippets** that can help LLM models learn how to reason with both natural and programming languages. The dataset covers a wide range of programming languages, such as Python, TypeScript, JavaScript, Ruby, Julia, Rust, C++, Bash, Java, C#, and Go. It also includes two database languages: Cypher (for graph databases) and SQL (for relational databases) in order to study the relationships between entities. The main goal of this repository is to highlight the importance of **textbook-quality content (high educational value)** delivered through **code snippets**. All code snippets are carefully written and commented to ensure maximum readability and understandability. Moreover, the use of **if/else control flow** is emphasized to foster the development of effective reasoning skills in LLM models. This repository is inspired by the papers [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) and [The Magic of IF](https://aclanthology.org/2023.findings-acl.574.pdf), which show that LLM models can achieve state-of-the-art results on code-related tasks by training on high-quality data that resembles textbooks and exercises. This repository aims to provide such data for data analysts and ML engineers who want to enhance their knowledge of how LLM models can learn to reason with code. Anyone who wants to reproduce this dataset can use these prompts with other LLM models and compare their results, or you can forge a new prompt from related properties. *Please note that this dataset is not intended for code-generation purposes; it is intended to boost the reasoning capability of models via logical code.* I hope you find this dataset useful and informative! ## Tiny Series Explore the possibilities and limitations of building Small Language Models with these tiny gems of data! - [TinyStories](https://arxiv.org/abs/2305.07759): The paper that sparked my interest in the journey of the tiny-* series. - [tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks): 420k "things of internet" synthetic textbooks. - [tiny-orca-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-orca-textbooks): Synthetic textbook to help the model learn in-context how it should perform a task the right way. - [tiny-webtext](https://huggingface.co/datasets/nampdn-ai/tiny-webtext): A 6GB (4.5M records) variety of diverse webtext enriched with critical thinking methods to make an unbiased English dataset. - [tiny-lessons](https://huggingface.co/datasets/nampdn-ai/tiny-lessons): Subset of the [tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks) dataset, various lessons about "things of internet" augmented in a bite-sized textbook Markdown format. - [tiny-bridgedict](https://huggingface.co/datasets/nampdn-ai/tiny-bridgedict): A dataset that links and transfers knowledge between English, Vietnamese, and Chinese in tiny multilingual models. ### Other small HQ datasets with textbook-like quality - [devdocs.io](https://huggingface.co/datasets/nampdn-ai/devdocs.io): FreeCodeCamp has provided 189k comprehensive API documentation entries across a wide range of tech stacks and programming languages. 
- [sciphi-python-textbook](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-python-textbook) - [textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming) - [sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
mteb/stackexchange-clustering
2022-09-27T19:11:56.000Z
[ "language:en", "region:us" ]
mteb
null
null
null
0
595
--- language: - en ---
McGill-NLP/TopiOCQA
2023-09-29T19:37:48.000Z
[ "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:language-modeling", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100k", "language:en", "license:cc-by-nc-sa-4.0", "conversational-question-answering", "arxiv:2110.00768", "region:us" ]
McGill-NLP
TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.
null
null
4
594
--- annotations_creators: - crowdsourced language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100k task_categories: - text-retrieval - text-generation task_ids: - language-modeling - open-domain-qa pretty_name: Open-domain Conversational Question Answering with Topic Switching tags: - conversational-question-answering --- # Dataset Card for TopiOCQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/) - **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa) - **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768) - **Point of Contact:** [Vaibhav Adlakha](mailto:vaibhav.adlakha@mila.quebec) ### Dataset Summary TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena. ### Languages The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en. ## Additional Information ### Licensing Information TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information ``` @inproceedings{adlakha2022topiocqa, title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching}, author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva}, journal={Transactions of the Association for Computational Linguistics}, volume = {10}, pages = {468-483}, year = {2022}, month = {04}, year={2022}, issn = {2307-387X}, doi = {10.1162/tacl_a_00471}, url = {https://doi.org/10.1162/tacl\_a\_00471}, eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00471/2008126/tacl\_a\_00471.pdf}, } ```
result-kand2-sdxl-wuerst-karlo/6845e847
2023-09-20T22:33:18.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
594
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 211 num_examples: 10 download_size: 1393 dataset_size: 211 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "6845e847" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/SNLI-VE_train
2023-02-07T23:21:35.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
null
1
593
--- dataset_info: features: - name: image dtype: image - name: filename dtype: string - name: premise dtype: string - name: hypothesis dtype: string - name: label dtype: class_label: names: '0': entailment '1': neutral '2': contradiction - name: id dtype: int64 - name: id_image dtype: int64 - name: clip_tags_ViT_L_14 sequence: string - name: blip_caption dtype: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14 sequence: string splits: - name: train num_bytes: 73634118251.385 num_examples: 529527 download_size: 27853612384 dataset_size: 73634118251.385 --- # Dataset Card for "SNLI-VE_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
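A minimal sketch of reading the visual-entailment fields listed above (`premise`, `hypothesis`, `label`), assuming the repository loads directly with the standard `datasets` library:

```python
from datasets import load_dataset

# Assumption: the repository resolves by its Hub id and exposes the "train" split shown above.
ds = load_dataset("Multimodal-Fatima/SNLI-VE_train", split="train")

example = ds[0]
print(example["premise"])
print(example["hypothesis"])
# `label` is a ClassLabel: entailment / neutral / contradiction.
print(ds.features["label"].int2str(example["label"]))
```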
eduagarcia/cnj_benchmarks
2023-07-17T03:23:48.000Z
[ "region:us" ]
eduagarcia
null
null
null
0
593
Entry not found
orgcatorg/multilingual
2023-10-03T13:32:28.000Z
[ "region:us" ]
orgcatorg
null
null
null
0
593
--- dataset_info: - config_name: eng_Latn-lao_Laoo features: - name: translation struct: - name: eng_Latn dtype: string - name: lao_Laoo dtype: string splits: - name: train num_bytes: 42871606 num_examples: 140265 download_size: 23468883 dataset_size: 42871606 - config_name: eng_Latn-mya_Mymr features: - name: translation struct: - name: eng_Latn dtype: string - name: mya_Mymr dtype: string splits: - name: train num_bytes: 70235556 num_examples: 248767 download_size: 34667809 dataset_size: 70235556 - config_name: eng_Latn-tgl_Latn features: - name: translation struct: - name: eng_Latn dtype: string - name: tgl_Latn dtype: string splits: - name: train num_bytes: 759841423 num_examples: 3604573 download_size: 531326540 dataset_size: 759841423 configs: - config_name: eng_Latn-lao_Laoo data_files: - split: train path: eng_Latn-lao_Laoo/train-* - config_name: eng_Latn-mya_Mymr data_files: - split: train path: eng_Latn-mya_Mymr/train-* - config_name: eng_Latn-tgl_Latn data_files: - split: train path: eng_Latn-tgl_Latn/train-* --- # Dataset Card for "multilingual" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
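Each config above is a bilingual pair stored in a `translation` struct keyed by the two language codes; here is a minimal sketch, assuming the repository loads with the standard `datasets` library (the config name below is taken from this card):

```python
from datasets import load_dataset

# Assumption: the repository resolves by its Hub id; "eng_Latn-tgl_Latn" is one of the listed configs.
pair = load_dataset("orgcatorg/multilingual", "eng_Latn-tgl_Latn", split="train")

row = pair[0]
# The `translation` struct is keyed by the two language codes of the chosen config.
print(row["translation"]["eng_Latn"])
print(row["translation"]["tgl_Latn"])
```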
JayalekshmiGopakumar/DocLayexp1
2023-08-30T13:25:15.000Z
[ "region:us" ]
JayalekshmiGopakumar
null
null
null
0
591
--- configs: - config_name: default data_files: - split: test path: data/test-* - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': financial_reports '1': government_tenders '2': manuals '3': laws_and_regulations '4': scientific_articles '5': patents - name: ground_truth dtype: string splits: - name: test num_bytes: 3240643.0 num_examples: 12 - name: train num_bytes: 16492390.0 num_examples: 43 - name: validation num_bytes: 1929905.3125 num_examples: 5 download_size: 21721061 dataset_size: 21662938.3125 --- # Dataset Card for "DocLayexp1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Riksarkivet/test_images_demo
2023-08-31T13:58:13.000Z
[ "task_categories:image-to-text", "language:sv", "HTR", "region:us" ]
Riksarkivet
Demo dataset for the htr demo.
@InProceedings{huggingface:dataset, title = {Small htr examples images}, author={Gabriel Borg}, year={2023} }
null
1
590
--- language: - sv tags: - HTR task_categories: - image-to-text --- # Information This is a demo dataset containing images from the Swedish National Archives, Riksarkivet. To find the images at Riksarkivet: 30002030_00003.jpg = https://sok.riksarkivet.se/bildvisning/30002030_00003 | Image_name | Description | |---|---| | R0001213_00003 | Kommissorialrätt i Bohus län ang trolldomsväsendet, 1671 | | A0065848_00037 | Regementsvis ordnade handlingar 1685 | | 40004028_00007 | Bergskollegium, Relationer och skrivelser angående utländska bergverk, 1698 | | 40005343_00071 | Göta hovrätt, Brottsmålsprotokoll, 1717 | | A0060200_00003 | Trolldom och annan vidskepelse, Rättegångshandlingar samt skrivelser till Göta Hovrätt, 1720 | | A0068662_00092 | Svea hovrätt, protokoll, 1729 | | A0068702_00065 | Svea hovrätt, protokoll, 1750 | | 40004051_00009 | Bergskollegium, Relationer och skrivelser angående utländska bergverk, 1784 | | U0000236_00609 | Hammartingsprotokoll, 1803 | | R0000277_00005 | Beskrivning över provinsen Gästrikland, 1861 | | 30003038_00003 | Göteborgs poliskammare, 1865 | | 30002030_00003 | Göteborgs poliskammare, 1877 | | 30002039_00003 | Göteborgs poliskammare, 1886 | | ... | ... |
mteb/summeval
2022-09-27T19:14:10.000Z
[ "language:en", "region:us" ]
mteb
null
null
null
1
589
--- language: - en --- # SummEval The annotations include summaries generated by 16 models from 100 source news articles (1600 examples in total). Each of the summaries was annotated by 5 independent crowdsource workers and 3 independent experts (8 annotations in total). Summaries were evaluated across 4 dimensions: coherence, consistency, fluency, relevance. Each source news article comes with the original reference from the CNN/DailyMail dataset and 10 additional crowdsourced reference summaries. For this dataset, we averaged the 3 **expert** annotations to get the human scores. source: https://github.com/Yale-LILY/SummEval
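The averaging of the three expert annotations mentioned above is straightforward to reproduce; the sketch below uses a hypothetical record layout (this card does not spell out the field names), with each expert scoring the four dimensions on a numeric scale.

```python
# Hypothetical layout: three expert annotations, each scoring the four dimensions named in the card.
expert_annotations = [
    {"coherence": 4, "consistency": 5, "fluency": 5, "relevance": 4},
    {"coherence": 3, "consistency": 5, "fluency": 4, "relevance": 4},
    {"coherence": 4, "consistency": 4, "fluency": 5, "relevance": 3},
]

dimensions = ["coherence", "consistency", "fluency", "relevance"]
# The "human score" per dimension is simply the mean of the three expert scores.
human_scores = {
    dim: sum(a[dim] for a in expert_annotations) / len(expert_annotations)
    for dim in dimensions
}
print(human_scores)  # e.g. coherence 11/3 ≈ 3.67, consistency 14/3 ≈ 4.67, ...
```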
eraser_multi_rc
2023-04-05T10:05:21.000Z
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "region:us" ]
null
Eraser Multi RC is a dataset for queries over multi-line passages, along with answers and a rationale. Each example in this dataset has the following 5 parts: 1. A Multi-line Passage 2. A Query about the passage 3. An Answer to the query 4. A Classification as to whether the answer is right or wrong 5. An Explanation justifying the classification
@unpublished{eraser2019, title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models}, author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. Wallace} } @inproceedings{MultiRC2018, author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth}, title = {Looking Beyond the Surface:A Challenge Set for Reading Comprehension over Multiple Sentences}, booktitle = {NAACL}, year = {2018} }
null
3
588
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - multiple-choice task_ids: - multiple-choice-qa pretty_name: Eraser MultiRC (Multi-Sentence Reading Comprehension) dataset_info: features: - name: passage dtype: string - name: query_and_answer dtype: string - name: label dtype: class_label: names: '0': 'False' '1': 'True' - name: evidences sequence: string splits: - name: test num_bytes: 9194475 num_examples: 4848 - name: train num_bytes: 47922877 num_examples: 24029 - name: validation num_bytes: 6529020 num_examples: 3214 download_size: 1667550 dataset_size: 63646372 --- # Dataset Card for "eraser_multi_rc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://cogcomp.org/multirc/ - **Repository:** https://github.com/CogComp/multirc - **Paper:** [Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences](https://cogcomp.seas.upenn.edu/page/publication_view/833) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 63.65 MB - **Total amount of disk used:** 65.32 MB ### Dataset Summary MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph. We have designed the dataset with three key challenges in mind: - The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually. - The correct answer(s) is not required to be a span in the text. - The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets. The goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 63.65 MB - **Total amount of disk used:** 65.32 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "evidences": "[\"Allan sat down at his desk and pulled the chair in close .\", \"Opening a side drawer , he took out a piece of paper and his ink...", "label": 0, "passage": "\"Allan sat down at his desk and pulled the chair in close .\\nOpening a side drawer , he took out a piece of paper and his inkpot...", "query_and_answer": "Name few objects said to be in or on Allan 's desk || Eraser" } ``` ### Data Fields The data fields are the same among all splits. #### default - `passage`: a `string` feature. - `query_and_answer`: a `string` feature. - `label`: a classification label, with possible values including `False` (0), `True` (1). - `evidences`: a `list` of `string` features. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|24029| 3214|4848| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information https://github.com/CogComp/multirc/blob/master/LICENSE Research and Academic Use License Cognitive Computation Group University of Illinois at Urbana-Champaign Downloading software implies that you accept the following license terms: Under this Agreement, The Board of Trustees of the University of Illinois ("University"), a body corporate and politic of the State of Illinois with its principal offices at 506 South Wright Street, Urbana, Illinois 61801, U.S.A., on behalf of its Department of Computer Science on the Urbana-Champaign Campus, provides the software ("Software") described in Appendix A, attached hereto and incorporated herein, to the Licensee identified below ("Licensee") subject to the following conditions: 1. Upon execution of this Agreement by Licensee below, the University grants, and Licensee accepts, a roylaty-free, non-exclusive license: A. To use unlimited copies of the Software for its own academic and research purposes. B. To make derivative works. However, if Licensee distributes any derivative work based on or derived from the Software (with such distribution limited to binary form only), then Licensee will (1) notify the University (c/o Professor Dan Roth, e-mail: danr@cs.uiuc.edu) regarding its distribution of the derivative work and provide a copy if requested, and (2) clearly notify users that such derivative work is a modified version and not the original Software distributed by the University. C. To redistribute (sublicense) derivative works based on the Software in binary form only to third parties provided that (1) the copyright notice and any accompanying legends or proprietary notices are reproduced on all copies, (2) no royalty is charged for such copies, and (3) third parties are restricted to using the derivative work for academic and research purposes only, without further sublicensing rights. No license is granted herein that would permit Licensee to incorporate the Software into a commercial product, or to otherwise commercially exploit the Software. Should Licensee wish to make commercial use of the Software, Licensee should contact the University, c/o the Office of Technology Management ("OTM") to negotiate an appropriate license for such commercial use. To contact the OTM: otmmailaccount@ad.uiuc.edu; telephone: (217)333-3781; fax: (217) 265-5530. 2. 
THE UNIVERSITY GIVES NO WARRANTIES, EITHER EXPRESSED OR IMPLIED, FOR THE SOFTWARE AND/OR ASSOCIATED MATERIALS PROVIDED UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, WARRANTY OF MERCHANTABILITY AND WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS. 3. Licensee understands the Software is a research tool for which no warranties as to capabilities or accuracy are made, and Licensee accepts the Software on an "as is, with all defects" basis, without maintenance, debugging , support or improvement. Licensee assumes the entire risk as to the results and performance of the Software and/or associated materials. Licensee agrees that University shall not be held liable for any direct, indirect, consequential, or incidental damages with respect to any claim by Licensee or any third party on account of or arising from this Agreement or use of the Software and/or associated materials. 4. Licensee understands the Software is proprietary to the University. Licensee will take all reasonable steps to insure that the source code is protected and secured from unauthorized disclosure, use, or release and will treat it with at least the same level of care as Licensee would use to protect and secure its own proprietary computer programs and/or information, but using no less than reasonable care. 5. In the event that Licensee shall be in default in the performance of any material obligations under this Agreement, and if the default has not been remedied within sixty (60) days after the date of notice in writing of such default, University may terminate this Agreement by written notice. In the event of termination, Licensee shall promptly return to University the original and any copies of licensed Software in Licensee's possession. In the event of any termination of this Agreement, any and all sublicenses granted by Licensee to third parties pursuant to this Agreement (as permitted by this Agreement) prior to the date of such termination shall nevertheless remain in full force and effect. 6. The Software was developed, in part, with support from the National Science Foundation, and the Federal Government has certain license rights in the Software. 7. This Agreement shall be construed and interpreted in accordance with the laws of the State of Illinois, U.S.A.. 8. This Agreement shall be subject to all United States Government laws and regulations now and hereafter applicable to the subject matter of this Agreement, including specifically the Export Law provisions of the Departments of Commerce and State. Licensee will not export or re-export the Software without the appropriate United States or foreign government license. By its registration below, Licensee confirms that it understands the terms and conditions of this Agreement, and agrees to be bound by them. This Agreement shall become effective as of the date of execution by Licensee. ### Citation Information ``` @unpublished{eraser2019, title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models}, author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. 
Wallace} } @inproceedings{MultiRC2018, author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth}, title = {Looking Beyond the Surface:A Challenge Set for Reading Comprehension over Multiple Sentences}, booktitle = {Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL)}, year = {2018} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
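Given the fields documented above (`passage`, `query_and_answer`, `label`, `evidences`), here is a minimal sketch of inspecting one example, assuming the dataset loads by its Hub id with the standard `datasets` library:

```python
from datasets import load_dataset

# Assumption: the dataset resolves by the id "eraser_multi_rc" and exposes a "validation" split.
ds = load_dataset("eraser_multi_rc", split="validation")

example = ds[0]
print(example["query_and_answer"])                     # query and candidate answer, "||"-separated
print(ds.features["label"].int2str(example["label"]))  # "True" or "False"
for evidence in example["evidences"]:                  # rationale sentences supporting the label
    print("-", evidence)
```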
snips_built_in_intents
2023-01-25T14:44:32.000Z
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc0-1.0", "arxiv:1805.10190", "region:us" ]
null
Snips' built-in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at https://github.com/sonos/nlu-benchmark in the folder 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes. The related paper mentioned on the GitHub page is https://arxiv.org/abs/1805.10190 and a related Medium post is https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d.
@article{DBLP:journals/corr/abs-1805-10190, author = {Alice Coucke and Alaa Saade and Adrien Ball and Th{\'{e}}odore Bluche and Alexandre Caulier and David Leroy and Cl{\'{e}}ment Doumouro and Thibault Gisselbrecht and Francesco Caltagirone and Thibaut Lavril and Ma{\"{e}}l Primet and Joseph Dureau}, title = {Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces}, journal = {CoRR}, volume = {abs/1805.10190}, year = {2018}, url = {http://arxiv.org/abs/1805.10190}, archivePrefix = {arXiv}, eprint = {1805.10190}, timestamp = {Mon, 13 Aug 2018 16:46:59 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1805-10190.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
4
587
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc0-1.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - intent-classification paperswithcode_id: snips pretty_name: SNIPS Natural Language Understanding benchmark dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': ComparePlaces '1': RequestRide '2': GetWeather '3': SearchPlace '4': GetPlaceDetails '5': ShareCurrentLocation '6': GetTrafficInformation '7': BookRestaurant '8': GetDirections '9': ShareETA splits: - name: train num_bytes: 19431 num_examples: 328 download_size: 9130264 dataset_size: 19431 train-eval-index: - config: default task: text-classification task_id: multi_class_classification train_split: train col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for Snips Built In Intents ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents - **Repository:** https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents - **Paper:** https://arxiv.org/abs/1805.10190 - **Point of Contact:** The Snips team has joined Sonos in November 2019. These open datasets remain available and their access is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any question. ### Dataset Summary Snips' built in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at https://github.com/sonos/nlu-benchmark in folder 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes. A related Medium post is https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d. ### Supported Tasks and Leaderboards There are no related shared tasks that we are aware of. 
### Languages English ## Dataset Structure ### Data Instances The dataset contains 328 utterances over 10 intent classes. Each sample looks like: `{'label': 8, 'text': 'Transit directions to Barcelona Pizza.'}` ### Data Fields - `text`: The text utterance expressing some user intent. - `label`: The intent label of the piece of text utterance. ### Data Splits The source data is not split. ## Dataset Creation ### Curation Rationale The dataset was originally created to compare the performance of a number of voice assistants. However, the labelled utterances are useful for developing and benchmarking text chatbots as well. ### Source Data #### Initial Data Collection and Normalization It is not clear how the data was collected. From the Medium post: `The benchmark relies on a set of 328 queries built by the business team at Snips, and kept secret from data scientists and engineers throughout the development of the solution.` #### Who are the source language producers? Originally prepared by snips.ai. The Snips team has since joined Sonos in November 2019. These open datasets remain available and their access is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any question. ### Annotations #### Annotation process It is not clear how the data was collected. From the Medium post: `The benchmark relies on a set of 328 queries built by the business team at Snips, and kept secret from data scientists and engineers throughout the development of the solution.` #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Originally prepared by snips.ai. The Snips team has since joined Sonos in November 2019. These open datasets remain available and their access is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any question. ### Licensing Information The source data is licensed under Creative Commons Zero v1.0 Universal. ### Citation Information Any publication based on these datasets must include a full citation to the following paper in which the results were published by the Snips Team: Coucke A. et al., "Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces." CoRR 2018, https://arxiv.org/abs/1805.10190 ### Contributions Thanks to [@bduvenhage](https://github.com/bduvenhage) for adding this dataset.
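A minimal sketch of the intent-classification fields described above (`text` and the 10-class `label`), assuming the dataset loads by its Hub id with the standard `datasets` library; only a train split is listed in this card.

```python
from collections import Counter

from datasets import load_dataset

# Assumption: the dataset resolves by the id "snips_built_in_intents".
ds = load_dataset("snips_built_in_intents", split="train")

example = ds[0]
intent = ds.features["label"].int2str(example["label"])  # e.g. "BookRestaurant"
print(example["text"], "->", intent)

# Per-intent counts over the 328 utterances, using the label names listed in the card.
counts = Counter(ds.features["label"].int2str(label) for label in ds["label"])
print(counts)
```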
lmsys/lmsys-chat-1m
2023-10-04T17:40:32.000Z
[ "task_categories:conversational", "size_categories:1M<n<10M", "arxiv:2309.11998", "region:us" ]
lmsys
null
null
null
220
587
--- size_categories: - 1M<n<10M task_categories: - conversational extra_gated_prompt: You agree to the [LMSYS-Chat-1M Dataset License Agreement](https://huggingface.co/datasets/lmsys/lmsys-chat-1m#lmsys-chat-1m-dataset-license-agreement). extra_gated_fields: Name: text Email: text Affiliation: text Country: text extra_gated_button_content: I agree to the terms and conditions of the LMSYS-Chat-1M Dataset License Agreement. configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: conversation_id dtype: string - name: model dtype: string - name: conversation list: - name: content dtype: string - name: role dtype: string - name: turn dtype: int64 - name: language dtype: string - name: openai_moderation list: - name: categories struct: - name: harassment dtype: bool - name: harassment/threatening dtype: bool - name: hate dtype: bool - name: hate/threatening dtype: bool - name: self-harm dtype: bool - name: self-harm/instructions dtype: bool - name: self-harm/intent dtype: bool - name: sexual dtype: bool - name: sexual/minors dtype: bool - name: violence dtype: bool - name: violence/graphic dtype: bool - name: category_scores struct: - name: harassment dtype: float64 - name: harassment/threatening dtype: float64 - name: hate dtype: float64 - name: hate/threatening dtype: float64 - name: self-harm dtype: float64 - name: self-harm/instructions dtype: float64 - name: self-harm/intent dtype: float64 - name: sexual dtype: float64 - name: sexual/minors dtype: float64 - name: violence dtype: float64 - name: violence/graphic dtype: float64 - name: flagged dtype: bool - name: redacted dtype: bool splits: - name: train num_bytes: 2626438904 num_examples: 1000000 download_size: 1488850250 dataset_size: 2626438904 --- ## LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset This dataset contains one million real-world conversations with 25 state-of-the-art LLMs. It is collected from 210K unique IP addresses in the wild on the [Vicuna demo and Chatbot Arena website](https://chat.lmsys.org/) from April to August 2023. Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag. User consent is obtained through the "Terms of use" section on the data collection website. To ensure the safe release of data, we have made our best efforts to remove all conversations that contain personally identifiable information (PII). In addition, we have included the OpenAI moderation API output for each message. However, we have chosen to keep unsafe conversations so that researchers can study the safety-related questions associated with LLM usage in real-world scenarios as well as the OpenAI moderation process. For more details, please refer to the paper: https://arxiv.org/abs/2309.11998 **Basic Statistics** | Key | Value | | --- | --- | | # Conversations | 1,000,000 | | # Models | 25 | | # Users | 210,479 | | # Languages | 154 | | Avg. # Turns per Sample | 2.0 | | Avg. # Tokens per Prompt | 69.5 | | Avg. # Tokens per Response | 214.5 | **PII Redaction** We partnered with the [OpaquePrompts](https://opaqueprompts.opaque.co/) team to redact person names in this dataset to protect user privacy. Names like "Mary" and "James" in a conversation will appear as "NAME_1" and "NAME_2". For example: ```json Raw: [ { "content": "Write me a bio. My Name is Mary I am a student who is currently a beginner free lancer. I worked with James in the past ..." 
}] Redacted: [ { "content": "Write me a bio. My Name is NAME_1 I am a student who is currently a beginner free lancer. I worked with NAME_2 in the past ..." }] ``` Each conversation includes a "redacted" field to indicate if it has been redacted. This process may impact data quality and occasionally lead to incorrect redactions. We are working on improving the redaction quality and will release improved versions in the future. If you want to access the raw conversation data, please fill out [the form](https://docs.google.com/forms/d/1PZw67e19l0W3oCiQOjzSyZvXfOemhg6LCY0XzVmOUx0/edit) with details about your intended use cases. ## Uniqueness and Potential Usage This dataset features large-scale real-world conversations with LLMs. We believe it will help the AI research community answer important questions around topics like: - Characteristics and distributions of real-world user prompts - AI safety and content moderation - Training instruction-following models - Improving and evaluating LLM evaluation methods - Model selection and request dispatching algorithms For more details, please refer to the paper: https://arxiv.org/abs/2309.11998 ## LMSYS-Chat-1M Dataset License Agreement This Agreement contains the terms and conditions that govern your access and use of the LMSYS-Chat-1M Dataset (as defined above). You may not use the LMSYS-Chat-1M Dataset if you do not accept this Agreement. By clicking to accept, accessing the LMSYS-Chat-1M Dataset, or both, you hereby agree to the terms of the Agreement. If you are agreeing to be bound by the Agreement on behalf of your employer or another entity, you represent and warrant that you have full legal authority to bind your employer or such entity to this Agreement. If you do not have the requisite authority, you may not accept the Agreement or access the LMSYS-Chat-1M Dataset on behalf of your employer or another entity. - Safety and Moderation: **This dataset contains unsafe conversations that may be perceived as offensive or unsettling.** User should apply appropriate filters and safety measures before utilizing this dataset for training dialogue agents. - Non-Endorsement: The views and opinions depicted in this dataset **do not reflect** the perspectives of the researchers or affiliated institutions engaged in the data collection process. - Legal Compliance: You are mandated to use it in adherence with all pertinent laws and regulations. - Model Specific Terms: When leveraging direct outputs of a specific model, users must adhere to its corresponding terms of use. - Non-Identification: You **must not** attempt to identify the identities of individuals or infer any sensitive personal data encompassed in this dataset. - Prohibited Transfers: You should not distribute, copy, disclose, assign, sublicense, embed, host, or otherwise transfer the dataset to any third party. - Right to Request Deletion: At any time, we may require you to delete all copies of the conversation dataset (in whole or in part) in your possession and control. You will promptly comply with any and all such requests. Upon our request, you shall provide us with written confirmation of your compliance with such requirement. - Termination: We may, at any time, for any reason or for no reason, terminate this Agreement, effective immediately upon notice to you. 
Upon termination, the license granted to you hereunder will immediately terminate, and you will immediately stop using the LMSYS-Chat-1M Dataset and destroy all copies of the LMSYS-Chat-1M Dataset and related materials in your possession or control. - Limitation of Liability: IN NO EVENT WILL WE BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, PUNITIVE, SPECIAL, OR INDIRECT DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF OR RELATING TO THIS AGREEMENT OR ITS SUBJECT MATTER, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Subject to your compliance with the terms and conditions of this Agreement, we grant to you, a limited, non-exclusive, non-transferable, non-sublicensable license to use the LMSYS-Chat-1M Dataset, including the conversation data and annotations, to research, develop, and improve software, algorithms, machine learning models, techniques, and technologies for both research and commercial purposes. ## Citation ``` @misc{zheng2023lmsyschat1m, title={LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset}, author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Tianle Li and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zhuohan Li and Zi Lin and Eric. P Xing and Joseph E. Gonzalez and Ion Stoica and Hao Zhang}, year={2023}, eprint={2309.11998}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
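Because each record carries the `redacted` flag and the per-message `openai_moderation` output described above, a common first step is to filter on them; here is a minimal sketch, assuming you have accepted the gating terms on the Hub and are authenticated locally.

```python
from datasets import load_dataset

# Assumption: access to this gated repository has been granted and `huggingface-cli login` was run.
ds = load_dataset("lmsys/lmsys-chat-1m", split="train")

# Drop conversations that were redacted or flagged by any OpenAI moderation call.
clean = ds.filter(
    lambda ex: not ex["redacted"] and not any(m["flagged"] for m in ex["openai_moderation"])
)

# `conversation` is a list of {"role": ..., "content": ...} turns in OpenAI API format.
for turn in clean[0]["conversation"]:
    print(turn["role"], ":", turn["content"][:80])
```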
silicone
2023-06-01T14:59:53.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:dialogue-modeling", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "emotion-classification", "dialogue-act-classification", "arxiv:2009.11152", "region:us" ]
null
The Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE) benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems specifically designed for spoken language. All datasets are in the English language and cover a variety of domains including daily life, scripted scenarios, joint task completion, phone call conversations, and television dialogue. Some datasets additionally include emotion and/or sentiment labels.
@inproceedings{chapuis-etal-2020-hierarchical, title = "Hierarchical Pre-training for Sequence Labelling in Spoken Dialog", author = "Chapuis, Emile and Colombo, Pierre and Manica, Matteo and Labeau, Matthieu and Clavel, Chlo{\'e}", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.239", doi = "10.18653/v1/2020.findings-emnlp.239", pages = "2636--2648", abstract = "Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE benchmark (SILICONE). SILICONE is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles: a large corpus of spoken dialog containing over 2.3 billion of tokens. We demonstrate how hierarchical encoders achieve competitive results with consistently fewer parameters compared to state-of-the-art models and we show their importance for both pre-training and fine-tuning.", }
null
7
585
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M - 10K<n<100K - 1K<n<10K source_datasets: - original task_categories: - text-generation - fill-mask - text-classification task_ids: - dialogue-modeling - language-modeling - masked-language-modeling - sentiment-classification - text-scoring pretty_name: SILICONE Benchmark tags: - emotion-classification - dialogue-act-classification dataset_info: - config_name: dyda_da features: - name: Utterance dtype: string - name: Dialogue_Act dtype: string - name: Dialogue_ID dtype: string - name: Label dtype: class_label: names: '0': commissive '1': directive '2': inform '3': question - name: Idx dtype: int32 splits: - name: train num_bytes: 8346638 num_examples: 87170 - name: validation num_bytes: 764277 num_examples: 8069 - name: test num_bytes: 740226 num_examples: 7740 download_size: 8874925 dataset_size: 9851141 - config_name: dyda_e features: - name: Utterance dtype: string - name: Emotion dtype: string - name: Dialogue_ID dtype: string - name: Label dtype: class_label: names: '0': anger '1': disgust '2': fear '3': happiness '4': no emotion '5': sadness '6': surprise - name: Idx dtype: int32 splits: - name: train num_bytes: 8547111 num_examples: 87170 - name: validation num_bytes: 781445 num_examples: 8069 - name: test num_bytes: 757670 num_examples: 7740 download_size: 8874925 dataset_size: 10086226 - config_name: iemocap features: - name: Dialogue_ID dtype: string - name: Utterance_ID dtype: string - name: Utterance dtype: string - name: Emotion dtype: string - name: Label dtype: class_label: names: '0': ang '1': dis '2': exc '3': fea '4': fru '5': hap '6': neu '7': oth '8': sad '9': sur '10': xxx - name: Idx dtype: int32 splits: - name: train num_bytes: 908180 num_examples: 7213 - name: validation num_bytes: 100969 num_examples: 805 - name: test num_bytes: 254248 num_examples: 2021 download_size: 1158778 dataset_size: 1263397 - config_name: maptask features: - name: Speaker dtype: string - name: Utterance dtype: string - name: Dialogue_Act dtype: string - name: Label dtype: class_label: names: '0': acknowledge '1': align '2': check '3': clarify '4': explain '5': instruct '6': query_w '7': query_yn '8': ready '9': reply_n '10': reply_w '11': reply_y - name: Idx dtype: int32 splits: - name: train num_bytes: 1260413 num_examples: 20905 - name: validation num_bytes: 178184 num_examples: 2963 - name: test num_bytes: 171806 num_examples: 2894 download_size: 1048357 dataset_size: 1610403 - config_name: meld_e features: - name: Utterance dtype: string - name: Speaker dtype: string - name: Emotion dtype: string - name: Dialogue_ID dtype: string - name: Utterance_ID dtype: string - name: Label dtype: class_label: names: '0': anger '1': disgust '2': fear '3': joy '4': neutral '5': sadness '6': surprise - name: Idx dtype: int32 splits: - name: train num_bytes: 916337 num_examples: 9989 - name: validation num_bytes: 100234 num_examples: 1109 - name: test num_bytes: 242352 num_examples: 2610 download_size: 1553014 dataset_size: 1258923 - config_name: meld_s features: - name: Utterance dtype: string - name: Speaker dtype: string - name: Sentiment dtype: string - name: Dialogue_ID dtype: string - name: Utterance_ID dtype: string - name: Label dtype: class_label: names: '0': negative '1': neutral '2': positive - name: Idx dtype: int32 splits: - name: train num_bytes: 930405 num_examples: 9989 - name: validation num_bytes: 101801 
num_examples: 1109 - name: test num_bytes: 245873 num_examples: 2610 download_size: 1553014 dataset_size: 1278079 - config_name: mrda features: - name: Utterance_ID dtype: string - name: Dialogue_Act dtype: string - name: Channel_ID dtype: string - name: Speaker dtype: string - name: Dialogue_ID dtype: string - name: Utterance dtype: string - name: Label dtype: class_label: names: '0': s '1': d '2': b '3': f '4': q - name: Idx dtype: int32 splits: - name: train num_bytes: 9998857 num_examples: 83943 - name: validation num_bytes: 1143286 num_examples: 9815 - name: test num_bytes: 1807462 num_examples: 15470 download_size: 10305848 dataset_size: 12949605 - config_name: oasis features: - name: Speaker dtype: string - name: Utterance dtype: string - name: Dialogue_Act dtype: string - name: Label dtype: class_label: names: '0': accept '1': ackn '2': answ '3': answElab '4': appreciate '5': backch '6': bye '7': complete '8': confirm '9': correct '10': direct '11': directElab '12': echo '13': exclaim '14': expressOpinion '15': expressPossibility '16': expressRegret '17': expressWish '18': greet '19': hold '20': identifySelf '21': inform '22': informCont '23': informDisc '24': informIntent '25': init '26': negate '27': offer '28': pardon '29': raiseIssue '30': refer '31': refuse '32': reqDirect '33': reqInfo '34': reqModal '35': selfTalk '36': suggest '37': thank '38': informIntent-hold '39': correctSelf '40': expressRegret-inform '41': thank-identifySelf - name: Idx dtype: int32 splits: - name: train num_bytes: 887018 num_examples: 12076 - name: validation num_bytes: 112185 num_examples: 1513 - name: test num_bytes: 119254 num_examples: 1478 download_size: 802002 dataset_size: 1118457 - config_name: sem features: - name: Utterance dtype: string - name: NbPairInSession dtype: string - name: Dialogue_ID dtype: string - name: SpeechTurn dtype: string - name: Speaker dtype: string - name: Sentiment dtype: string - name: Label dtype: class_label: names: '0': Negative '1': Neutral '2': Positive - name: Idx dtype: int32 splits: - name: train num_bytes: 496168 num_examples: 4264 - name: validation num_bytes: 57896 num_examples: 485 - name: test num_bytes: 100072 num_examples: 878 download_size: 513689 dataset_size: 654136 - config_name: swda features: - name: Utterance dtype: string - name: Dialogue_Act dtype: string - name: From_Caller dtype: string - name: To_Caller dtype: string - name: Topic dtype: string - name: Dialogue_ID dtype: string - name: Conv_ID dtype: string - name: Label dtype: class_label: names: '0': sd '1': b '2': sv '3': '%' '4': aa '5': ba '6': fc '7': qw '8': nn '9': bk '10': h '11': qy^d '12': bh '13': ^q '14': bf '15': fo_o_fw_"_by_bc '16': fo_o_fw_by_bc_" '17': na '18': ad '19': ^2 '20': b^m '21': qo '22': qh '23': ^h '24': ar '25': ng '26': br '27': 'no' '28': fp '29': qrr '30': arp_nd '31': t3 '32': oo_co_cc '33': aap_am '34': t1 '35': bd '36': ^g '37': qw^d '38': fa '39': ft '40': + '41': x '42': ny '43': sv_fx '44': qy_qr '45': ba_fe - name: Idx dtype: int32 splits: - name: train num_bytes: 20499788 num_examples: 190709 - name: validation num_bytes: 2265898 num_examples: 21203 - name: test num_bytes: 291471 num_examples: 2714 download_size: 16227500 dataset_size: 23057157 config_names: - dyda_da - dyda_e - iemocap - maptask - meld_e - meld_s - mrda - oasis - sem - swda --- # Dataset Card for SILICONE Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and 
Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [N/A] - **Repository:** https://github.com/eusip/SILICONE-benchmark - **Paper:** https://arxiv.org/abs/2009.11152 - **Leaderboard:** [N/A] - **Point of Contact:** [Ebenge Usip](ebenge.usip@telecom-paris.fr) ### Dataset Summary The Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE) benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems specifically designed for spoken language. All datasets are in the English language and cover a variety of domains including daily life, scripted scenarios, joint task completion, phone call conversations, and television dialogue. Some datasets additionally include emotion and/or sentiment labels. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English. ## Dataset Structure ### Data Instances #### DailyDialog Act Corpus (Dialogue Act) For the `dyda_da` configuration one example from the dataset is: ``` { 'Utterance': "the taxi drivers are on strike again .", 'Dialogue_Act': 2, # "inform" 'Dialogue_ID': "2" } ``` #### DailyDialog Act Corpus (Emotion) For the `dyda_e` configuration one example from the dataset is: ``` { 'Utterance': "'oh , breaktime flies .'", 'Emotion': 5, # "sadness" 'Dialogue_ID': "997" } ``` #### Interactive Emotional Dyadic Motion Capture (IEMOCAP) database For the `iemocap` configuration one example from the dataset is: ``` { 'Dialogue_ID': "Ses04F_script03_2", 'Utterance_ID': "Ses04F_script03_2_F025", 'Utterance': "You're quite insufferable. I expect it's because you're drunk.", 'Emotion': 0, # "ang" } ``` #### HCRC MapTask Corpus For the `maptask` configuration one example from the dataset is: ``` { 'Speaker': "f", 'Utterance': "i think that would bring me over the crevasse", 'Dialogue_Act': 4, # "explain" } ``` #### Multimodal EmotionLines Dataset (Emotion) For the `meld_e` configuration one example from the dataset is: ``` { 'Utterance': "'Push 'em out , push 'em out , harder , harder .'", 'Speaker': "Joey", 'Emotion': 3, # "joy" 'Dialogue_ID': "1", 'Utterance_ID': "2" } ``` #### Multimodal EmotionLines Dataset (Sentiment) For the `meld_s` configuration one example from the dataset is: ``` { 'Utterance': "'Okay , y'know what ? 
There is no more left , left !'", 'Speaker': "Rachel", 'Sentiment': 0, # "negative" 'Dialogue_ID': "2", 'Utterance_ID': "4" } ``` #### ICSI MRDA Corpus For the `mrda` configuration one example from the dataset is: ``` { 'Utterance_ID': "Bed006-c2_0073656_0076706", 'Dialogue_Act': 0, # "s" 'Channel_ID': "Bed006-c2", 'Speaker': "mn015", 'Dialogue_ID': "Bed006", 'Utterance': "keith is not technically one of us yet ." } ``` #### BT OASIS Corpus For the `oasis` configuration one example from the dataset is: ``` { 'Speaker': "b", 'Utterance': "when i rang up um when i rang to find out why she said oh well your card's been declined", 'Dialogue_Act': 21, # "inform" } ``` #### SEMAINE database For the `sem` configuration one example from the dataset is: ``` { 'Utterance': "can you think of somebody who is like that ?", 'NbPairInSession': "11", 'Dialogue_ID': "59", 'SpeechTurn': "674", 'Speaker': "Agent", 'Sentiment': 1, # "Neutral" } ``` #### Switchboard Dialog Act (SwDA) Corpus For the `swda` configuration one example from the dataset is: ``` { 'Utterance': "but i 'd probably say that 's roughly right .", 'Dialogue_Act': 33, # "aap_am" 'From_Caller': "1255", 'To_Caller': "1087", 'Topic': "CRIME", 'Dialogue_ID': "818", 'Conv_ID': "sw2836", } ``` ### Data Fields For the `dyda_da` configuration, the different fields are: - `Utterance`: Utterance as a string. - `Dialogue_Act`: Dialog act label of the utterance. It can be one of "commissive" (0), "directive" (1), "inform" (2) or "question" (3). - `Dialogue_ID`: identifier of the dialogue as a string. For the `dyda_e` configuration, the different fields are: - `Utterance`: Utterance as a string. - `Emotion`: Emotion label of the utterance. It can be one of "anger" (0), "disgust" (1), "fear" (2), "happiness" (3), "no emotion" (4), "sadness" (5) or "surprise" (6). - `Dialogue_ID`: identifier of the dialogue as a string. For the `iemocap` configuration, the different fields are: - `Dialogue_ID`: identifier of the dialogue as a string. - `Utterance_ID`: identifier of the utterance as a string. - `Utterance`: Utterance as a string. - `Emotion`: Emotion label of the utterance. It can be one of "Anger" (0), "Disgust" (1), "Excitement" (2), "Fear" (3), "Frustration" (4), "Happiness" (5), "Neutral" (6), "Other" (7), "Sadness" (8), "Surprise" (9) or "Unknown" (10). For the `maptask` configuration, the different fields are: - `Speaker`: identifier of the speaker as a string. - `Utterance`: Utterance as a string. - `Dialogue_Act`: Dialog act label of the utterance. It can be one of "acknowledge" (0), "align" (1), "check" (2), "clarify" (3), "explain" (4), "instruct" (5), "query_w" (6), "query_yn" (7), "ready" (8), "reply_n" (9), "reply_w" (10) or "reply_y" (11). For the `meld_e` configuration, the different fields are: - `Utterance`: Utterance as a string. - `Speaker`: Speaker as a string. - `Emotion`: Emotion label of the utterance. It can be one of "anger" (0), "disgust" (1), "fear" (2), "joy" (3), "neutral" (4), "sadness" (5) or "surprise" (6). - `Dialogue_ID`: identifier of the dialogue as a string. - `Utterance_ID`: identifier of the utterance as a string. For the `meld_s` configuration, the different fields are: - `Utterance`: Utterance as a string. - `Speaker`: Speaker as a string. - `Sentiment`: Sentiment label of the utterance. It can be one of "negative" (0), "neutral" (1) or "positive" (2). - `Dialogue_ID`: identifier of the dialogue as a string. - `Utterance_ID`: identifier of the utterance as a string. 
For the `mrda` configuration, the different fields are: - `Utterance_ID`: identifier of the utterance as a string. - `Dialogue_Act`: Dialog act label of the utterance. It can be one of "s" (0) [Statement/Subjective Statement], "d" (1) [Declarative Question], "b" (2) [Backchannel], "f" (3) [Follow-me] or "q" (4) [Question]. - `Channel_ID`: identifier of the channel as a string. - `Speaker`: identifier of the speaker as a string. - `Dialogue_ID`: identifier of the dialogue as a string. - `Utterance`: Utterance as a string. For the `oasis` configuration, the different fields are: - `Speaker`: identifier of the speaker as a string. - `Utterance`: Utterance as a string. - `Dialogue_Act`: Dialog act label of the utterance. It can be one of "accept" (0), "ackn" (1), "answ" (2), "answElab" (3), "appreciate" (4), "backch" (5), "bye" (6), "complete" (7), "confirm" (8), "correct" (9), "direct" (10), "directElab" (11), "echo" (12), "exclaim" (13), "expressOpinion" (14), "expressPossibility" (15), "expressRegret" (16), "expressWish" (17), "greet" (18), "hold" (19), "identifySelf" (20), "inform" (21), "informCont" (22), "informDisc" (23), "informIntent" (24), "init" (25), "negate" (26), "offer" (27), "pardon" (28), "raiseIssue" (29), "refer" (30), "refuse" (31), "reqDirect" (32), "reqInfo" (33), "reqModal" (34), "selfTalk" (35), "suggest" (36), "thank" (37), "informIntent-hold" (38), "correctSelf" (39), "expressRegret-inform" (40) or "thank-identifySelf" (41). For the `sem` configuration, the different fields are: - `Utterance`: Utterance as a string. - `NbPairInSession`: number of utterance pairs in a dialogue. - `Dialogue_ID`: identifier of the dialogue as a string. - `SpeechTurn`: SpeakerTurn as a string. - `Speaker`: Speaker as a string. - `Sentiment`: Sentiment label of the utterance. It can be "Negative", "Neutral" or "Positive". For the `swda` configuration, the different fields are: `Utterance`: Utterance as a string. `Dialogue_Act`: Dialogue act label of the utterance. It can be "sd" (0) [Statement-non-opinion], "b" (1) [Acknowledge (Backchannel)], "sv" (2) [Statement-opinion], "%" (3) [Uninterpretable], "aa" (4) [Agree/Accept], "ba" (5) [Appreciation], "fc" (6) [Conventional-closing], "qw" (7) [Wh-Question], "nn" (8) [No Answers], "bk" (9) [Response Acknowledgement], "h" (10) [Hedge], "qy^d" (11) [Declarative Yes-No-Question], "bh" (12) [Backchannel in Question Form], "^q" (13) [Quotation], "bf" (14) [Summarize/Reformulate], 'fo_o_fw_"_by_bc' (15) [Other], 'fo_o_fw_by_bc_"' (16) [Other], "na" (17) [Affirmative Non-yes Answers], "ad" (18) [Action-directive], "^2" (19) [Collaborative Completion], "b^m" (20) [Repeat-phrase], "qo" (21) [Open-Question], "qh" (22) [Rhetorical-Question], "^h" (23) [Hold Before Answer/Agreement], "ar" (24) [Reject], "ng" (25) [Negative Non-no Answers], "br" (26) [Signal-non-understanding], "no" (27) [Other Answers], "fp" (28) [Conventional-opening], "qrr" (29) [Or-Clause], "arp_nd" (30) [Dispreferred Answers], "t3" (31) [3rd-party-talk], "oo_co_cc" (32) [Offers, Options Commits], "aap_am" (33) [Maybe/Accept-part], "t1" (34) [Downplayer], "bd" (35) [Self-talk], "^g" (36) [Tag-Question], "qw^d" (37) [Declarative Wh-Question], "fa" (38) [Apology], "ft" (39) [Thanking], "+" (40) [Unknown], "x" (41) [Unknown], "ny" (42) [Unknown], "sv_fx" (43) [Unknown], "qy_qr" (44) [Unknown] or "ba_fe" (45) [Unknown]. `From_Caller`: identifier of the from caller as a string. `To_Caller`: identifier of the to caller as a string. `Topic`: Topic as a string. 
`Dialogue_ID`: identifier of the dialogue as a string. `Conv_ID`: identifier of the conversation as a string. ### Data Splits | Dataset name | Train | Valid | Test | | ------------ | ----- | ----- | ---- | | dyda_da | 87170 | 8069 | 7740 | | dyda_e | 87170 | 8069 | 7740 | | iemocap | 7213 | 805 | 2021 | | maptask | 20905 | 2963 | 2894 | | meld_e | 9989 | 1109 | 2610 | | meld_s | 9989 | 1109 | 2610 | | mrda | 83944 | 9815 | 15470 | | oasis | 12076 | 1513 | 1478 | | sem | 4264 | 485 | 878 | | swda | 190709 | 21203 | 2714 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Benchmark Curators Emile Chapuis, Pierre Colombo, Ebenge Usip. ### Licensing Information This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information ``` @inproceedings{chapuis-etal-2020-hierarchical, title = "Hierarchical Pre-training for Sequence Labelling in Spoken Dialog", author = "Chapuis, Emile and Colombo, Pierre and Manica, Matteo and Labeau, Matthieu and Clavel, Chlo{\'e}", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.239", doi = "10.18653/v1/2020.findings-emnlp.239", pages = "2636--2648", abstract = "Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE benchmark (SILICONE). SILICONE is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles: a large corpus of spoken dialog containing over 2.3 billion of tokens. We demonstrate how hierarchical encoders achieve competitive results with consistently fewer parameters compared to state-of-the-art models and we show their importance for both pre-training and fine-tuning.", } ``` ### Contributions Thanks to [@eusip](https://github.com/eusip) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
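A minimal loading sketch for one SILICONE configuration; the Hub identifier `silicone` is assumed here, so substitute the actual repository id if it differs:

```python
from datasets import load_dataset

# Load the DailyDialog dialogue-act subset; the other config names follow the table above.
dyda_da = load_dataset("silicone", "dyda_da")
print(dyda_da)  # DatasetDict with train / validation / test splits

# Map the integer Label back to its class name ("commissive", "directive", ...).
label_names = dyda_da["train"].features["Label"].names
example = dyda_da["train"][0]
print(example["Utterance"], "->", label_names[example["Label"]])
```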
result-kand2-sdxl-wuerst-karlo/ce65a06b
2023-09-21T02:55:46.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
585
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 187 num_examples: 10 download_size: 1357 dataset_size: 187 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ce65a06b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mteb/arxiv-clustering-s2s
2022-09-27T19:12:49.000Z
[ "language:en", "region:us" ]
mteb
null
null
null
0
581
--- language: - en ---
HumanCompatibleAI/ppo-CartPole-v1
2023-07-18T14:43:49.000Z
[ "region:us" ]
HumanCompatibleAI
null
null
null
0
578
--- dataset_info: features: - name: obs sequence: sequence: float32 - name: acts sequence: int64 - name: infos sequence: string - name: terminal dtype: bool - name: rews sequence: float64 splits: - name: train num_bytes: 2103613 num_examples: 100 download_size: 1263834 dataset_size: 2103613 --- # Dataset Card for "ppo-CartPole-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
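A small inspection sketch for the stored rollouts, assuming the sequence columns come back as plain Python lists (the `datasets` default):

```python
from datasets import load_dataset

# Each row is one CartPole-v1 trajectory: per-step observations, actions and rewards.
ds = load_dataset("HumanCompatibleAI/ppo-CartPole-v1", split="train")

traj = ds[0]
episode_return = sum(traj["rews"])   # total reward collected in this trajectory
episode_length = len(traj["acts"])   # number of actions taken
print(f"return={episode_return:.1f}, steps={episode_length}")
```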
hippocrates/qa_train
2023-10-03T03:42:29.000Z
[ "region:us" ]
hippocrates
null
null
null
0
576
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: valid path: data/valid-* dataset_info: features: - name: id dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string - name: text dtype: string splits: - name: train num_bytes: 485067176 num_examples: 404269 - name: valid num_bytes: 4491759 num_examples: 5505 download_size: 241040216 dataset_size: 489558935 --- # Dataset Card for "qa_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
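A minimal sketch for iterating over the conversation turns, assuming each turn exposes the `from`/`value` keys listed in the schema above:

```python
from datasets import load_dataset

ds = load_dataset("hippocrates/qa_train", split="train")

sample = ds[0]
for turn in sample["conversations"]:            # list of {"from": ..., "value": ...}
    print(f'{turn["from"]}: {turn["value"][:80]}')

# Separate free-text field; its exact relationship to `conversations` is not documented here.
print(sample["text"][:120])
```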
stas/oscar-en-10k
2022-10-19T21:40:14.000Z
[ "language:en", "license:apache-2.0", "region:us" ]
stas
This is a small subset representing 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset - created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at https://huggingface.co/datasets/oscar.
@inproceedings{OrtizSuarezSagotRomary2019, author = {Pedro Javier {Ortiz Su{'a}rez} and Benoit Sagot and Laurent Romary}, title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures}, series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019}, editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{"u}ngen and Caroline Iliadi}, publisher = {Leibniz-Institut f{"u}r Deutsche Sprache}, address = {Mannheim}, doi = {10.14618/ids-pub-9021}, url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215}, pages = {9 -- 16}, year = {2019}, abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.}, language = {en} }
null
2
575
--- language: - en license: apache-2.0 --- # OSCAR EN 10K for testing This is a small subset representing the 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset - created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at https://huggingface.co/datasets/oscar. ``` $ python -c "from datasets import load_dataset; ds=load_dataset('stas/oscar-en-10k'); print(ds)" DatasetDict({ train: Dataset({ features: ['text'], num_rows: 10000 }) }) ``` * Records: 10,000 * compressed size: ~37MB * uncompressed size: 131MB To convert to jsonlines: ``` from datasets import load_dataset dataset_name = "stas/oscar-en-10k" name = dataset_name.split('/')[-1] ds = load_dataset(dataset_name, split='train') ds.to_json(f"{name}.jsonl", orient="records", lines=True) ``` To see how this subset was created, here is the [instructions file](https://huggingface.co/datasets/stas/oscar-en-10k/blob/main/process.txt).
HuggingFaceM4/FairFace
2022-12-09T00:14:46.000Z
[ "license:cc-by-4.0", "region:us" ]
HuggingFaceM4
FairFace is a face image dataset which is race balanced. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups.
@inproceedings{karkkainenfairface, title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation}, author={Karkkainen, Kimmo and Joo, Jungseock}, booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision}, year={2021}, pages={1548--1558} }
null
5
575
--- license: cc-by-4.0 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/joojs/fairface](https://github.com/joojs/fairface) - **Repository:** [https://github.com/joojs/fairface](https://github.com/joojs/fairface) - **Paper:** [https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary FairFace is a face image dataset which is race balanced. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Each instance has the following structure: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=448x448 at 0x7FCABA221FA0>, 'age': 6, 'gender': 0, 'race': 0, 'service_test': True } ``` ### Data Fields - `image`: The image - `age`: Age class among `["0-2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "more than 70"]` - `gender`: Gender class among `["Male", "Female"]` - `race`: Race class among `["East Asian", "Indian", "Black", "White", "Middle Eastern", "Latino_Hispanic", "Southeast Asian"]` - `service_test`: Not sure what this is. See [issue](https://github.com/joojs/fairface/issues/9). ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
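A minimal loading sketch, assuming `age`, `gender` and `race` are stored as `ClassLabel` features (as the integer values in the example above suggest); depending on the repository layout a config name may also be required as a second argument:

```python
from datasets import load_dataset

# A padding-specific config name may be needed here, depending on the repo layout.
ds = load_dataset("HuggingFaceM4/FairFace", split="train")

sample = ds[0]
feats = ds.features
print(sample["image"].size)                        # PIL image, (width, height)
print(feats["age"].int2str(sample["age"]),         # e.g. "50-59"
      feats["gender"].int2str(sample["gender"]),   # e.g. "Male"
      feats["race"].int2str(sample["race"]))       # e.g. "East Asian"
```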
C-MTEB/DuRetrieval
2023-07-28T09:48:49.000Z
[ "region:us" ]
C-MTEB
null
null
null
0
574
--- configs: - config_name: default data_files: - split: corpus path: data/corpus-* - split: queries path: data/queries-* dataset_info: features: - name: id dtype: string - name: text dtype: string splits: - name: corpus num_bytes: 91213303 num_examples: 100001 - name: queries num_bytes: 131354 num_examples: 2000 download_size: 64531170 dataset_size: 91344657 --- # Dataset Card for "DuRetrieval" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
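The default config exposes the passages and the queries as two separate splits, so both can be loaded independently:

```python
from datasets import load_dataset

corpus = load_dataset("C-MTEB/DuRetrieval", split="corpus")
queries = load_dataset("C-MTEB/DuRetrieval", split="queries")

print(len(corpus), "passages,", len(queries), "queries")
print(queries[0])   # {'id': ..., 'text': ...}
```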
shmuhammad/AfriSenti-twitter-sentiment
2023-09-03T09:59:15.000Z
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "multilinguality:monolingual", "multilinguality:multilingual", "size_categories:100K<n<1M", "language:amh", "language:ary", "language:ar", "language:arq", "language:hau", "language:ibo", "language:kin", "language:por", "language:pcm", "language:eng", "language:oro", "language:swa", "language:tir", "language:twi", "language:tso", "language:yor", "sentiment analysis, Twitter, tweets", "sentiment", "arxiv:2302.08956", "arxiv:2304.06845", "arxiv:2201.08277", "region:us" ]
shmuhammad
AfriSenti is the largest sentiment analysis benchmark dataset for under-represented African languages---covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba).
@inproceedings{muhammad-etal-2023-semeval, title="{S}em{E}val-2023 Task 12: Sentiment Analysis for African Languages ({A}fri{S}enti-{S}em{E}val)", author="Muhammad, Shamsuddeen Hassan and Yimam, Seid and Abdulmumin, Idris and Ahmad, Ibrahim Sa'id and Ousidhoum, Nedjma, and Ayele, Abinew, and Adelani, David and Ruder, Sebastian and Beloucif, Meriem and Bello, Shehu Bello and Mohammad, Saif M.", booktitle="Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month=jul, year="2023", }
null
3
572
--- task_categories: - text-classification task_ids: - sentiment-analysis - sentiment-classification - sentiment-scoring - semantic-similarity-classification - semantic-similarity-scoring tags: - sentiment analysis, Twitter, tweets - sentiment multilinguality: - monolingual - multilingual size_categories: - 100K<n<1M language: - amh - ary - ar - arq - hau - ibo - kin - por - pcm - eng - oro - swa - tir - twi - tso - yor pretty_name: AfriSenti --- # Dataset Card for AfriSenti Dataset <p align="center"> <img src="https://raw.githubusercontent.com/afrisenti-semeval/afrisent-semeval-2023/main/images/afrisenti-twitter.png", width="700" height="500"> -------------------------------------------------------------------------------- ## Dataset Description - **Homepage:** https://github.com/afrisenti-semeval/afrisent-semeval-2023 - **Repository:** [GitHub](https://github.com/afrisenti-semeval/afrisent-semeval-2023) - **Paper:** [AfriSenti: AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages](https://arxiv.org/pdf/2302.08956.pdf) - **Paper:** [SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)](https://arxiv.org/pdf/2304.06845.pdf) - **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://arxiv.org/pdf/2201.08277.pdf) - **Leaderboard:** N/A - **Point of Contact:** [shamsuddeen Muhammad](shamsuddeen2004@gmail.com) ### Dataset Summary AfriSenti is the largest sentiment analysis dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba). The datasets are used in the first Afrocentric SemEval shared task, SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval). AfriSenti allows the research community to build sentiment analysis systems for various African languages and enables the study of sentiment and contemporary language use in African languages. ### Supported Tasks and Leaderboards The AfriSenti can be used for a wide range of sentiment analysis tasks in African languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. This dataset is suitable for training and evaluating machine learning models for various NLP tasks related to sentiment analysis in African languages. [SemEval 2023 Task 12 : Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320) ### Languages 14 African languages (Amharic (amh), Algerian Arabic (ary), Hausa(hau), Igbo(ibo), Kinyarwanda(kin), Moroccan Arabic/Darija(arq), Mozambican Portuguese(por), Nigerian Pidgin (pcm), Oromo (oro), Swahili(swa), Tigrinya(tir), Twi(twi), Xitsonga(tso), and Yoruba(yor)). ## Dataset Structure ### Data Instances For each instance, there is a string for the tweet and a string for the label. See the AfriSenti [dataset viewer](https://huggingface.co/datasets/shmuhammad/AfriSenti/viewer/shmuhammad--AfriSenti/train) to explore more examples. ``` { "tweet": "string", "label": "string" } ``` ### Data Fields The data fields are: ``` tweet: a string feature. label: a classification label, with possible values including positive, negative and neutral. ``` ### Data Splits The AfriSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset. 
| | ama | arq | hau | ibo | ary | orm | pcm | pt-MZ | kin | swa | tir | tso | twi | yo | |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | train | 5,982 | 1,652 | 14,173 | 10,193 | 5,584| - | 5,122 | 3,064 | 3,303 | 1,811 | - | 805 | 3,482| 8,523 | | dev | 1,498 | 415 | 2,678 | 1,842 | 1,216 | 397 | 1,282 | 768 | 828 | 454 | 399 | 204 | 389 | 2,091 | | test | 2,000 | 959 | 5,304 | 3,683 | 2,962 | 2,097 | 4,155 | 3,663 | 1,027 | 749 | 2,001 | 255 | 950 | 4,516 | | total | 9,483 | 3,062 | 22,155 | 15,718 | 9,762 | 2,494 | 10,559 | 7,495 | 5,158 | 3,014 | 2,400 | 1,264 | 4,821 | 15,130 | ### How to use it ```python from datasets import load_dataset # you can load specific languages (e.g., Amharic). This download train, validation and test sets. ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh") # train set only ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "train") # test set only ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "test") # validation set only ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "validation") ``` ## Dataset Creation ### Curation Rationale AfriSenti Version 1.0.0 aimed to be used in the first Afrocentric SemEval shared task **[SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval)](https://afrisenti-semeval.github.io)**. ### Source Data Twitter #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information We anonymized the tweets by replacing all *@mentions* by *@user* and removed all URLs. ## Considerations for Using the Data ### Social Impact of Dataset The Afrisenti dataset has the potential to improve sentiment analysis for African languages, which is essential for understanding and analyzing the diverse perspectives of people in the African continent. This dataset can enable researchers and developers to create sentiment analysis models that are specific to African languages, which can be used to gain insights into the social, cultural, and political views of people in African countries. Furthermore, this dataset can help address the issue of underrepresentation of African languages in natural language processing, paving the way for more equitable and inclusive AI technologies. [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators AfriSenti is an extension of NaijaSenti, a dataset consisting of four Nigerian languages: Hausa, Yoruba, Igbo, and Nigerian-Pidgin. 
This dataset has been expanded to include other 10 African languages, and was curated with the help of the following: | Language | Dataset Curators | |---|---| | Algerian Arabic (arq) | Nedjma Ousidhoum, Meriem Beloucif | | Amharic (ama) | Abinew Ali Ayele, Seid Muhie Yimam | | Hausa (hau) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello | | Igbo (ibo) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello | | Kinyarwanda (kin)| Samuel Rutunda | | Moroccan Arabic/Darija (ary) | Oumaima Hourrane | | Mozambique Portuguese (pt-MZ) | Felermino Dário Mário António Ali | | Nigerian Pidgin (pcm) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello | | Oromo (orm) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay | | Swahili (swa) | Davis Davis | | Tigrinya (tir) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay | | Twi (twi) | Salomey Osei, Bernard Opoku, Steven Arthur | | Xithonga (tso) | Felermino Dário Mário António Ali | | Yoruba (yor) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello | ### Licensing Information This AfriSenti is licensed under a Creative Commons Attribution 4.0 International License ### Citation Information ``` @inproceedings{Muhammad2023AfriSentiAT, title={AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages}, author={Shamsuddeen Hassan Muhammad and Idris Abdulmumin and Abinew Ali Ayele and Nedjma Ousidhoum and David Ifeoluwa Adelani and Seid Muhie Yimam and Ibrahim Sa'id Ahmad and Meriem Beloucif and Saif Mohammad and Sebastian Ruder and Oumaima Hourrane and Pavel Brazdil and Felermino D'ario M'ario Ant'onio Ali and Davis Davis and Salomey Osei and Bello Shehu Bello and Falalu Ibrahim and Tajuddeen Gwadabe and Samuel Rutunda and Tadesse Belay and Wendimu Baye Messelle and Hailu Beshada Balcha and Sisay Adugna Chala and Hagos Tesfahun Gebremichael and Bernard Opoku and Steven Arthur}, year={2023} } ``` ``` @article{muhammad2023semeval, title={SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)}, author={Muhammad, Shamsuddeen Hassan and Abdulmumin, Idris and Yimam, Seid Muhie and Adelani, David Ifeoluwa and Ahmad, Ibrahim Sa'id and Ousidhoum, Nedjma and Ayele, Abinew and Mohammad, Saif M and Beloucif, Meriem}, journal={arXiv preprint arXiv:2304.06845}, year={2023} } ``` ### Contributions [More Information Needed]
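A small follow-up sketch for checking the class balance of one language subset (Hausa here); the label values are printed as stored, whether strings or class indices:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "hau", split="train")
print(Counter(ds["label"]))   # counts of positive / negative / neutral tweets
```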
alespalla/chatbot_instruction_prompts
2023-03-21T13:36:36.000Z
[ "task_categories:question-answering", "task_categories:conversational", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
alespalla
null
null
null
23
572
--- license: apache-2.0 dataset_info: features: - name: response dtype: string - name: prompt dtype: string splits: - name: test num_bytes: 24612503 num_examples: 64511 - name: train num_bytes: 98485829 num_examples: 258042 download_size: 78591384 dataset_size: 123098332 task_categories: - question-answering - conversational - text-generation language: - en size_categories: - 100K<n<1M --- # Dataset Card for Chatbot Instruction Prompts Datasets ### Dataset Summary This dataset has been generated from the following ones: - `tatsu-lab/alpaca` - `Dahoas/instruct-human-assistant-prompt` - `allenai/prosocial-dialog` The source datasets have been cleaned of spurious entries and artifacts. It contains ~500k prompts and expected responses. This dataset is intended for training instruct-type models.
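A minimal preprocessing sketch that joins each prompt with its expected response into a single training string; the concatenation template is a choice made here, not part of the dataset:

```python
from datasets import load_dataset

ds = load_dataset("alespalla/chatbot_instruction_prompts", split="train")

def to_text(example):
    # Guard against missing values and join prompt and response with a blank line.
    prompt = (example["prompt"] or "").strip()
    response = (example["response"] or "").strip()
    example["text"] = prompt + "\n\n" + response
    return example

ds = ds.map(to_text)
print(ds[0]["text"][:200])
```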
alzoubi36/policy_detection
2023-06-24T06:26:17.000Z
[ "region:us" ]
alzoubi36
null
null
null
0
570
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 8258295 num_examples: 773 - name: validation num_bytes: 1340647 num_examples: 137 - name: test num_bytes: 3702713 num_examples: 391 download_size: 6887636 dataset_size: 13301655 --- # Dataset for the policy detection task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
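A minimal loading sketch; the meaning of the integer labels (policy vs. non-policy text) is an assumption here and should be confirmed against the PrivacyGLUE repository:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("alzoubi36/policy_detection")
print(ds)                                # train / validation / test splits
print(Counter(ds["train"]["label"]))     # label distribution of the training split
```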
alexandrainst/audio_test_dataset
2023-05-01T14:28:58.000Z
[ "size_categories:n<1K", "language:da", "license:cc0-1.0", "region:us" ]
alexandrainst
null
null
null
0
569
--- dataset_info: features: - name: client_id dtype: string - name: path dtype: string - name: audio dtype: audio: sampling_rate: 48000 - name: sentence dtype: string - name: up_votes dtype: int64 - name: down_votes dtype: int64 - name: age dtype: string - name: gender dtype: string - name: accent dtype: string - name: locale dtype: string - name: segment dtype: string - name: variant dtype: string splits: - name: train num_bytes: 108571 num_examples: 5 - name: validation num_bytes: 116850 num_examples: 5 - name: test num_bytes: 78943 num_examples: 5 - name: other num_bytes: 101436 num_examples: 5 - name: invalidated num_bytes: 156925 num_examples: 5 download_size: 590682 dataset_size: 562725 license: cc0-1.0 language: - da size_categories: - n<1K --- # Dataset Card for "audio_test_dataset" This dataset consists of the first 5 samples of [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) and is only used for unit testing.
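A minimal sketch for reading one sample, assuming the `audio` column is a standard `Audio` feature that decodes to an array and sampling rate on access:

```python
from datasets import load_dataset

ds = load_dataset("alexandrainst/audio_test_dataset", split="train")

sample = ds[0]
audio = sample["audio"]                        # decoded on access
print(audio["sampling_rate"])                  # 48000
print(len(audio["array"]), "samples:", sample["sentence"])
```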
BeIR/fever-qrels
2022-10-23T06:08:11.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
0
568
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity 
Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. 
### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. ### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| 
[Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
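A minimal reader sketch for the generic layout described above (`corpus.jsonl`, `queries.jsonl` and a tab-separated qrels file with a header row); the official `beir` toolkit also ships its own loader for the same layout:

```python
import csv
import json

def load_beir_folder(corpus_path, queries_path, qrels_path):
    """Parse the three BEIR files into plain dictionaries."""
    corpus, queries, qrels = {}, {}, {}
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    with open(queries_path, encoding="utf-8") as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]
    with open(qrels_path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return corpus, queries, qrels
```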
pinecone/core-2020-05-10-deduplication
2022-10-28T03:01:02.000Z
[ "task_categories:other", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:unknown", "language:en", "license:mit", "deduplication", "region:us" ]
pinecone
null
null
null
1
566
--- annotations_creators: - unknown language_creators: - unknown language: - en license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - unknown task_categories: - other task_ids: - natural-language-inference - semantic-similarity-scoring - text-scoring pretty_name: CORE Deduplication of Scholarly Documents tags: - deduplication --- # Dataset Card for CORE Deduplication ## Dataset Description - **Homepage:** [https://core.ac.uk/about/research-outputs](https://core.ac.uk/about/research-outputs) - **Repository:** [https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip](https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip) - **Paper:** [Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings](http://oro.open.ac.uk/id/eprint/70519) - **Point of Contact:** [CORE Team](https://core.ac.uk/about#contact) - **Size of downloaded dataset files:** 204 MB ### Dataset Summary CORE 2020 Deduplication dataset (https://core.ac.uk/documentation/dataset) contains 100K scholarly documents labeled as duplicates/non-duplicates. ### Languages The dataset language is English (BCP-47 `en`) ### Citation Information ``` @inproceedings{dedup2020, title={Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings}, author={Gyawali, Bikash and Anastasiou, Lucas and Knoth, Petr}, booktitle = {Proceedings of 12th Language Resources and Evaluation Conference}, month = may, year = 2020, publisher = {France European Language Resources Association}, pages = {894-903} } ```
C-MTEB/DuRetrieval-qrels
2023-07-28T09:48:53.000Z
[ "region:us" ]
C-MTEB
null
null
null
0
566
--- configs: - config_name: default data_files: - split: dev path: data/dev-* dataset_info: features: - name: qid dtype: string - name: pid dtype: string - name: score dtype: int64 splits: - name: dev num_bytes: 787120 num_examples: 9839 download_size: 420443 dataset_size: 787120 --- # Dataset Card for "DuRetrieval-qrels" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
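These relevance judgements can be joined with the companion `C-MTEB/DuRetrieval` corpus and queries, assuming the `qid`/`pid` values line up with that dataset's `id` column:

```python
from datasets import load_dataset

qrels = load_dataset("C-MTEB/DuRetrieval-qrels", split="dev")
corpus = load_dataset("C-MTEB/DuRetrieval", split="corpus")
queries = load_dataset("C-MTEB/DuRetrieval", split="queries")

corpus_text = {row["id"]: row["text"] for row in corpus}
query_text = {row["id"]: row["text"] for row in queries}

first = qrels[0]
print(query_text[first["qid"]])
print(corpus_text[first["pid"]][:100], "-> relevance", first["score"])
```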
md_gender_bias
2023-06-01T14:59:54.000Z
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:found", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:extended|other-convai2", "source_datasets:extended|other-light", "source_datasets:extended|other-opensubtitles", "source_datasets:extended|other-yelp", "source_datasets:original", "language:en", "license:mit", "gender-bias", "arxiv:1811.00552", "region:us" ]
null
Machine learning models are trained to find patterns in data. NLP models can inadvertently learn socially undesirable patterns when training on gender biased text. In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information. In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites. Distinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers. We show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models, detecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.
@inproceedings{md_gender_bias, author = {Emily Dinan and Angela Fan and Ledell Wu and Jason Weston and Douwe Kiela and Adina Williams}, editor = {Bonnie Webber and Trevor Cohn and Yulan He and Yang Liu}, title = {Multi-Dimensional Gender Bias Classification}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, {EMNLP} 2020, Online, November 16-20, 2020}, pages = {314--331}, publisher = {Association for Computational Linguistics}, year = {2020}, url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/} }
null
13
565
--- annotations_creators: - crowdsourced - found - machine-generated language_creators: - crowdsourced - found language: - en license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M - 10K<n<100K - 1K<n<10K - 1M<n<10M - n<1K source_datasets: - extended|other-convai2 - extended|other-light - extended|other-opensubtitles - extended|other-yelp - original task_categories: - text-classification task_ids: [] paperswithcode_id: md-gender pretty_name: Multi-Dimensional Gender Bias Classification tags: - gender-bias dataset_info: - config_name: gendered_words features: - name: word_masculine dtype: string - name: word_feminine dtype: string splits: - name: train num_bytes: 4988 num_examples: 222 download_size: 232629010 dataset_size: 4988 - config_name: name_genders features: - name: name dtype: string - name: assigned_gender dtype: class_label: names: '0': M '1': F - name: count dtype: int32 splits: - name: yob1880 num_bytes: 43404 num_examples: 2000 - name: yob1881 num_bytes: 41944 num_examples: 1935 - name: yob1882 num_bytes: 46211 num_examples: 2127 - name: yob1883 num_bytes: 45221 num_examples: 2084 - name: yob1884 num_bytes: 49886 num_examples: 2297 - name: yob1885 num_bytes: 49810 num_examples: 2294 - name: yob1886 num_bytes: 51935 num_examples: 2392 - name: yob1887 num_bytes: 51458 num_examples: 2373 - name: yob1888 num_bytes: 57531 num_examples: 2651 - name: yob1889 num_bytes: 56177 num_examples: 2590 - name: yob1890 num_bytes: 58509 num_examples: 2695 - name: yob1891 num_bytes: 57767 num_examples: 2660 - name: yob1892 num_bytes: 63493 num_examples: 2921 - name: yob1893 num_bytes: 61525 num_examples: 2831 - name: yob1894 num_bytes: 63927 num_examples: 2941 - name: yob1895 num_bytes: 66346 num_examples: 3049 - name: yob1896 num_bytes: 67224 num_examples: 3091 - name: yob1897 num_bytes: 65886 num_examples: 3028 - name: yob1898 num_bytes: 71088 num_examples: 3264 - name: yob1899 num_bytes: 66225 num_examples: 3042 - name: yob1900 num_bytes: 81305 num_examples: 3730 - name: yob1901 num_bytes: 68723 num_examples: 3153 - name: yob1902 num_bytes: 73321 num_examples: 3362 - name: yob1903 num_bytes: 74019 num_examples: 3389 - name: yob1904 num_bytes: 77751 num_examples: 3560 - name: yob1905 num_bytes: 79802 num_examples: 3655 - name: yob1906 num_bytes: 79392 num_examples: 3633 - name: yob1907 num_bytes: 86342 num_examples: 3948 - name: yob1908 num_bytes: 87965 num_examples: 4018 - name: yob1909 num_bytes: 92591 num_examples: 4227 - name: yob1910 num_bytes: 101491 num_examples: 4629 - name: yob1911 num_bytes: 106787 num_examples: 4867 - name: yob1912 num_bytes: 139448 num_examples: 6351 - name: yob1913 num_bytes: 153110 num_examples: 6968 - name: yob1914 num_bytes: 175167 num_examples: 7965 - name: yob1915 num_bytes: 205921 num_examples: 9357 - name: yob1916 num_bytes: 213468 num_examples: 9696 - name: yob1917 num_bytes: 218446 num_examples: 9913 - name: yob1918 num_bytes: 229209 num_examples: 10398 - name: yob1919 num_bytes: 228656 num_examples: 10369 - name: yob1920 num_bytes: 237286 num_examples: 10756 - name: yob1921 num_bytes: 239616 num_examples: 10857 - name: yob1922 num_bytes: 237569 num_examples: 10756 - name: yob1923 num_bytes: 235046 num_examples: 10643 - name: yob1924 num_bytes: 240113 num_examples: 10869 - name: yob1925 num_bytes: 235098 num_examples: 10638 - name: yob1926 num_bytes: 230970 num_examples: 10458 - name: yob1927 num_bytes: 230004 num_examples: 10406 - name: yob1928 num_bytes: 224583 num_examples: 10159 - name: yob1929 num_bytes: 217057 num_examples: 9820 
- name: yob1930 num_bytes: 216352 num_examples: 9791 - name: yob1931 num_bytes: 205361 num_examples: 9298 - name: yob1932 num_bytes: 207268 num_examples: 9381 - name: yob1933 num_bytes: 199031 num_examples: 9013 - name: yob1934 num_bytes: 202758 num_examples: 9180 - name: yob1935 num_bytes: 199614 num_examples: 9037 - name: yob1936 num_bytes: 196379 num_examples: 8894 - name: yob1937 num_bytes: 197757 num_examples: 8946 - name: yob1938 num_bytes: 199603 num_examples: 9032 - name: yob1939 num_bytes: 196979 num_examples: 8918 - name: yob1940 num_bytes: 198141 num_examples: 8961 - name: yob1941 num_bytes: 200858 num_examples: 9085 - name: yob1942 num_bytes: 208363 num_examples: 9425 - name: yob1943 num_bytes: 207940 num_examples: 9408 - name: yob1944 num_bytes: 202227 num_examples: 9152 - name: yob1945 num_bytes: 199478 num_examples: 9025 - name: yob1946 num_bytes: 214614 num_examples: 9705 - name: yob1947 num_bytes: 229327 num_examples: 10371 - name: yob1948 num_bytes: 226615 num_examples: 10241 - name: yob1949 num_bytes: 227278 num_examples: 10269 - name: yob1950 num_bytes: 227946 num_examples: 10303 - name: yob1951 num_bytes: 231613 num_examples: 10462 - name: yob1952 num_bytes: 235483 num_examples: 10646 - name: yob1953 num_bytes: 239654 num_examples: 10837 - name: yob1954 num_bytes: 242389 num_examples: 10968 - name: yob1955 num_bytes: 245652 num_examples: 11115 - name: yob1956 num_bytes: 250674 num_examples: 11340 - name: yob1957 num_bytes: 255370 num_examples: 11564 - name: yob1958 num_bytes: 254520 num_examples: 11522 - name: yob1959 num_bytes: 260051 num_examples: 11767 - name: yob1960 num_bytes: 263474 num_examples: 11921 - name: yob1961 num_bytes: 269493 num_examples: 12182 - name: yob1962 num_bytes: 270244 num_examples: 12209 - name: yob1963 num_bytes: 271872 num_examples: 12282 - name: yob1964 num_bytes: 274590 num_examples: 12397 - name: yob1965 num_bytes: 264889 num_examples: 11952 - name: yob1966 num_bytes: 269321 num_examples: 12151 - name: yob1967 num_bytes: 274867 num_examples: 12397 - name: yob1968 num_bytes: 286774 num_examples: 12936 - name: yob1969 num_bytes: 304909 num_examples: 13749 - name: yob1970 num_bytes: 328047 num_examples: 14779 - name: yob1971 num_bytes: 339657 num_examples: 15295 - name: yob1972 num_bytes: 342321 num_examples: 15412 - name: yob1973 num_bytes: 348414 num_examples: 15682 - name: yob1974 num_bytes: 361188 num_examples: 16249 - name: yob1975 num_bytes: 376491 num_examples: 16944 - name: yob1976 num_bytes: 386565 num_examples: 17391 - name: yob1977 num_bytes: 403994 num_examples: 18175 - name: yob1978 num_bytes: 405430 num_examples: 18231 - name: yob1979 num_bytes: 423423 num_examples: 19039 - name: yob1980 num_bytes: 432317 num_examples: 19452 - name: yob1981 num_bytes: 432980 num_examples: 19475 - name: yob1982 num_bytes: 437986 num_examples: 19694 - name: yob1983 num_bytes: 431531 num_examples: 19407 - name: yob1984 num_bytes: 434085 num_examples: 19506 - name: yob1985 num_bytes: 447113 num_examples: 20085 - name: yob1986 num_bytes: 460315 num_examples: 20657 - name: yob1987 num_bytes: 477677 num_examples: 21406 - name: yob1988 num_bytes: 499347 num_examples: 22367 - name: yob1989 num_bytes: 531020 num_examples: 23775 - name: yob1990 num_bytes: 552114 num_examples: 24716 - name: yob1991 num_bytes: 560932 num_examples: 25109 - name: yob1992 num_bytes: 568151 num_examples: 25427 - name: yob1993 num_bytes: 579778 num_examples: 25966 - name: yob1994 num_bytes: 580223 num_examples: 25997 - name: yob1995 num_bytes: 581949 num_examples: 26080 - name: 
yob1996 num_bytes: 589131 num_examples: 26423 - name: yob1997 num_bytes: 601284 num_examples: 26970 - name: yob1998 num_bytes: 621587 num_examples: 27902 - name: yob1999 num_bytes: 635355 num_examples: 28552 - name: yob2000 num_bytes: 662398 num_examples: 29772 - name: yob2001 num_bytes: 673111 num_examples: 30274 - name: yob2002 num_bytes: 679392 num_examples: 30564 - name: yob2003 num_bytes: 692931 num_examples: 31185 - name: yob2004 num_bytes: 711776 num_examples: 32048 - name: yob2005 num_bytes: 723065 num_examples: 32549 - name: yob2006 num_bytes: 757620 num_examples: 34088 - name: yob2007 num_bytes: 776893 num_examples: 34961 - name: yob2008 num_bytes: 779403 num_examples: 35079 - name: yob2009 num_bytes: 771032 num_examples: 34709 - name: yob2010 num_bytes: 756717 num_examples: 34073 - name: yob2011 num_bytes: 752804 num_examples: 33908 - name: yob2012 num_bytes: 748915 num_examples: 33747 - name: yob2013 num_bytes: 738288 num_examples: 33282 - name: yob2014 num_bytes: 737219 num_examples: 33243 - name: yob2015 num_bytes: 734183 num_examples: 33121 - name: yob2016 num_bytes: 731291 num_examples: 33010 - name: yob2017 num_bytes: 721444 num_examples: 32590 - name: yob2018 num_bytes: 708657 num_examples: 32033 download_size: 232629010 dataset_size: 43393095 - config_name: new_data features: - name: text dtype: string - name: original dtype: string - name: labels list: class_label: names: '0': ABOUT:female '1': ABOUT:male '2': PARTNER:female '3': PARTNER:male '4': SELF:female '5': SELF:male - name: class_type dtype: class_label: names: '0': about '1': partner '2': self - name: turker_gender dtype: class_label: names: '0': man '1': woman '2': nonbinary '3': prefer not to say '4': no answer - name: episode_done dtype: bool_ - name: confidence dtype: string splits: - name: train num_bytes: 369753 num_examples: 2345 download_size: 232629010 dataset_size: 369753 - config_name: funpedia features: - name: text dtype: string - name: title dtype: string - name: persona dtype: string - name: gender dtype: class_label: names: '0': gender-neutral '1': female '2': male splits: - name: train num_bytes: 3225542 num_examples: 23897 - name: validation num_bytes: 402205 num_examples: 2984 - name: test num_bytes: 396417 num_examples: 2938 download_size: 232629010 dataset_size: 4024164 - config_name: image_chat features: - name: caption dtype: string - name: id dtype: string - name: male dtype: bool_ - name: female dtype: bool_ splits: - name: train num_bytes: 1061285 num_examples: 9997 - name: validation num_bytes: 35868670 num_examples: 338180 - name: test num_bytes: 530126 num_examples: 5000 download_size: 232629010 dataset_size: 37460081 - config_name: wizard features: - name: text dtype: string - name: chosen_topic dtype: string - name: gender dtype: class_label: names: '0': gender-neutral '1': female '2': male splits: - name: train num_bytes: 1158785 num_examples: 10449 - name: validation num_bytes: 57824 num_examples: 537 - name: test num_bytes: 53126 num_examples: 470 download_size: 232629010 dataset_size: 1269735 - config_name: convai2_inferred features: - name: text dtype: string - name: binary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male - name: binary_score dtype: float32 - name: ternary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male '2': ABOUT:gender-neutral - name: ternary_score dtype: float32 splits: - name: train num_bytes: 9853669 num_examples: 131438 - name: validation num_bytes: 608046 num_examples: 7801 - name: test num_bytes: 608046 
num_examples: 7801 download_size: 232629010 dataset_size: 11069761 - config_name: light_inferred features: - name: text dtype: string - name: binary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male - name: binary_score dtype: float32 - name: ternary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male '2': ABOUT:gender-neutral - name: ternary_score dtype: float32 splits: - name: train num_bytes: 10931355 num_examples: 106122 - name: validation num_bytes: 679692 num_examples: 6362 - name: test num_bytes: 1375745 num_examples: 12765 download_size: 232629010 dataset_size: 12986792 - config_name: opensubtitles_inferred features: - name: text dtype: string - name: binary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male - name: binary_score dtype: float32 - name: ternary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male '2': ABOUT:gender-neutral - name: ternary_score dtype: float32 splits: - name: train num_bytes: 27966476 num_examples: 351036 - name: validation num_bytes: 3363802 num_examples: 41957 - name: test num_bytes: 3830528 num_examples: 49108 download_size: 232629010 dataset_size: 35160806 - config_name: yelp_inferred features: - name: text dtype: string - name: binary_label dtype: class_label: names: '0': ABOUT:female '1': ABOUT:male - name: binary_score dtype: float32 splits: - name: train num_bytes: 260582945 num_examples: 2577862 - name: validation num_bytes: 324349 num_examples: 4492 - name: test num_bytes: 53887700 num_examples: 534460 download_size: 232629010 dataset_size: 314794994 config_names: - convai2_inferred - funpedia - gendered_words - image_chat - light_inferred - name_genders - new_data - opensubtitles_inferred - wizard - yelp_inferred --- # Dataset Card for Multi-Dimensional Gender Bias Classification ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ParlAI MD Gender Project Page](https://parl.ai/projects/md_gender/) - **Repository:** [ParlAI Github MD Gender Repository](https://github.com/facebookresearch/ParlAI/tree/master/projects/md_gender) - **Paper:** [Multi-Dimensional Gender Bias Classification](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf) - **Leaderboard:** [Needs More Information] - **Point of Contact:** edinan@fb.com ### Dataset Summary The Multi-Dimensional Gender Bias Classification dataset is based on a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the 
person being spoken to, and bias from the gender of the speaker. It contains seven large scale datasets automatically annotated for gender information (there are eight in the original project but the Wikipedia set is not included in the HuggingFace distribution), one crowdsourced evaluation benchmark of utterance-level gender rewrites, a list of gendered names, and a list of gendered words in English. ### Supported Tasks and Leaderboards - `text-classification-other-gender-bias`: The dataset can be used to train a model for classification of various kinds of gender bias. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. Dinan et al's (2020) Transformer model achieved an average of 67.13% accuracy in binary gender prediction across the ABOUT, TO, and AS tasks. See the paper for more results. ### Languages The data is in English as spoken on the various sites where the data was collected. The associated BCP-47 code `en`. ## Dataset Structure ### Data Instances The following are examples of data instances from the various configs in the dataset. See the [MD Gender Bias dataset viewer](https://huggingface.co/datasets/viewer/?dataset=md_gender_bias) to explore more examples. An example from the `new_data` config: ``` {'class_type': 0, 'confidence': 'certain', 'episode_done': True, 'labels': [1], 'original': 'She designed monumental Loviisa war cemetery in 1920', 'text': 'He designed monumental Lovissa War Cemetery in 1920.', 'turker_gender': 4} ``` An example from the `funpedia` config: ``` {'gender': 2, 'persona': 'Humorous', 'text': 'Max Landis is a comic book writer who wrote Chronicle, American Ultra, and Victor Frankestein.', 'title': 'Max Landis'} ``` An example from the `image_chat` config: ``` {'caption': '<start> a young girl is holding a pink umbrella in her hand <eos>', 'female': True, 'id': '2923e28b6f588aff2d469ab2cccfac57', 'male': False} ``` An example from the `wizard` config: ``` {'chosen_topic': 'Krav Maga', 'gender': 2, 'text': 'Hello. I hope you might enjoy or know something about Krav Maga?'} ``` An example from the `convai2_inferred` config (the other `_inferred` configs have the same fields, with the exception of `yelp_inferred`, which does not have the `ternary_label` or `ternary_score` fields): ``` {'binary_label': 1, 'binary_score': 0.6521999835968018, 'ternary_label': 2, 'ternary_score': 0.4496000111103058, 'text': "hi , how are you doing ? i'm getting ready to do some cheetah chasing to stay in shape ."} ``` An example from the `gendered_words` config: ``` {'word_feminine': 'countrywoman', 'word_masculine': 'countryman'} ``` An example from the `name_genders` config: ``` {'assigned_gender': 1, 'count': 7065, 'name': 'Mary'} ``` ### Data Fields The following are the features for each of the configs. For the `new_data` config: - `text`: the text to be classified - `original`: the text before reformulation - `labels`: a `list` of classification labels, with possible values including `ABOUT:female`, `ABOUT:male`, `PARTNER:female`, `PARTNER:male`, `SELF:female`. - `class_type`: a classification label, with possible values including `about` (0), `partner` (1), `self` (2). - `turker_gender`: a classification label, with possible values including `man` (0), `woman` (1), `nonbinary` (2), `prefer not to say` (3), `no answer` (4). - `episode_done`: a boolean indicating whether the conversation was completed. 
- `confidence`: a string indicating the confidence of the annotator in response to the instance label being ABOUT/TO/AS a man or woman. Possible values are `certain`, `pretty sure`, and `unsure`. For the `funpedia` config: - `text`: the text to be classified. - `gender`: a classification label, with possible values including `gender-neutral` (0), `female` (1), `male` (2), indicating the gender of the person being talked about. - `persona`: a string describing the persona assigned to the user when talking about the entity. - `title`: a string naming the entity the text is about. For the `image_chat` config: - `caption`: a string description of the contents of the original image. - `female`: a boolean indicating whether the gender of the person being talked about is female, if the image contains a person. - `id`: a string indicating the id of the image. - `male`: a boolean indicating whether the gender of the person being talked about is male, if the image contains a person. For the `wizard` config: - `text`: the text to be classified. - `chosen_topic`: a string indicating the topic of the text. - `gender`: a classification label, with possible values including `gender-neutral` (0), `female` (1), `male` (2), indicating the gender of the person being talked about. For the `_inferred` configurations (again, except the `yelp_inferred` split, which does not have the `ternary_label` or `ternary_score` fields): - `text`: the text to be classified. - `binary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`. - `binary_score`: a float indicating a score between 0 and 1. - `ternary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`, `ABOUT:gender-neutral`. - `ternary_score`: a float indicating a score between 0 and 1. For the word list: - `word_masculine`: a string indicating the masculine version of the word. - `word_feminine`: a string indicating the feminine version of the word. For the gendered name list: - `assigned_gender`: an integer, 1 for female, 0 for male. - `count`: an integer. - `name`: a string of the name. ### Data Splits The different parts of the data can be accessed through the different configurations: - `gendered_words`: A list of common nouns with a masculine and feminine variant. - `new_data`: Sentences reformulated and annotated along all three axes. - `funpedia`, `wizard`: Sentences from Funpedia and Wizard of Wikipedia annotated with ABOUT gender using entity gender information. - `image_chat`: Sentences about images annotated with ABOUT gender based on gender information from the entities in the image. - `convai2_inferred`, `light_inferred`, `opensubtitles_inferred`, `yelp_inferred`: Data from several source datasets with ABOUT annotations inferred by a trained classifier.
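For reference, each of these configurations can be loaded by name with the `datasets` library. The following is a minimal sketch (the configuration and field names are the ones listed above):

```python
from datasets import load_dataset

# Load two of the configurations listed above by name
new_data = load_dataset("md_gender_bias", "new_data")
funpedia = load_dataset("md_gender_bias", "funpedia")

print(new_data["train"][0]["text"])   # reformulated utterance with ABOUT/TO/AS labels
print(funpedia["train"][0]["title"])  # entity the Funpedia sentence is about
```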
| Split | M | F | N | U | Dimension | | ---------- | ---- | --- | ---- | ---- | --------- | | Image Chat | 39K | 15K | 154K | - | ABOUT | | Funpedia | 19K | 3K | 1K | - | ABOUT | | Wizard | 6K | 1K | 1K | - | ABOUT | | Yelp | 1M | 1M | - | - | AS | | ConvAI2 | 22K | 22K | - | 86K | AS | | ConvAI2 | 22K | 22K | - | 86K | TO | | OpenSub | 149K | 69K | - | 131K | AS | | OpenSub | 95K | 45K | - | 209K | TO | | LIGHT | 13K | 8K | - | 83K | AS | | LIGHT | 13K | 8K | - | 83K | TO | | ---------- | ---- | --- | ---- | ---- | --------- | | MDGender | 384 | 401 | - | - | ABOUT | | MDGender | 396 | 371 | - | - | AS | | MDGender | 411 | 382 | - | - | TO | ## Dataset Creation ### Curation Rationale The curators chose to annotate the existing corpora to make their classifiers reliable on all dimensions (ABOUT/TO/AS) and across multiple domains. However, none of the existing datasets cover all three dimensions at the same time, and many of the gender labels are noisy. To enable reliable evaluation, the curators collected a specialized corpus, found in the `new_data` config, which acts as a gold-labeled dataset for the masculine and feminine classes. ### Source Data #### Initial Data Collection and Normalization For the `new_data` config, the curators collected conversations between two speakers. Each speaker was provided with a persona description containing gender information, then tasked with adopting that persona and having a conversation. They were also provided with small sections of a biography from Wikipedia as the conversation topic in order to encourage crowdworkers to discuss ABOUT/TO/AS gender information. To ensure there is ABOUT/TO/AS gender information contained in each utterance, the curators asked a second set of annotators to rewrite each utterance to make it very clear that they are speaking ABOUT a man or a woman, speaking AS a man or a woman, and speaking TO a man or a woman. #### Who are the source language producers? This dataset was collected from crowdworkers from Amazon’s Mechanical Turk. All workers are English-speaking and located in the United States. | Reported Gender | Percent of Total | | ----------------- | ---------------- | | Man | 67.38 | | Woman | 18.34 | | Non-binary | 0.21 | | Prefer not to say | 14.07 | ### Annotations #### Annotation process For the `new_data` config, annotators were asked to label how confident they are that someone else could predict the given gender label, allowing for flexibility between explicit genderedness (like the use of "he" or "she") and statistical genderedness. Many of the annotated datasets contain cases where the ABOUT, AS, TO labels are not provided (i.e. unknown). In such instances, the curators apply one of two strategies. They apply the imputation strategy for data for which the ABOUT label is unknown using a classifier trained only on other Wikipedia data for which this label is provided. Data without a TO or AS label was assigned one at random, choosing between masculine and feminine with equal probability. Details of how each of the eight training datasets was annotated are as follows: 1. Wikipedia- to annotate ABOUT, the curators used a Wikipedia dump and extract biography pages using named entity recognition. They labeled pages with a gender based on the number of gendered pronouns (he vs. she vs. they) and labeled each paragraph in the page with this label for the ABOUT dimension. 2. 
Funpedia- Funpedia ([Miller et al., 2017](https://www.aclweb.org/anthology/D17-2014/)) contains rephrased Wikipedia sentences in a more conversational way. The curators retained only biography-related sentences and annotated them similarly to Wikipedia to give ABOUT labels. 3. Wizard of Wikipedia- [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) contains two people discussing a topic from Wikipedia. The curators retained only the conversations on Wikipedia biographies and annotated them to create ABOUT labels. 4. ImageChat- [ImageChat](https://klshuster.github.io/image_chat/) contains conversations discussing the contents of an image. The curators used the [Xu et al. image captioning system](https://github.com/AaronCCWong/Show-Attend-and-Tell) to identify the contents of an image and select gendered examples. 5. Yelp- The curators used the Yelp reviewer gender predictor developed by [Subramanian et al., 2018](https://arxiv.org/pdf/1811.00552.pdf) and retained reviews for which the classifier is very confident – this creates labels for the content creator of the review (AS). They impute ABOUT labels on this dataset using a classifier trained on the datasets 1-4. 6. ConvAI2- [ConvAI2](https://parl.ai/projects/convai2/) contains persona-based conversations. Many personas contain sentences such as 'I am a old woman' or 'My name is Bob' which allows annotators to annotate the gender of the speaker (AS) and addressee (TO) with some confidence. Many of the personas have unknown gender. The curators impute ABOUT labels on this dataset using a classifier trained on the datasets 1-4. 7. OpenSubtitles- [OpenSubtitles](http://www.opensubtitles.org/) contains subtitles for movies in different languages. The curators retained English subtitles that contain a character name or identity. They annotated the character’s gender using gender kinship terms such as daughter and a gender probability distribution calculated by counting the masculine and feminine baby names in the United States. Using the character’s gender, they produced labels for the AS dimension. They produced labels for the TO dimension by taking the gender of the next character to speak if there is another utterance in the conversation; otherwise, they took the gender of the last character to speak. They impute ABOUT labels on this dataset using a classifier trained on the datasets 1-4. 8. LIGHT- [LIGHT](https://parl.ai/projects/light/) contains persona-based conversations. Similarly to ConvAI2, annotators labeled the gender of each persona, giving labels for the speaker (AS) and speaking partner (TO). The curators impute ABOUT labels on this dataset using a classifier trained on the datasets 1-4. #### Who are the annotators? This dataset was annotated by crowdworkers from Amazon’s Mechanical Turk. All workers are English-speaking and located in the United States. ### Personal and Sensitive Information For privacy reasons, the curators did not associate the self-reported gender of the annotator with the labeled examples in the dataset and only report these statistics in aggregate. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for applications such as controlling for gender bias in generative models, detecting gender bias in arbitrary text, and classifying text as offensive based on its genderedness. ### Discussion of Biases Over two-thirds of annotators identified as men, which may introduce biases into the dataset.
Wikipedia is also well known to have gender bias in equity of biographical coverage and lexical bias in noun references to women (see the paper's appendix for citations). ### Other Known Limitations The limitations of the Multi-Dimensional Gender Bias Classification dataset have not yet been investigated, but the curators acknowledge that more work is required to address the intersectionality of gender identities, i.e., when gender non-additively interacts with other identity characteristics. The curators point out that negative gender stereotyping is known to be alternatively weakened or reinforced by the presence of social attributes like dialect, class, and race, and that these differences have been found to affect gender classification in image and sentence encoders. See the paper for references. ## Additional Information ### Dataset Curators Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams at Facebook AI Research. Angela Fan is also affiliated with Laboratoire Lorrain d’Informatique et Applications (LORIA). ### Licensing Information The Multi-Dimensional Gender Bias Classification dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT). ### Citation Information ``` @inproceedings{dinan-etal-2020-multi, title = "Multi-Dimensional Gender Bias Classification", author = "Dinan, Emily and Fan, Angela and Wu, Ledell and Weston, Jason and Kiela, Douwe and Williams, Adina", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.23", doi = "10.18653/v1/2020.emnlp-main.23", pages = "314--331", abstract = "Machine learning models are trained to find patterns in data. NLP models can inadvertently learn socially undesirable patterns when training on gender biased text. In this work, we propose a novel, general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information. In addition, we collect a new, crowdsourced evaluation benchmark. Distinguishing between gender bias along multiple dimensions enables us to train better and more fine-grained gender bias classifiers. We show our classifiers are valuable for a variety of applications, like controlling for gender bias in generative models, detecting gender bias in arbitrary text, and classifying text as offensive based on its genderedness.", } ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
plaguss/snli-small
2023-09-10T14:53:06.000Z
[ "size_categories:n<1K", "rlfh", "argilla", "human-feedback", "region:us" ]
plaguss
null
null
null
0
563
--- size_categories: n<1K tags: - rlfh - argilla - human-feedback --- # Dataset Card for snli-small This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Dataset Description - **Homepage:** https://argilla.io - **Repository:** https://github.com/argilla-io/argilla - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla. * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.FeedbackDataset.from_huggingface("plaguss/snli-small") ``` ### Load with `datasets` To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("plaguss/snli-small") ``` ### Supported Tasks and Leaderboards This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure). There are no leaderboards associated with this dataset. ### Languages [More Information Needed] ## Dataset Structure ### Data in Argilla The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**. The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions. | Field Name | Title | Type | Required | Markdown | | ---------- | ----- | ---- | -------- | -------- | | premise | Premise | TextField | True | False | | hypothesis | Hypothesis | TextField | True | False | The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | | label | The hypothesis entails the premise, neither entails nor contradict each other, or the hypothesis contradicts the premise? | LabelQuestion | True | N/A | ['0', '1', '2'] | **✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section. ### Data Instances An example of a dataset instance in Argilla looks as follows: ```json { "fields": { "hypothesis": "A person is training his horse for a competition.", "premise": "A person on a horse jumps over a broken down airplane." }, "metadata": {}, "responses": [ { "status": "submitted", "values": { "label": { "value": "1" } } } ], "suggestions": [] } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "external_id": null, "hypothesis": "A person is training his horse for a competition.", "label": [ { "status": "submitted", "user_id": null, "value": "1" } ], "label-suggestion": null, "label-suggestion-metadata": { "agent": null, "score": null, "type": null }, "metadata": "{}", "premise": "A person on a horse jumps over a broken down airplane." } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions. * **premise** is of type `TextField`. * **hypothesis** is of type `TextField`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * **label** is of type `LabelQuestion` with the following allowed values ['0', '1', '2']. * **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. * (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2']. Additionally, we also have one more field which is optional and is the following: * **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines Premise: A string used to determine the truthfulness of the hypothesis, Hypothesis: A string that may be true, false, or whose truth conditions may not be knowable when compared to the premise #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
embedding-data/sentence-compression
2022-08-02T03:02:47.000Z
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
embedding-data
null
null
null
10
562
--- license: mit language: - en paperswithcode_id: embedding-data/sentence-compression pretty_name: sentence-compression task_categories: - sentence-similarity - paraphrase-mining task_ids: - semantic-similarity-classification --- # Dataset Card for "sentence-compression" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/sentence-compression](https://github.com/google-research-datasets/sentence-compression) - **Repository:** [More Information Needed](https://github.com/google-research-datasets/sentence-compression) - **Paper:** [More Information Needed](https://www.aclweb.org/anthology/D13-1155/) - **Point of Contact:** [Katja Filippova](altun@google.com) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 14.2 MB ### Dataset Summary Dataset with pairs of equivalent sentences. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from using the dataset. Disclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. 
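As an illustration, a minimal training sketch with the [Sentence Transformers](https://huggingface.co/sentence-transformers) library might look as follows; the base model and hyperparameters are illustrative assumptions rather than recommendations from this card.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

dataset = load_dataset("embedding-data/sentence-compression", split="train")

# Each example is {"set": [sentence_1, sentence_2]}; wrap each pair as an InputExample
train_examples = [InputExample(texts=example["set"]) for example in dataset]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# MultipleNegativesRankingLoss is a common choice for positive pairs of equivalent sentences
model = SentenceTransformer("distilroberta-base")  # assumed base model; any encoder checkpoint works
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```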
### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/sentence-compression") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 180000 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/google-research-datasets/sentence-compression) #### Who are the source language producers? [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Annotations #### Annotation process [More Information Needed](https://github.com/google-research-datasets/sentence-compression) #### Who are the annotators? [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Personal and Sensitive Information [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Discussion of Biases [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Other Known Limitations [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Licensing Information [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Contributions
jigsaw_unintended_bias
2023-01-25T14:33:20.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0", "toxicity-prediction", "region:us" ]
null
A collection of comments from the defunct Civil Comments platform that have been annotated for their toxicity.
null
null
2
561
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc0-1.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-classification task_ids: - text-scoring pretty_name: Jigsaw Unintended Bias in Toxicity Classification tags: - toxicity-prediction dataset_info: features: - name: target dtype: float32 - name: comment_text dtype: string - name: severe_toxicity dtype: float32 - name: obscene dtype: float32 - name: identity_attack dtype: float32 - name: insult dtype: float32 - name: threat dtype: float32 - name: asian dtype: float32 - name: atheist dtype: float32 - name: bisexual dtype: float32 - name: black dtype: float32 - name: buddhist dtype: float32 - name: christian dtype: float32 - name: female dtype: float32 - name: heterosexual dtype: float32 - name: hindu dtype: float32 - name: homosexual_gay_or_lesbian dtype: float32 - name: intellectual_or_learning_disability dtype: float32 - name: jewish dtype: float32 - name: latino dtype: float32 - name: male dtype: float32 - name: muslim dtype: float32 - name: other_disability dtype: float32 - name: other_gender dtype: float32 - name: other_race_or_ethnicity dtype: float32 - name: other_religion dtype: float32 - name: other_sexual_orientation dtype: float32 - name: physical_disability dtype: float32 - name: psychiatric_or_mental_illness dtype: float32 - name: transgender dtype: float32 - name: white dtype: float32 - name: created_date dtype: string - name: publication_id dtype: int32 - name: parent_id dtype: float32 - name: article_id dtype: int32 - name: rating dtype: class_label: names: '0': rejected '1': approved - name: funny dtype: int32 - name: wow dtype: int32 - name: sad dtype: int32 - name: likes dtype: int32 - name: disagree dtype: int32 - name: sexual_explicit dtype: float32 - name: identity_annotator_count dtype: int32 - name: toxicity_annotator_count dtype: int32 splits: - name: train num_bytes: 914264058 num_examples: 1804874 - name: test_private_leaderboard num_bytes: 49188921 num_examples: 97320 - name: test_public_leaderboard num_bytes: 49442360 num_examples: 97320 download_size: 0 dataset_size: 1012895339 --- # Dataset Card for Jigsaw Unintended Bias in Toxicity Classification ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification - **Repository:** - **Paper:** - **Leaderboard:** 
https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard - **Point of Contact:** ### Dataset Summary The Jigsaw Unintended Bias in Toxicity Classification dataset comes from the eponymous Kaggle competition. Please see the original [data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data) description for more information. ### Supported Tasks and Leaderboards The main target for this dataset is toxicity prediction. Several toxicity subtypes are also available, so the dataset can be used for multi-attribute prediction. See the original [leaderboard](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard) for reference. ### Languages English ## Dataset Structure ### Data Instances A data point consists of an id, a comment, the main target, the other toxicity subtypes as well as identity attributes. For instance, here's the first train example. ``` { "article_id": 2006, "asian": NaN, "atheist": NaN, "bisexual": NaN, "black": NaN, "buddhist": NaN, "christian": NaN, "comment_text": "This is so cool. It's like, 'would you want your mother to read this??' Really great idea, well done!", "created_date": "2015-09-29 10:50:41.987077+00", "disagree": 0, "female": NaN, "funny": 0, "heterosexual": NaN, "hindu": NaN, "homosexual_gay_or_lesbian": NaN, "identity_annotator_count": 0, "identity_attack": 0.0, "insult": 0.0, "intellectual_or_learning_disability": NaN, "jewish": NaN, "latino": NaN, "likes": 0, "male": NaN, "muslim": NaN, "obscene": 0.0, "other_disability": NaN, "other_gender": NaN, "other_race_or_ethnicity": NaN, "other_religion": NaN, "other_sexual_orientation": NaN, "parent_id": NaN, "physical_disability": NaN, "psychiatric_or_mental_illness": NaN, "publication_id": 2, "rating": 0, "sad": 0, "severe_toxicity": 0.0, "sexual_explicit": 0.0, "target": 0.0, "threat": 0.0, "toxicity_annotator_count": 4, "transgender": NaN, "white": NaN, "wow": 0 } ``` ### Data Fields - `id`: id of the comment - `target`: value between 0(non-toxic) and 1(toxic) classifying the comment - `comment_text`: the text of the comment - `severe_toxicity`: value between 0(non-severe_toxic) and 1(severe_toxic) classifying the comment - `obscene`: value between 0(non-obscene) and 1(obscene) classifying the comment - `identity_attack`: value between 0(non-identity_hate) or 1(identity_hate) classifying the comment - `insult`: value between 0(non-insult) or 1(insult) classifying the comment - `threat`: value between 0(non-threat) and 1(threat) classifying the comment - For a subset of rows, columns containing whether the comment mentions the entities (they may contain NaNs): - `male` - `female` - `transgender` - `other_gender` - `heterosexual` - `homosexual_gay_or_lesbian` - `bisexual` - `other_sexual_orientation` - `christian` - `jewish` - `muslim` - `hindu` - `buddhist` - `atheist` - `other_religion` - `black` - `white` - `asian` - `latino` - `other_race_or_ethnicity` - `physical_disability` - `intellectual_or_learning_disability` - `psychiatric_or_mental_illness` - `other_disability` - Other metadata related to the source of the comment, such as creation date, publication id, number of likes, number of annotators, etc: - `created_date` - `publication_id` - `parent_id` - `article_id` - `rating` - `funny` - `wow` - `sad` - `likes` - `disagree` - `sexual_explicit` - `identity_annotator_count` - `toxicity_annotator_count` ### Data Splits There are four splits: - train: The train dataset as released during the competition. 
Contains labels and identity information for a subset of rows. - test: The test dataset as released during the competition. Does not contain labels or identity information. - test_private_expanded: The private leaderboard test set, including toxicity labels and subgroups. The competition target was a binarized version of the toxicity column, which can be easily reconstructed using a >=0.5 threshold (a minimal reconstruction sketch is given at the end of this card). - test_public_expanded: The public leaderboard test set, including toxicity labels and subgroups. The competition target was a binarized version of the toxicity column, which can be easily reconstructed using a >=0.5 threshold. ## Dataset Creation ### Curation Rationale The dataset was created to help in efforts to identify and curb instances of toxicity online. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This dataset is released under CC0, as is the underlying comment text. ### Citation Information No citation is available for this dataset, though you may link to the [Kaggle](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) competition. ### Contributions Thanks to [@iwontbecreative](https://github.com/iwontbecreative) for adding this dataset.
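As referenced above, here is a minimal sketch of reconstructing the binarized competition target from the continuous `target` column. It is hedged: the data must be downloaded manually from Kaggle, and the `data_dir` path below is a placeholder assumption rather than an actual location.

```python
from datasets import load_dataset

# The Kaggle files must be downloaded manually; point data_dir at the local folder (placeholder path).
ds = load_dataset("jigsaw_unintended_bias", data_dir="/path/to/jigsaw-unintended-bias-in-toxicity-classification")

# Reconstruct the binary competition target with the >= 0.5 threshold described in the Data Splits section.
binarized_train = ds["train"].map(lambda ex: {"binary_target": int(ex["target"] >= 0.5)})
print(binarized_train[0]["binary_target"])
```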
ehartford/wizard_vicuna_70k_unfiltered
2023-05-16T00:43:23.000Z
[ "license:apache-2.0", "region:us" ]
ehartford
null
null
null
97
561
--- license: apache-2.0 --- This dataset is the wizard_vicuna dataset (junelee/wizard_vicuna_70k) with alignment-related conversations removed; 34598 conversations remain. It was inspired by https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered. All credit to anon8231489123; I basically took his scripts and applied them to this new dataset.
BeIR/nq
2022-10-23T06:02:24.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
2
560
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity 
Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. 
### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. ### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| 
[Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
Yijia-Xiao/pii-wikidoc
2023-09-12T22:24:36.000Z
[ "region:us" ]
Yijia-Xiao
null
null
null
1
560
--- dataset_info: features: - name: output dtype: string - name: input dtype: string - name: instruction dtype: string - name: cleaned_output dtype: string splits: - name: train num_bytes: 19486545 num_examples: 10000 download_size: 10662804 dataset_size: 19486545 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "pii-wikidoc" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
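The card above only lists the schema. As a minimal sketch (assuming the `instruction`, `input`, and `cleaned_output` columns follow the usual instruction-tuning convention), the data can be loaded and assembled into prompts like this:

```python
from datasets import load_dataset

dataset = load_dataset("Yijia-Xiao/pii-wikidoc", split="train")
print(dataset)  # features: output, input, instruction, cleaned_output

example = dataset[0]
# Assumed prompt layout for instruction-tuning style records; adjust as needed.
prompt = f"{example['instruction']}\n\n{example['input']}"
target = example["cleaned_output"]
print(prompt[:200])
print(target[:200])
```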
IlyaGusev/gazeta
2023-02-12T00:01:45.000Z
[ "task_categories:summarization", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ru", "license:unknown", "arxiv:2006.11063", "region:us" ]
IlyaGusev
null
@InProceedings{10.1007/978-3-030-59082-6_9, author="Gusev, Ilya", editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia", title="Dataset for Automatic Summarization of Russian News", booktitle="Artificial Intelligence and Natural Language", year="2020", publisher="Springer International Publishing", address="Cham", pages="122--134", isbn="978-3-030-59082-6" }
null
13
559
--- annotations_creators: - expert-generated - found language_creators: - expert-generated - found task_categories: - summarization language: - ru size_categories: - 10K<n<100K license: - unknown multilinguality: - monolingual source_datasets: - original paperswithcode_id: gazeta --- # Dataset Card for Gazeta ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/IlyaGusev/gazeta - **Paper:** [Dataset for Automatic Summarization of Russian News](https://arxiv.org/abs/2006.11063) - **Leaderboard:** https://paperswithcode.com/sota/text-summarization-on-gazeta - **Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu) ### Dataset Summary Dataset for automatic summarization of Russian news. News and their summaries are from the Gazeta website. Summaries were parsed as the content of an HTML tag with “description” property. Additional selection of good summaries was performed. There are two versions of this dataset. ### Supported Tasks and Leaderboards Leaderboard on Papers With Code: [text-summarization-on-gazeta](https://paperswithcode.com/sota/text-summarization-on-gazeta). Please use the original [evaluation script](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py) with the same parameters. Example: ``` python3 evaluate.py --predicted-path predictions.txt --gold-path targets.txt --language ru --tokenize-after --lower ``` ### Languages The dataset is in Russian. ### Usage Loading version 1.0: ```python from datasets import load_dataset dataset = load_dataset('IlyaGusev/gazeta', revision="v1.0") ``` Loading version 2.0: ```python from datasets import load_dataset dataset = load_dataset('IlyaGusev/gazeta', revision="v2.0") ``` ### Other datasets Other Russian summarization datasets: * Russian part of [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum), parsed from www.bbc.com/russian, 77803 samples * Russian part of [MLSUM](https://huggingface.co/datasets/mlsum), parsed from www.mk.ru, 27063 samples ## Dataset Structure ### Data Instances For each instance, there is a string for the article, a string for the summary, and a string for the url. Additionally, a string for the title and a date are provided. 
``` { 'date': '2019-10-01 15:14:05', 'url': 'https://www.gazeta.ru/tech/2019/10/01/12698923/whatsapp_pls.shtml', 'title': 'На последнем издыхании: у кого отключится WhatsApp', 'summary': 'Мессенджер WhatsApp перестанет работать на ряде смартфонов — речь идет о гаджетах на базе операционных систем Android 2.3.7 и iOS 8, которые считаются устаревшими. В компании отмечают, что сервис на этих устройствах может отключиться в любой момент, поэтому будет целесообразно сменить устройство либо обновить ОС.', 'text': 'На официальном сайте мессенджера WhatsApp появилось сообщение о том, что с 1 февраля 2020 года сервис прекратит свою работу на некоторых устаревших смартфонах. Речь идет об устройствах, работающих на базе операционных систем Android 2.3.7 и iOS 8. При этом руководство WhatsApp предупреждает, что даже до обозначенного выше дедлайна функционал мессенджера на этих ОС может быть ограничен. «В связи с тем, что мы не планируем обновлять данные операционные системы, некоторые функции могут перестать работать на них в любое время», — говорится в пресс-релизе компании. Чтобы сохранить возможность пользоваться мессенджером без проблем, следует обновить версию прошивки или приобрести новое, более современное устройство. Сообщается, что на старых версиях операционных систем уже не получится завести новый аккаунт WhatsApp или верифицировать уже существующий. При этом в WhatsApp порекомендовали пользоваться устройствами с Android 4.0.3 и более поздними версиями, а также iOS 9 и более поздними версиями. Ранее стало известно о том, что с 31 декабря 2019 года WhatsApp прекращает поддержку устройств на базе операционной системы Windows Phone, от разработки которой пришлось отказаться. Впрочем, если верить статистике , эти меры вряд ли затронут большое количество пользователей. По состоянию на май 2019 года лишь 0,3% всех владельцев Android все еще пользуются ОС версий 2.3.3–2.3.7. Что же касается iOS, то версия под номером «10» или старше установлена на 5% устройств Apple. Как уже упоминалось выше, выпуск новых гаджетов на Windows Phone и вовсе прекращен ее создателем. В середине сентября экс-сотрудник АНБ Эдвард Сноуден раскритиковал WhatsApp за несовершенную систему защиты, порекомендовав политикам пользоваться другими средствами связи. Журналист французской радиостанции France Inter отметил, что президент Франции Эмманюэль Макрон для связи использует Telegram, а премьер-министр страны Эдуар Филипп — WhatsApp. Сноуден назвал такое решение «большой ошибкой», учитывая серьезные посты, которые занимают Макрон и Филипп. По словам Сноудена, эти сервисы безопаснее обычных SMS-сообщений, но все еще «чрезвычайно опасны, если вы премьер-министр». Больше всего претензий у информатора к WhatsApp, который стал частью активов корпорации Facebook в 2014 году. Эдвард Сноуден отметил, что после приобретения мессенджера Facebook «слой за слоем» снимает различные уровни защиты сервиса, чтобы при необходимости читать переписку своих пользователей. Ранее с критикой в адрес WhatsApp выступил и глава Telegram Павел Дуров. По словам предпринимателя, после устранения одной «дыры» в мессенджере тут же появляются новые. «Все выявленные проблемы позволяют вести слежку, выглядят и функционируют как бэкдоры», — заявил Дуров. При этом Дуров подчеркнул, что WhatsApp мог быть вынужден установить бэкдоры по указанию ФБР. В июне руководство WhatsApp заявило о том, что их сервис готов судиться с юзерами за нарушение правил пользования. 
В список нарушений входит использование программы «не в личных целях» и применение автоматической рассылки сообщений. По данным пресс-службы WhatsApp, уже сейчас обнаружены и заморожены «миллионы аккаунтов», пойманных на «злоупотреблении». «Наша платформа изначально создавалась, чтобы помогать людям общаться с их друзьями и любимыми... Используя информацию приложения, мы нашли и заблокировали миллионы злоупотребляющих аккаунтов от использования нашей сети», – заявили в WhatsApp. В частности, нарушение происходит, если компания публично заявляет о возможности использовать WhatsApp, нарушая при этом правила пользования мессенджером. «Ничто в этом объявлении не ограничивает право WhatsApp от применения своих условий с использованием технологий. Классификаторы на основе machine learning нам в этом помогают, и мы продолжим их использовать», – добавили в команде приложения.', } ``` Some dataset statistics are below: | Feature | Mean Token Count | Mean Sentence Count | |:---------|:---------|--------------------------------------------------| | Text | 767 | 37 | | Summary | 50 | 3 | ### Data Splits | Dataset Split | v1, Number of Instances in Split | v2, Number of Instances in Split | |:---------|:---------|:---------| | Train | 52,400 | 60,964 | | Validation | 5,265 | 6,369 | | Test | 5,770 | 6,793 | ## Dataset Creation ### Curation Rationale When the first version of the dataset was collected, there were no other datasets for Russian text summarization. Even now, it is one of the few datasets for this task. ### Source Data #### Initial Data Collection and Normalization * The source of data is the [Gazeta](https://www.gazeta.ru/) website. * Parsing scripts are [here](https://github.com/IlyaGusev/gazeta/tree/master/parser). * Cleaning and normalization Colab notebook is [here](https://colab.research.google.com/drive/1Ed_chVrslp_7vJNS3PmRC0_ZJrRQYv0C) #### Who are the source language producers? Texts and summaries were written by journalists at [Gazeta](https://www.gazeta.ru/). ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases It is a dataset from a single source. Thus it has a constrained text style and event perspective. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The data was collected by Ilya Gusev. ### Licensing Information Legal basis for distribution of the dataset: https://www.gazeta.ru/credits.shtml, paragraph 2.1.2. All rights belong to "www.gazeta.ru". Usage of this dataset is possible only for personal purposes on a non-commercial basis. ### Citation Information ```bibtex @InProceedings{10.1007/978-3-030-59082-6_9, author="Gusev, Ilya", editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia", title="Dataset for Automatic Summarization of Russian News", booktitle="Artificial Intelligence and Natural Language", year="2020", publisher="Springer International Publishing", address="Cham", pages="122--134", isbn="978-3-030-59082-6" } ``` ### Contributions [N/A]
result-kand2-sdxl-wuerst-karlo/04554133
2023-09-21T19:46:51.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
558
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 167 num_examples: 10 download_size: 1328 dataset_size: 167 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "04554133" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
embedding-data/QQP_triplets
2022-08-02T03:14:14.000Z
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
embedding-data
null
null
null
3
554
--- license: mit language: - en paperswithcode_id: embedding-data/QQP_triplets pretty_name: QQP_triplets task_categories: - sentence-similarity - paraphrase-mining task_ids: - semantic-similarity-classification --- # Dataset Card for "QQP_triplets" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - **Repository:** [More Information Needed](http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv) - **Paper:** [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - **Point of Contact:** [Kornél Csernai](https://www.quora.com/profile/Korn%C3%A9l-Csernai), [Nikhil Dandekar](https://www.quora.com/profile/Nikhil-Dandekar), [Shankar Iyer](https://www.quora.com/profile/Shankar-Iyer-5) ### Dataset Summary This dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative). Disclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example is a dictionary with three keys (query, pos, and neg) containing a list each (triplets). The first key contains an anchor sentence, the second a positive sentence, and the third a list of negative sentences. ``` {"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]} {"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]} ... {"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train them. 
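For orientation, here is a minimal, self-contained training sketch using the [Sentence Transformers](https://www.sbert.net) library with a triplet loss. The base model, subset size, and hyperparameters are placeholder assumptions, and only the first negative of each triplet is used.

```python
from torch.utils.data import DataLoader
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses

dataset = load_dataset("embedding-data/QQP_triplets", split="train")

train_examples = []
for row in dataset.select(range(1000)):  # small illustrative subset
    triplet = row["set"]  # {"query": ..., "pos": [...], "neg": [...]}
    anchor = triplet["query"]
    # The card shows all three keys as lists; unwrap defensively in case `query` is a list.
    if isinstance(anchor, list):
        anchor = anchor[0]
    train_examples.append(InputExample(texts=[anchor, triplet["pos"][0], triplet["neg"][0]]))

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder base model
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.TripletLoss(model=model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
model.save("qqp-triplets-model")
```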
### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/QQP_triplets") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 101762 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) #### Who are the source language producers? [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Annotations #### Annotation process [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) #### Who are the annotators? [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Personal and Sensitive Information [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Discussion of Biases [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Other Known Limitations Here are a few important things to keep in mind about this dataset: - Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. Therefore, we supplemented the dataset with negative examples. - One source of negative examples were pairs of “related questions” which, although pertaining to similar topics, are not truly semantically equivalent. - The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is, in part, because of the combination of sampling procedures and also due to some sanitization measures that have been applied to the final dataset (e.g., removal of questions with extremely long question details). - The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect. ## Additional Information ### Dataset Curators [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Licensing Information [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Citation Information [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Contributions Thanks to [Kornél Csernai](https://www.quora.com/profile/Korn%C3%A9l-Csernai), [Nikhil Dandekar](https://www.quora.com/profile/Nikhil-Dandekar), [Shankar Iyer](https://www.quora.com/profile/Shankar-Iyer-5) for adding this dataset.
cardiffnlp/tweet_topic_multi
2022-11-27T11:26:34.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2209.09824", "region:us" ]
cardiffnlp
[TweetTopic](https://arxiv.org/abs/2209.09824)
@inproceedings{dimosthenis-etal-2022-twitter, title = "{T}witter {T}opic {C}lassification", author = "Antypas, Dimosthenis and Ushio, Asahi and Camacho-Collados, Jose and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics" }
null
8
554
--- language: - en license: - other multilinguality: - monolingual size_categories: - 1k<10K task_categories: - text-classification task_ids: - sentiment-classification pretty_name: TweetTopicSingle --- # Dataset Card for "cardiffnlp/tweet_topic_multi" ## Dataset Description - **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824) - **Dataset:** Tweet Topic Dataset - **Domain:** Twitter - **Number of Class:** 19 ### Dataset Summary This is the official repository of TweetTopic (["Twitter Topic Classification , COLING main conference 2022"](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 19 labels. Each instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021. See [cardiffnlp/tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) for single label version of TweetTopic. The tweet collection used in TweetTopic is same as what used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7). The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too. ### Preprocessing We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`. For verified usernames, we replace its display name (or account name) with symbols `{@}`. For example, a tweet ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek ``` is transformed into the following text. ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}} ``` A simple function to format tweet follows below. 
```python import re from urlextract import URLExtract extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek""" target_format = format_tweet(target) print(target_format) 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}' ``` ### Data Splits | split | number of texts | description | |:------------------------|-----:|------:| | test_2020 | 573 | test dataset from September 2019 to August 2020 | | test_2021 | 1679 | test dataset from September 2020 to August 2021 | | train_2020 | 4585 | training dataset from September 2019 to August 2020 | | train_2021 | 1505 | training dataset from September 2020 to August 2021 | | train_all | 6090 | combined training dataset of `train_2020` and `train_2021` | | validation_2020 | 573 | validation dataset from September 2019 to August 2020 | | validation_2021 | 188 | validation dataset from September 2020 to August 2021 | | train_random | 4564 | randomly sampled training dataset with the same size as `train_2020` from `train_all` | | validation_random | 573 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` | | test_coling2022_random | 5536 | random split used in the COLING 2022 paper | | train_coling2022_random | 5731 | random split used in the COLING 2022 paper | | test_coling2022 | 5536 | temporal split used in the COLING 2022 paper | | train_coling2022 | 5731 | temporal split used in the COLING 2022 paper | For the temporal-shift setting, model should be trained on `train_2020` with `validation_2020` and evaluate on `test_2021`. In general, model would be trained on `train_all`, the most representative training set with `validation_2021` and evaluate on `test_2021`. **IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for temporal-shift, and `train_coling2022_random` and `test_coling2022_random` fir random split (the coling2022 split does not have validation set). 
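Assuming the split names in the table above can be passed directly to `datasets.load_dataset`, the recommended settings can be loaded as in the sketch below.

```python
from datasets import load_dataset

# Temporal-shift setting: train on 2020 data, validate on 2020, evaluate on 2021.
train = load_dataset("cardiffnlp/tweet_topic_multi", split="train_2020")
valid = load_dataset("cardiffnlp/tweet_topic_multi", split="validation_2020")
test = load_dataset("cardiffnlp/tweet_topic_multi", split="test_2021")

# Default setting: use the combined training data.
train_all = load_dataset("cardiffnlp/tweet_topic_multi", split="train_all")

print(len(train), len(valid), len(test), len(train_all))
```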
### Models | model | training data | F1 | F1 (macro) | Accuracy | |:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|---------:|-------------:|-----------:| | [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all) | all (2020 + 2021) | 0.763104 | 0.620257 | 0.536629 | | [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all) | all (2020 + 2021) | 0.751814 | 0.600782 | 0.531864 | | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all) | all (2020 + 2021) | 0.762513 | 0.603533 | 0.547945 | | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all) | all (2020 + 2021) | 0.759917 | 0.59901 | 0.536033 | | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all) | all (2020 + 2021) | 0.764767 | 0.618702 | 0.548541 | | [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020) | 2020 only | 0.732366 | 0.579456 | 0.493746 | | [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020) | 2020 only | 0.725229 | 0.561261 | 0.499107 | | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only | 0.73671 | 0.565624 | 0.513401 | | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020) | 2020 only | 0.729446 | 0.534799 | 0.50268 | | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020) | 2020 only | 0.731106 | 0.532141 | 0.509827 | Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). ## Dataset Structure ### Data Instances An example of `train` looks as follows. ```python { "date": "2021-03-07", "text": "The latest The Movie theater Daily! {{URL}} Thanks to {{USERNAME}} {{USERNAME}} {{USERNAME}} #lunchtimeread #amc1000", "id": "1368464923370676231", "label": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "label_name": ["film_tv_&_video"] } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweet_topic_multi/raw/main/dataset/label.multi.json). 
```python { "arts_&_culture": 0, "business_&_entrepreneurs": 1, "celebrity_&_pop_culture": 2, "diaries_&_daily_life": 3, "family": 4, "fashion_&_style": 5, "film_tv_&_video": 6, "fitness_&_health": 7, "food_&_dining": 8, "gaming": 9, "learning_&_educational": 10, "music": 11, "news_&_social_concern": 12, "other_hobbies": 13, "relationships": 14, "science_&_technology": 15, "sports": 16, "travel_&_adventure": 17, "youth_&_student_life": 18 } ``` ### Citation Information ``` @inproceedings{dimosthenis-etal-2022-twitter, title = "{T}witter {T}opic {C}lassification", author = "Antypas, Dimosthenis and Ushio, Asahi and Camacho-Collados, Jose and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics" } ```
llm-book/aio-retriever
2023-07-04T04:56:01.000Z
[ "size_categories:10K<n<100K", "language:ja", "region:us" ]
llm-book
null
null
null
0
554
---
language:
- ja
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: qid
    dtype: string
  - name: competition
    dtype: string
  - name: timestamp
    dtype: string
  - name: section
    dtype: string
  - name: number
    dtype: string
  - name: original_question
    dtype: string
  - name: original_answer
    dtype: string
  - name: original_additional_info
    dtype: string
  - name: question
    dtype: string
  - name: answers
    list: string
  - name: passages
    list:
    - name: passage_id
      dtype: int32
    - name: title
      dtype: string
    - name: text
      dtype: string
  - name: positive_passage_indices
    list: int32
  - name: negative_passage_indices
    list: int32
  splits:
  - name: train
    num_bytes: 1742881639
    num_examples: 22335
  - name: validation
    num_bytes: 78671502
    num_examples: 1000
  download_size: 665253451
  dataset_size: 1821553141
---

# Dataset Card for llm-book/aio-retriever

This is the QA dataset from the 「AI王」 (AI King) competition used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models), intended for training document retrieval models.
It uses the datasets released in the GitHub repository [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets).

## Licence

The copyright of some of the quiz questions included in this dataset belongs to the [abc/EQIDEN executive committee](https://abc-dive.com/portal/); permission has been obtained to use these questions in the book.

Some of the quiz questions included in this dataset were created on commission by [株式会社キュービック](http://www.qbik.co.jp/) and [株式会社カプリティオ](https://capriccio.tokyo/); these questions are provided under the [Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) license.

The Wikipedia content attached to this dataset as passages is distributed under the [Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) license and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html).

For details on the licensing of the quiz questions, see [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets).
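## Usage

A minimal usage sketch follows; the field semantics are taken from the schema above, the example index and slicing are arbitrary, and each entry of `passages` is assumed to be a dict with `passage_id`, `title`, and `text`.

```python
from datasets import load_dataset

dataset = load_dataset("llm-book/aio-retriever", split="validation")

example = dataset[0]
question = example["question"]
passages = example["passages"]

# positive/negative_passage_indices point into the `passages` list.
positives = [passages[i] for i in example["positive_passage_indices"]]
negatives = [passages[i] for i in example["negative_passage_indices"]]

print(question)
if positives:
    print("positive:", positives[0]["title"], positives[0]["text"][:100])
if negatives:
    print("negative:", negatives[0]["title"], negatives[0]["text"][:100])
```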
cyrilzhang/wiki-bpe-32k
2023-09-22T16:02:48.000Z
[ "region:us" ]
cyrilzhang
null
null
null
0
554
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: input_ids sequence: int32 splits: - name: train num_bytes: 21123228700 num_examples: 5152007 - name: test num_bytes: 212326700 num_examples: 51787 download_size: 10331372531 dataset_size: 21335555400 --- # Dataset Card for "wiki-bpe-32k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TheFusion21/PokemonCards
2022-11-21T18:28:25.000Z
[ "task_categories:text-to-image", "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "region:us" ]
TheFusion21
null
null
null
6
553
--- annotations_creators: - machine-generated language: - en language_creators: - found license: - cc-by-nc-4.0 multilinguality: - monolingual pretty_name: Pokemoncards size_categories: - 10K<n<100K source_datasets: - original tags: [] task_categories: - text-to-image - image-to-text task_ids: - image-captioning --- # Dataset Card for PokemonCards ### Languages All of the data is in English. ## Dataset Structure ### Data Instances ```json { "id": "pl1-1", "image_url": "https://images.pokemontcg.io/pl1/1_hires.png", "caption": "A Stage 2 Pokemon Card of type Lightning with the title ""Ampharos"" and 130 HP of rarity ""Rare Holo"" evolved from Flaaffy from the set Platinum and the flavor text: ""None"". It has the attack ""Gigavolt"" with the cost Lightning, Colorless, the energy cost 2 and the damage of 30+ with the description: ""Flip a coin. If heads, this attack does 30 damage plus 30 more damage. If tails, the Defending Pokemon is now Paralyzed."". It has the attack ""Reflect Energy"" with the cost Lightning, Colorless, Colorless, the energy cost 3 and the damage of 70 with the description: ""Move an Energy card attached to Ampharos to 1 of your Benched Pokemon."". It has the ability ""Damage Bind"" with the description: ""Each Pokemon that has any damage counters on it (both yours and your opponent's) can't use any Poke-Powers."". It has weakness against Fighting +30. It has resistance against Metal -20.", "name": "Ampharos", "hp": "130", "set_name": "Platinum" } ``` ### Data Fields - `id`: Unique ID of the pokemon card. - `image_url`: Static URL for downloading the image associated with the post. - `caption`: Caption generated for this card. - `name`: Name of the pokemon on that card. - `hp`: Health of the pokemon. - `set_name`: The name of the set the card is in. ### Data Splits All the data is contained in training set. The training set has nearly 13k instances. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions
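### Usage Example

A minimal sketch that loads the metadata and fetches one card image from its `image_url`. Network access is required, and `requests` and `Pillow` are assumed extra dependencies.

```python
from io import BytesIO

import requests
from PIL import Image
from datasets import load_dataset

dataset = load_dataset("TheFusion21/PokemonCards", split="train")

example = dataset[0]
print(example["name"], example["hp"], example["set_name"])
print(example["caption"][:200])

# Images are not stored in the dataset; they are fetched from `image_url`.
response = requests.get(example["image_url"], timeout=30)
image = Image.open(BytesIO(response.content))
print(image.size)
```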
gamy0315/mixatis_clean
2023-07-19T07:41:46.000Z
[ "region:us" ]
gamy0315
null
null
null
0
551
--- dataset_info: features: - name: token sequence: string - name: tag sequence: string - name: intent sequence: string splits: - name: train num_bytes: 6266669 num_examples: 13162 - name: validation num_bytes: 334004 num_examples: 759 - name: test num_bytes: 341726 num_examples: 828 download_size: 701391 dataset_size: 6942399 --- # Dataset Card for "mixatis_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sberquad
2023-08-29T12:35:15.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ru", "license:unknown", "arxiv:1912.09723", "region:us" ]
null
Sber Question Answering Dataset (SberQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text (a span) from the corresponding reading passage, or the question might be unanswerable. It is a Russian analogue of SQuAD and was originally presented at the Sberbank Data Science Journey 2017.
@article{Efimov_2020, title={SberQuAD – Russian Reading Comprehension Dataset: Description and Analysis}, ISBN={9783030582197}, ISSN={1611-3349}, url={http://dx.doi.org/10.1007/978-3-030-58219-7_1}, DOI={10.1007/978-3-030-58219-7_1}, journal={Experimental IR Meets Multilinguality, Multimodality, and Interaction}, publisher={Springer International Publishing}, author={Efimov, Pavel and Chertok, Andrey and Boytsov, Leonid and Braslavski, Pavel}, year={2020}, pages={3–15} }
null
10
550
--- annotations_creators: - crowdsourced language_creators: - found - crowdsourced language: - ru license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: sberquad pretty_name: SberQuAD dataset_info: config_name: sberquad features: - name: id dtype: int32 - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 71631541 num_examples: 45328 - name: validation num_bytes: 7972953 num_examples: 5036 - name: test num_bytes: 36397776 num_examples: 23936 download_size: 10491714 dataset_size: 116002270 --- # Dataset Card for sberquad ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/sberbank-ai/data-science-journey-2017 - **Paper:** https://arxiv.org/abs/1912.09723 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Sber Question Answering Dataset (SberQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. Russian original analogue presented in Sberbank Data Science Journey 2017. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Russian ## Dataset Structure ### Data Instances ``` { "context": "Первые упоминания о строении человеческого тела встречаются в Древнем Египте...", "id": 14754, "qas": [ { "id": 60544, "question": "Где встречаются первые упоминания о строении человеческого тела?", "answers": [{"answer_start": 60, "text": "в Древнем Египте"}], } ] } ``` ### Data Fields - id: a int32 feature - title: a string feature - context: a string feature - question: a string feature - answers: a dictionary feature containing: - text: a string feature - answer_start: a int32 feature ### Data Splits | name |train |validation|test | |----------|-----:|---------:|-----| |plain_text|45328 | 5036 |23936| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? 
[Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @InProceedings{sberquad, doi = {10.1007/978-3-030-58219-7_1}, author = {Pavel Efimov and Andrey Chertok and Leonid Boytsov and Pavel Braslavski}, title = {SberQuAD -- Russian Reading Comprehension Dataset: Description and Analysis}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction}, year = {2020}, publisher = {Springer International Publishing}, pages = {3--15} } ``` ### Contributions Thanks to [@alenusch](https://github.com/Alenush) for adding this dataset.
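### Usage

A minimal usage sketch illustrating the `answer_start` character offsets described in the Data Fields section:

```python
from datasets import load_dataset

dataset = load_dataset("sberquad", split="validation")

example = dataset[0]
context = example["context"]
answer_text = example["answers"]["text"][0]
answer_start = example["answers"]["answer_start"][0]

# `answer_start` is a character offset into `context`.
span = context[answer_start:answer_start + len(answer_text)]
print(example["question"])
print(answer_text, "==", span)
```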
SetFit/SentEval-CR
2022-06-21T09:14:00.000Z
[ "region:us" ]
SetFit
null
null
null
2
550
# SentEval Customer Reviews

This dataset is a port of the official [SentEval `CR` dataset](https://nlp.stanford.edu/~sidaw/home/projects:nbsvm) from [this paper](https://dl.acm.org/doi/10.1145/1014052.1014073). Since there are no official train/test splits of CR, the test split was created by randomly sampling 20% of the original data, and the train split is the remaining 80%. There is no validation split. This split was used in the STraTA paper.
israfelsr/mm_tiny_imagenet
2022-12-16T11:19:54.000Z
[ "region:us" ]
israfelsr
null
null
null
1
550
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': n01443537 '1': n01629819 '2': n01641577 '3': n01644900 '4': n01698640 '5': n01742172 '6': n01768244 '7': n01770393 '8': n01774384 '9': n01774750 '10': n01784675 '11': n01882714 '12': n01910747 '13': n01917289 '14': n01944390 '15': n01950731 '16': n01983481 '17': n01984695 '18': n02002724 '19': n02056570 '20': n02058221 '21': n02074367 '22': n02094433 '23': n02099601 '24': n02099712 '25': n02106662 '26': n02113799 '27': n02123045 '28': n02123394 '29': n02124075 '30': n02125311 '31': n02129165 '32': n02132136 '33': n02165456 '34': n02226429 '35': n02231487 '36': n02233338 '37': n02236044 '38': n02268443 '39': n02279972 '40': n02281406 '41': n02321529 '42': n02364673 '43': n02395406 '44': n02403003 '45': n02410509 '46': n02415577 '47': n02423022 '48': n02437312 '49': n02480495 '50': n02481823 '51': n02486410 '52': n02504458 '53': n02509815 '54': n02666347 '55': n02669723 '56': n02699494 '57': n02769748 '58': n02788148 '59': n02791270 '60': n02793495 '61': n02795169 '62': n02802426 '63': n02808440 '64': n02814533 '65': n02814860 '66': n02815834 '67': n02823428 '68': n02837789 '69': n02841315 '70': n02843684 '71': n02883205 '72': n02892201 '73': n02909870 '74': n02917067 '75': n02927161 '76': n02948072 '77': n02950826 '78': n02963159 '79': n02977058 '80': n02988304 '81': n03014705 '82': n03026506 '83': n03042490 '84': n03085013 '85': n03089624 '86': n03100240 '87': n03126707 '88': n03160309 '89': n03179701 '90': n03201208 '91': n03255030 '92': n03355925 '93': n03373237 '94': n03388043 '95': n03393912 '96': n03400231 '97': n03404251 '98': n03424325 '99': n03444034 '100': n03447447 '101': n03544143 '102': n03584254 '103': n03599486 '104': n03617480 '105': n03637318 '106': n03649909 '107': n03662601 '108': n03670208 '109': n03706229 '110': n03733131 '111': n03763968 '112': n03770439 '113': n03796401 '114': n03814639 '115': n03837869 '116': n03838899 '117': n03854065 '118': n03891332 '119': n03902125 '120': n03930313 '121': n03937543 '122': n03970156 '123': n03977966 '124': n03980874 '125': n03983396 '126': n03992509 '127': n04008634 '128': n04023962 '129': n04070727 '130': n04074963 '131': n04099969 '132': n04118538 '133': n04133789 '134': n04146614 '135': n04149813 '136': n04179913 '137': n04251144 '138': n04254777 '139': n04259630 '140': n04265275 '141': n04275548 '142': n04285008 '143': n04311004 '144': n04328186 '145': n04356056 '146': n04366367 '147': n04371430 '148': n04376876 '149': n04398044 '150': n04399382 '151': n04417672 '152': n04456115 '153': n04465666 '154': n04486054 '155': n04487081 '156': n04501370 '157': n04507155 '158': n04532106 '159': n04532670 '160': n04540053 '161': n04560804 '162': n04562935 '163': n04596742 '164': n04598010 '165': n06596364 '166': n07056680 '167': n07583066 '168': n07614500 '169': n07615774 '170': n07646821 '171': n07647870 '172': n07657664 '173': n07695742 '174': n07711569 '175': n07715103 '176': n07720875 '177': n07749582 '178': n07753592 '179': n07768694 '180': n07871810 '181': n07873807 '182': n07875152 '183': n07920052 '184': n07975909 '185': n08496334 '186': n08620881 '187': n08742578 '188': n09193705 '189': n09246464 '190': n09256479 '191': n09332890 '192': n09428293 '193': n12267677 '194': n12520864 '195': n13001041 '196': n13652335 '197': n13652994 '198': n13719102 '199': n14991210 - name: caption dtype: string - name: label_name dtype: string splits: - name: train num_bytes: 159978960.0 num_examples: 80000 - name: validation num_bytes: 40004701.0 
num_examples: 20000 download_size: 149059401 dataset_size: 199983661.0 --- # Dataset Card for "mm_tiny_imagenet" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
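A minimal loading sketch (field names follow the schema above; `datasets` decodes the `image` column to a PIL image):

```python
from datasets import load_dataset

dataset = load_dataset("israfelsr/mm_tiny_imagenet")
print(dataset)  # train (80,000 rows) and validation (20,000 rows)

example = dataset["train"][0]
image = example["image"]  # PIL image
print(image.size, example["label"], example["label_name"])
print(example["caption"])
```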
mc_taco
2023-01-25T14:40:09.000Z
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1909.03065", "region:us" ]
null
MC-TACO (Multiple Choice TemporAl COmmonsense) is a dataset of 13k question-answer pairs that require temporal commonsense comprehension. A system receives a sentence providing context information, a question designed to require temporal commonsense knowledge, and multiple candidate answers. More than one candidate answer can be plausible. The task is framed as binary classification: given the context, the question, and a candidate answer, the system must determine whether the candidate answer is plausible ("yes") or not ("no").
@inproceedings{ZKNR19, author = {Ben Zhou, Daniel Khashabi, Qiang Ning and Dan Roth}, title = {“Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding }, booktitle = {EMNLP}, year = {2019}, }
null
0
547
--- annotations_creators: - crowdsourced - machine-generated language_creators: - crowdsourced - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - multiple-choice-qa paperswithcode_id: mc-taco pretty_name: MC-TACO dataset_info: features: - name: sentence dtype: string - name: question dtype: string - name: answer dtype: string - name: label dtype: class_label: names: '0': 'no' '1': 'yes' - name: category dtype: class_label: names: '0': Event Duration '1': Event Ordering '2': Frequency '3': Typical Time '4': Stationarity config_name: plain_text splits: - name: test num_bytes: 1785553 num_examples: 9442 - name: validation num_bytes: 713023 num_examples: 3783 download_size: 2385137 dataset_size: 2498576 --- # Dataset Card for MC-TACO ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [MC-TACO](https://cogcomp.seas.upenn.edu/page/resource_view/125) - **Repository:** [Github repository](https://github.com/CogComp/MCTACO) - **Paper:** ["Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding](https://arxiv.org/abs/1909.03065) - **Leaderboard:** [AI2 Leaderboard](https://leaderboard.allenai.org/mctaco) ### Dataset Summary MC-TACO (Multiple Choice TemporAl COmmonsense) is a dataset of 13k question-answer pairs that require temporal commonsense comprehension. A system receives a sentence providing context information, a question designed to require temporal commonsense knowledge, and multiple candidate answers. More than one candidate answer can be plausible. ### Supported Tasks and Leaderboards The task is framed as binary classification: givent he context, the question, and the candidate answer, the task is to determine whether the candidate answer is plausible ("yes") or not ("no"). Performance is measured using two metrics: - Exact Match -- the average number of questions for which all the candidate answers are predicted correctly. - F1 -- is slightly more relaxed than EM. It measures the overlap between one’s predictions and the ground truth, by computing the geometric mean of Precision and Recall. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. 
## Dataset Structure ### Data Instances An example looks like this: ``` { "sentence": "However, more recently, it has been suggested that it may date from earlier than Abdalonymus' death.", "question": "How often did Abdalonymus die?", "answer": "every two years", "label": "no", "category": "Frequency", } ``` ### Data Fields All fields are strings: - `sentence`: a sentence (or context) on which the question is based - `question`: a question querying some temporal commonsense knowledge - `answer`: a potential answer to the question (all lowercased) - `label`: whether the answer is a correct. "yes" indicates the answer is correct/plaussible, "no" otherwise - `category`: the temporal category the question belongs to (among "Event Ordering", "Event Duration", "Frequency", "Stationarity", and "Typical Time") ### Data Splits The development set contains 561 questions and 3,783 candidate answers. The test set contains 1,332 questions and 9,442 candidate answers. From the original repository: *Note that there is no training data, and we provide the dev set as the only source of supervision. The rationale is that we believe a successful system has to bring in a huge amount of world knowledge and derive commonsense understandings prior to the current task evaluation. We therefore believe that it is not reasonable to expect a system to be trained solely on this data, and we think of the development data as only providing a definition of the task.* ## Dataset Creation ### Curation Rationale MC-TACO is used as a testbed to study the temporal commonsense understanding on NLP systems. ### Source Data From the original paper: *The context sentences are randomly selected from [MultiRC](https://www.aclweb.org/anthology/N18-1023/) (from each of its 9 domains). For each sentence, we use crowdsourcing on Amazon Mechanical Turk to collect questions and candidate answers (both correct and wrong ones).* #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations From the original paper: *To ensure the quality of the results, we limit the annotations to native speakers and use qualification tryouts.* #### Annotation process The crowdsourced construction/annotation of the dataset follows 4 steps described in Section 3 of the [paper](https://arxiv.org/abs/1909.03065): question generation, question verification, candidate answer expansion and answer labeling. #### Who are the annotators? Paid crowdsourcers. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknwon ### Citation Information ``` @inproceedings{ZKNR19, author = {Ben Zhou, Daniel Khashabi, Qiang Ning and Dan Roth}, title = {“Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding }, booktitle = {EMNLP}, year = {2019}, } ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
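### Usage

A minimal sketch of the question-level Exact Match metric described under Supported Tasks, scored here for a trivial always-"yes" baseline (the baseline is only a placeholder; substitute your model's predictions):

```python
from collections import defaultdict
from datasets import load_dataset

dataset = load_dataset("mc_taco", split="validation")

# Group candidate-answer labels by (sentence, question).
gold = defaultdict(list)
pred = defaultdict(list)
for ex in dataset:
    key = (ex["sentence"], ex["question"])
    gold[key].append(ex["label"])  # 1 = "yes" (plausible), 0 = "no"
    pred[key].append(1)            # placeholder: always predict "yes"

# Exact Match: fraction of questions whose candidates are all predicted correctly.
em = sum(gold[k] == pred[k] for k in gold) / len(gold)
print(f"Questions: {len(gold)}, EM of the always-yes baseline: {em:.3f}")
```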
climate_fever
2023-03-16T14:57:07.000Z
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_ids:text-scoring", "task_ids:fact-checking", "task_ids:fact-checking-retrieval", "task_ids:semantic-similarity-scoring", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia", "source_datasets:original", "language:en", "license:unknown", "arxiv:2012.00614", "region:us" ]
null
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute, or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate to multiple facets, as well as disputed claims for which both supporting and refuting evidence are present.
@misc{diggelmann2020climatefever, title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims}, author={Thomas Diggelmann and Jordan Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold}, year={2020}, eprint={2012.00614}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
8
545
--- annotations_creators: - crowdsourced - expert-generated language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|wikipedia - original task_categories: - text-classification - text-retrieval task_ids: - text-scoring - fact-checking - fact-checking-retrieval - semantic-similarity-scoring - multi-input-text-classification paperswithcode_id: climate-fever pretty_name: ClimateFever dataset_info: features: - name: claim_id dtype: string - name: claim dtype: string - name: claim_label dtype: class_label: names: '0': SUPPORTS '1': REFUTES '2': NOT_ENOUGH_INFO '3': DISPUTED - name: evidences list: - name: evidence_id dtype: string - name: evidence_label dtype: class_label: names: '0': SUPPORTS '1': REFUTES '2': NOT_ENOUGH_INFO - name: article dtype: string - name: evidence dtype: string - name: entropy dtype: float32 - name: votes list: string splits: - name: test num_bytes: 2429272 num_examples: 1535 download_size: 687133 dataset_size: 2429272 --- # Dataset Card for ClimateFever ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CLIMATE-FEVER homepage](http://climatefever.ai) - **Repository:** [CLIMATE-FEVER repository](https://github.com/tdiggelm/climate-fever-dataset) - **Paper:** [CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims](https://arxiv.org/abs/2012.00614) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Thomas Diggelmann](mailto:thomasdi@student.ethz.ch) ### Dataset Summary A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The text in the dataset is in English, as found in real-world claims about climate-change on the Internet. The associated BCP-47 code is `en`. 
## Dataset Structure ### Data Instances ``` { "claim_id": "0", "claim": "Global warming is driving polar bears toward extinction", "claim_label": 0, # "SUPPORTS" "evidences": [ { "evidence_id": "Extinction risk from global warming:170", "evidence_label": 2, # "NOT_ENOUGH_INFO" "article": "Extinction risk from global warming", "evidence": "\"Recent Research Shows Human Activity Driving Earth Towards Global Extinction Event\".", "entropy": 0.6931471805599453, "votes": [ "SUPPORTS", "NOT_ENOUGH_INFO", null, null, null ] }, { "evidence_id": "Global warming:14", "evidence_label": 0, # "SUPPORTS" "article": "Global warming", "evidence": "Environmental impacts include the extinction or relocation of many species as their ecosystems change, most immediately the environments of coral reefs, mountains, and the Arctic.", "entropy": 0.0, "votes": [ "SUPPORTS", "SUPPORTS", null, null, null ] }, { "evidence_id": "Global warming:178", "evidence_label": 2, # "NOT_ENOUGH_INFO" "article": "Global warming", "evidence": "Rising temperatures push bees to their physiological limits, and could cause the extinction of bee populations.", "entropy": 0.6931471805599453, "votes": [ "SUPPORTS", "NOT_ENOUGH_INFO", null, null, null ] }, { "evidence_id": "Habitat destruction:61", "evidence_label": 0, # "SUPPORTS" "article": "Habitat destruction", "evidence": "Rising global temperatures, caused by the greenhouse effect, contribute to habitat destruction, endangering various species, such as the polar bear.", "entropy": 0.0, "votes": [ "SUPPORTS", "SUPPORTS", null, null, null ] }, { "evidence_id": "Polar bear:1328", "evidence_label": 2, # "NOT_ENOUGH_INFO" "article": "Polar bear", "evidence": "\"Bear hunting caught in global warming debate\".", "entropy": 0.6931471805599453, "votes": [ "SUPPORTS", "NOT_ENOUGH_INFO", null, null, null ] } ] } ``` ### Data Fields - `claim_id`: a `string` feature, unique claim identifier. - `claim`: a `string` feature, claim text. - `claim_label`: a `int` feature, overall label assigned to claim (based on evidence majority vote). The label correspond to 0: "supports", 1: "refutes", 2: "not enough info" and 3: "disputed". - `evidences`: a list of evidences with fields: - `evidence_id`: a `string` feature, unique evidence identifier. - `evidence_label`: a `int` feature, micro-verdict label. The label correspond to 0: "supports", 1: "refutes" and 2: "not enough info". - `article`: a `string` feature, title of source article (Wikipedia page). - `evidence`: a `string` feature, evidence sentence. - `entropy`: a `float32` feature, entropy reflecting uncertainty of `evidence_label`. - `votes`: a `list` of `string` features, corresponding to individual votes. ### Data Splits This benchmark dataset currently consists of a single data split `test` that consists of 1,535 claims or 7,675 claim-evidence pairs. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ```bibtex @misc{diggelmann2020climatefever, title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims}, author={Thomas Diggelmann and Jordan Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold}, year={2020}, eprint={2012.00614}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@tdiggelm](https://github.com/tdiggelm) for adding this dataset.
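To make the relationship between the `votes` and `entropy` fields above concrete, here is a minimal sketch; it assumes the dataset is hosted on the Hub under the identifier `climate_fever`, and the `vote_entropy` helper is purely illustrative rather than part of any official tooling.

```python
import math
from collections import Counter

from datasets import load_dataset

# Assumption: the Hub identifier is `climate_fever`; only a `test` split exists.
dataset = load_dataset("climate_fever", split="test")
example = dataset[0]


def vote_entropy(votes):
    """Shannon entropy (natural log) of the non-empty annotator votes."""
    counts = Counter(v for v in votes if v)  # drop missing votes (None / empty)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())


# A unanimous 2-0 vote yields 0.0, while a 1-1 split yields ln(2) ~= 0.6931,
# matching the `entropy` values in the instance shown above.
for evidence in example["evidences"]:
    print(evidence["evidence_label"], evidence["entropy"], round(vote_entropy(evidence["votes"]), 4))
```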
result-kand2-sdxl-wuerst-karlo/103deca7
2023-09-22T05:57:32.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
545
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 210 num_examples: 10 download_size: 1367 dataset_size: 210 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "103deca7" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
taishi-i/awesome-japanese-nlp-classification-dataset
2023-09-09T11:09:04.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "language:ja", "license:other", "code", "region:us" ]
taishi-i
This dataset is used to determine whether a GitHub repository description relates to Japanese natural language processing (NLP). The labels are categorized as "Relevant (1)" and "Not Relevant (0)".
null
null
1
544
--- license: other task_categories: - text-classification language: - en - ja tags: - code size_categories: - 1K<n<10K --- # Dataset overview This dataset identifies whether a GitHub repository description pertains to Japanese natural language processing (NLP). The labels are categorized as **"Relevant (1)" and "Not Relevant (0)"**. Problem Setting: - Training Data: Repository descriptions from before 2022 - Test Data: Repository descriptions from 2023 - Objective: To detect repositories related to Japanese NLP Data Collection: - Positive Examples: Repositories listed in "[awesome-japanese-nlp-resources](https://github.com/taishi-i/awesome-japanese-nlp-resources)" as of September 9, 2023 - Negative Examples: Collected from the GitHub API and visually confirmed - Note: The annotation process is subjective Dataset Features: - Subjective labeling - Mixed English and Japanese descriptions - Imbalanced label distribution **These dataset features mirror real-world challenges and are ideal for evaluating models.** Based on GitHub's terms of service, please use this dataset for research purposes only. # How to use this dataset How to load in Python. ```python from datasets import load_dataset dataset = load_dataset("taishi-i/awesome-japanese-nlp-classification-dataset") ``` Details of the dataset. ```python DatasetDict({ train: Dataset({ features: ['label', 'text', 'url', 'created_at'], num_rows: 5496 }) validation: Dataset({ features: ['label', 'text', 'url', 'created_at'], num_rows: 400 }) test: Dataset({ features: ['label', 'text', 'url', 'created_at'], num_rows: 856 }) }) ``` # Baseline Baseline trained with [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased). Please use the baseline model from [here](https://huggingface.co/taishi-i/awesome-japanese-nlp-classification-model). The F1-score for label 1 is important for this task. 
| Label | Precision | Recall | F1-Score | Support | |--------------|-----------|--------|----------|---------| | 0 | 0.98 | 0.99 | 0.98 | 796 | | 1 | 0.79 | 0.70 | **0.74** | 60 | | Accuracy | | | 0.97 | 856 | | Macro Avg | 0.89 | 0.84 | 0.86 | 856 | | Weighted Avg | 0.96 | 0.97 | 0.97 | 856 | # Dataset stats Label distribution: | Dataset | Label 0 (%) | Label 1 (%) | |------------|-------------|-------------| | Train | 92.59 | 7.41 | | Validation | 95.75 | 4.25 | | Test | 92.99 | 7.01 | Relevant sample: ```python { "label": 1, "text": "JGLUE: Japanese General Language Understanding Evaluation for huggingface datasets", "url": "https://github.com/shunk031/huggingface-datasets_JGLUE", "created_at": "2023-02-25T04:33:03Z" } ``` Not Relevant sample: ```python { "label": 0, "text": "Official repository of FaceLit: Neural 3D Relightable Faces (CVPR 2023)", "url": "https://github.com/apple/ml-facelit", "created_at": "2023-04-03T22:47:29Z" } ``` Number of texts, average number of characters per text, minimum number of characters, maximum number of characters: | Dataset | Text Count | Average Length | Min Length | Max Length | |------------|------------|----------------|------------|------------| | Train | 5496 | 58.05 | 2.0 | 609.0 | | Validation | 400 | 54.33 | 8.0 | 226.0 | | Test | 856 | 58.85 | 3.0 | 341.0 | Proportion of text languages: | Dataset | English (%) | Japanese (%) | |------------|-------------|--------------| | Train | 89.34 | 10.66 | | Validation | 82.00 | 18.00 | | Test | 83.18 | 16.82 | Time range: | Dataset | Start Date | End Date | |---------|---------------------------|---------------------------| | Train | 2008-02-11 22:55:26+00:00 | 2022-09-30 19:45:09+00:00 | | Validation | 2022-10-01 06:02:56+00:00 | 2022-12-31 12:12:41+00:00 | | Test | 2023-01-01 06:15:03+00:00 | 2023-08-21 15:30:53+00:00 | # License We collect and publish this dataset under [GitHub Acceptable Use Policies - 7. Information Usage Restrictions](https://docs.github.com/en/site-policy/acceptable-use-policies/github-acceptable-use-policies#7-information-usage-restrictions) and [GitHub Terms of Service - H. API Terms](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#h-api-terms) for research purposes. This dataset should be used solely for research verification purposes. Adhering to GitHub's regulations is mandatory.
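As a sketch of how a per-label breakdown like the baseline table above could be reproduced, assuming scikit-learn is available; the `predictions` list here is a placeholder (a trivial majority-class guess) rather than output from the linked baseline model.

```python
from datasets import load_dataset
from sklearn.metrics import classification_report

dataset = load_dataset("taishi-i/awesome-japanese-nlp-classification-dataset")
test_labels = dataset["test"]["label"]

# Placeholder predictions: swap in the outputs of your own classifier,
# e.g. the baseline model linked above. Here: a trivial majority-class guess.
predictions = [0] * len(test_labels)

# Because the labels are imbalanced, accuracy alone is misleading;
# the per-label report exposes precision, recall and F1 for label 1 directly.
print(classification_report(test_labels, predictions, digits=2, zero_division=0))
```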
nielsr/docvqa_1200_examples_donut
2022-08-05T16:39:23.000Z
[ "region:us" ]
nielsr
null
null
null
1
542
Entry not found
result-kand2-sdxl-wuerst-karlo/54ae8a8b
2023-09-22T08:45:06.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
542
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 200 num_examples: 10 download_size: 1374 dataset_size: 200 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "54ae8a8b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carblacac/twitter-sentiment-analysis
2022-10-25T05:42:06.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "region:us" ]
carblacac
The Twitter Sentiment Analysis Dataset contains 1,578,627 classified tweets; each row is marked as 1 for positive sentiment and 0 for negative sentiment. The dataset is based on data from two sources: the University of Michigan Sentiment Analysis competition on Kaggle and the Twitter Sentiment Corpus by Niek Sanders. Finally, I randomly selected a subset of the tweets, applied a cleaning process, and divided them between the test and train subsets, keeping a balance between the number of positive and negative tweets within each of these subsets.
@InProceedings{thinknook:dataset, title = {Twitter Sentiment Analysis Training Corpus (Dataset)}, author={Ibrahim Naji}, year={2012} }
null
8
541
--- pretty_name: "TSATC: Twitter Sentiment Analysis Training Corpus" annotations_creators: - expert-generated language_creators: - other language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - feeling-classification paperswithcode_id: other configs: - None --- # Dataset Card for TSATC: Twitter Sentiment Analysis Training Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data) - **Repository:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data) - **Paper:** [TSATC: Twitter Sentiment Analysis Training Corpus](http://thinknook.com/twitter-sentiment-analysis-training-corpus-dataset-2012-09-22/) - **Point of Contact:** [Carlos Blanco](mailto:carblacac7@gmail.com) ### Dataset Summary TSATC: Twitter Sentiment Analysis Training Corpus. The original Twitter Sentiment Analysis Dataset contains 1,578,627 classified tweets; each row is marked as 1 for positive sentiment and 0 for negative sentiment. It can be downloaded from http://thinknook.com/wp-content/uploads/2012/09/Sentiment-Analysis-Dataset.zip. The dataset is based on data from two sources: the University of Michigan Sentiment Analysis competition on Kaggle and the Twitter Sentiment Corpus by Niek Sanders. This dataset was built by randomly selecting a subset of those tweets, applying a cleaning process, and dividing them between the test and train subsets, keeping a balance between the number of positive and negative tweets within each of these subsets. These two files can be found at https://github.com/cblancac/SentimentAnalysisBert/blob/main/data. Finally, the train subset was divided into two smaller subsets: train (80%) and validation (20%). The final dataset consists of these two new subsets plus the original test subset. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Below are two examples from the dataset: | | Text | Feeling | | :-- | :---------------------------- | :------ | | (1) | blaaah. I don't feel good aagain. | 0 | | (2) | My birthday is coming June 3. 
| 1 | ### Data Fields In the final dataset, all files are in the JSON format with two columns: | Column Name | Data | | :------------ | :-------------------------- | | text | A sentence (or tweet) | | feeling | The feeling of the sentence | Each feeling has two possible values: `0` indicates the sentence has a negative sentiment, while `1` indicates a positive feeling. ### Data Splits The number of examples and the proportion of sentiments are shown below: | Data | Train | Validation | Test | | :------------------ | ------: | ------------: | ----: | | Size | 119,988 | 29,997 | 61,998 | | Labeled positive | 60,019 | 14,947 | 31,029 | | Labeled negative | 59,969 | 15,050 | 30,969 | ## Dataset Creation ### Curation Rationale Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like *flights from New York to Florida* and *flights from Florida to New York*. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Mentioned above. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Citation Information ``` @InProceedings{paws2019naacl, title = {{TSATC: Twitter Sentiment Analysis Training Corpus}}, author = {Ibrahim Naji}, booktitle = {thinknook}, year = {2012} } ``` ### Contributions Thanks to myself [@carblacac](https://github.com/cblancac/) for adding this transformed dataset from the original one.
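A minimal loading sketch for this dataset, assuming the default configuration resolves without an explicit config name (the `None` config listed in the metadata may need to be passed explicitly on some `datasets` versions); it simply checks the roughly balanced label distribution reported above.

```python
from collections import Counter

from datasets import load_dataset

# Assumption: no explicit config name is required; if loading fails, pass the
# "None" config listed in the card metadata as the second argument.
dataset = load_dataset("carblacac/twitter-sentiment-analysis")

# Each example has a `text` and a binary `feeling` (0 = negative, 1 = positive).
print(dataset["train"][0])

# Sanity-check the near 50/50 label balance reported in the Data Splits table.
for split in ("train", "validation", "test"):
    print(split, Counter(dataset[split]["feeling"]))
```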
ecthr_cases
2022-11-18T19:59:57.000Z
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "rationale-extraction", "legal-judgment-prediction", "arxiv:2103.13084", "region:us" ]
null
The ECtHR Cases dataset is designed for experimentation of neural judgment prediction and rationale extraction considering ECtHR cases.
@InProceedings{chalkidis-et-al-2021-ecthr, title = "Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases", author = "Chalkidis, Ilias and Fergadiotis, Manos and Tsarapatsanis, Dimitrios and Aletras, Nikolaos and Androutsopoulos, Ion and Malakasiotis, Prodromos", booktitle = "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics", year = "2021", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics" }
null
8
540
--- annotations_creators: - expert-generated - found language_creators: - found language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification paperswithcode_id: ecthr pretty_name: European Court of Human Rights Cases tags: - rationale-extraction - legal-judgment-prediction dataset_info: - config_name: alleged-violation-prediction features: - name: facts sequence: string - name: labels sequence: string - name: silver_rationales sequence: int32 - name: gold_rationales sequence: int32 splits: - name: train num_bytes: 89835266 num_examples: 9000 - name: test num_bytes: 11917598 num_examples: 1000 - name: validation num_bytes: 11015998 num_examples: 1000 download_size: 32815448 dataset_size: 112768862 - config_name: violation-prediction features: - name: facts sequence: string - name: labels sequence: string - name: silver_rationales sequence: int32 splits: - name: train num_bytes: 89776410 num_examples: 9000 - name: test num_bytes: 11909314 num_examples: 1000 - name: validation num_bytes: 11009350 num_examples: 1000 download_size: 32815448 dataset_size: 112695074 --- # Dataset Card for the ECtHR cases dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://archive.org/details/ECtHR-NAACL2021/ - **Repository:** http://archive.org/details/ECtHR-NAACL2021/ - **Paper:** https://arxiv.org/abs/2103.13084 - **Leaderboard:** TBA - **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr) ### Dataset Summary The European Court of Human Rights (ECtHR) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights (ECHR) by European states. The Convention is available at https://www.echr.coe.int/Documents/Convention_ENG.pdf. The court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants (*plaintiffs*). Our dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. The new dataset includes the following: **Facts:** Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs. We hereafter call these paragraphs *facts* for simplicity. Note that the facts are presented in chronological order. 
Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against *defendant* states. **Allegedly violated articles:** Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018). In ECtHR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention. The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total. The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. In our experiments, however, the models are not aware of the allegations. They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions. Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction. **Violated articles:** The court decides which allegedly violated articles have indeed been violated. These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019). However, they are not used in the experiments of this work. **Silver allegation rationales:** Each decision of the ECtHR includes references to facts of the case (e.g., *"See paragraphs 2 and 4."*) and case law (e.g., *"See Draci vs. Russia (2010)"*.). We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions. These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations. **Gold allegation rationales:** A legal expert with experience in ECtHR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations). In other words, each identified fact justifies (hints) one or more alleged violations. ### Supported Tasks and Leaderboards The dataset supports: **Alleged violation prediction** (`alleged-violation-prediction`): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s). Consult Chalkidis et al. (2021), for details. **Violation prediction** (`violation-prediction`): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the allegedly violated ECHR articles were violated, as decided (ruled) by the ECtHR court. Consult Chalkidis et al. (2019), for details. **Rationale extraction:** A model can also predict the facts of the case that most prominently support its decision with respect to a classification task. Silver rationales can be used for both classification tasks, while gold rationales are only focused on the *alleged violation prediction* task. ### Languages All documents are written in English. ## Dataset Structure ### Data Instances This example was too long and was cropped: ```json { "facts": [ "8. 
In 1991 Mr Dusan Slobodnik, a research worker in the field of literature, ...", "9. On 20 July 1992 the newspaper Telegraf published a poem by the applicant.", "10. The poem was later published in another newspaper.", "...", "39. The City Court further dismissed the claim in respect of non-pecuniary damage ... ", "40. The City Court ordered the plaintiff to pay SKK 56,780 to the applicant ...", "41. On 25 November 1998 the Supreme Court upheld the decision of the Bratislava City Court ..." ], "labels": ["14", "10", "9", "36"], "silver_rationales": [27], "gold_rationales": [] } ``` ### Data Fields `facts`: (**List[str]**) The paragraphs (facts) of the case.\ `labels`: (**List[str]**) The ECHR articles under discussion (*Allegedly violated articles*); or the allegedly violated ECHR articles that found to be violated by the court (judges).\ `silver_rationales`: (**List[int]**) Indices of the paragraphs (facts) that are present in the court's assessment.\ `gold_rationales`: (**List[int]**) Indices of the paragraphs (facts) that support alleged violations, according to a legal expert. ### Data Splits | Split | No of ECtHR cases | Silver rationales ratio | Avg. allegations / case | | ------------------- | ------------------------------------ | --- | --- | | Train | 9,000 | 24% | 1.8 | |Development | 1,000 | 30% | 1.7 | |Test | 1,000 | 31% | 1.7 | ## Dataset Creation ### Curation Rationale The dataset was curated by Chalkidis et al. (2021).\ The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School). ### Source Data #### Initial Data Collection and Normalization The original data are available at HUDOC database (https://hudoc.echr.coe.int/eng) in an unprocessed format. The data were downloaded and all information was extracted from the HTML files and several JSON metadata files. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process * The original documents are available in HTML format at HUDOC database (https://hudoc.echr.coe.int/eng), except the gold rationales. The metadata are provided by additional JSON files, produced by REST services. * The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School). #### Who are the annotators? Dimitris Tsarapatsanis (Lecturer, York Law School). ### Personal and Sensitive Information Privacy statement / Protection of personal data from HUDOC (https://www.echr.coe.int/Pages/home.aspx?p=privacy) ``` The Court complies with the Council of Europe's policy on protection of personal data, in so far as this is consistent with exercising its functions under the European Convention on Human Rights. The Council of Europe is committed to respect for private life. Its policy on protection of personal data is founded on the Secretary General’s Regulation of 17 April 1989 outlining a data protection system for personal data files in the Council of Europe. Most pages of the Council of Europe site require no personal information except in certain cases to allow requests for on-line services to be met. In such cases, the information is processed in accordance with the Confidentiality policy described below. ``` ## Considerations for Using the Data ### Social Impact of Dataset The publication of this dataset complies with the ECtHR data policy (https://www.echr.coe.int/Pages/home.aspx?p=privacy). 
By no means do we aim to build a 'robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016, Dressel et al., 2018) of irresponsible deployment. Instead, we aim to support fair and explainable AI-assisted judicial decision making and empirical legal studies. For example, automated services can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case. They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020). They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz et al., 2012). Also, consider ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts. ### Discussion of Biases Consider the work of Chalkidis et al. (2019) for the identification of demographic bias by models. ### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ilias Chalkidis and Dimitris Tsarapatsanis ### Licensing Information **CC BY-NC-SA (Creative Commons / Attribution-NonCommercial-ShareAlike)** Read more: https://creativecommons.org/licenses/by-nc-sa/4.0/. ### Citation Information *Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos and Prodromos Malakasiotis. Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases.* *Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021). Mexico City, Mexico. 2021.* ``` @InProceedings{chalkidis-et-al-2021-ecthr, title = "Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases", author = "Chalkidis, Ilias and Fergadiotis, Manos and Tsarapatsanis, Dimitrios and Aletras, Nikolaos and Androutsopoulos, Ion and Malakasiotis, Prodromos", booktitle = "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics", year = "2021", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics" } ``` *Ilias Chalkidis, Ion Androutsopoulos and Nikolaos Aletras. Neural Legal Judgment Prediction in English.* *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019.* ``` @InProceedings{chalkidis-etal-2019-neural, title = "Neural Legal Judgment Prediction in {E}nglish", author = "Chalkidis, Ilias and Androutsopoulos, Ion and Aletras, Nikolaos", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1424", doi = "10.18653/v1/P19-1424", pages = "4317--4323" } ``` ### Contributions Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
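A minimal sketch of working with the multi-label setup described above, assuming the Hub identifier `ecthr_cases`, the config names listed in the metadata, and that scikit-learn is available for binarizing the article labels.

```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

# Config names follow the dataset_info above; the other config is
# "violation-prediction".
dataset = load_dataset("ecthr_cases", "alleged-violation-prediction")
example = dataset["train"][0]

# Silver rationales are indices into the list of fact paragraphs.
rationale_paragraphs = [example["facts"][i] for i in example["silver_rationales"]]
print(example["labels"], rationale_paragraphs[:1])

# Turn the allegedly violated articles into a binary indicator matrix,
# the usual target format for multi-label classification.
mlb = MultiLabelBinarizer()
y_train = mlb.fit_transform(dataset["train"]["labels"])
print(y_train.shape, list(mlb.classes_)[:5])
```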
emrgnt-cmplxty/sciphi-textbooks-are-all-you-need
2023-09-30T21:57:36.000Z
[ "license:llama2", "region:us" ]
emrgnt-cmplxty
null
null
null
82
540
--- dataset_info: features: - name: formatted_prompt dtype: string - name: completion dtype: string - name: first_task dtype: string - name: second_task dtype: string - name: last_task dtype: string - name: notes dtype: string - name: title dtype: string - name: model dtype: string - name: temperature dtype: float64 splits: - name: train num_bytes: 3175095649 num_examples: 681845 download_size: 1280399468 dataset_size: 3175095649 configs: - config_name: default data_files: - split: train path: data/train-* license: llama2 --- ## Textbooks are all you need : A SciPhi Collection Dataset Description With LLMs, we can create a fully open-source Library of Alexandria. As a first attempt, we have generated 650,000 unique textbook samples from a diverse span of courses, kindergarten through graduate school. These are open source samples, which likely fall under the Llama-2 license. They were generated using the [SciPhi](https://github.com/emrgnt-cmplxty/SciPhi) repository. All samples were created with [TheBloke/Phind-CodeLlama-34B-v2-AWQ](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-AWQ). Lastly, I owe thanks to Runpod for the generous GPU time to make this possible.
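A minimal loading sketch, assuming the Hub identifier `emrgnt-cmplxty/sciphi-textbooks-are-all-you-need` and the feature names listed in the metadata above (note the download is roughly 1.3 GB).

```python
from datasets import load_dataset

# Large download (~1.3 GB compressed); streaming=True can be used instead.
dataset = load_dataset("emrgnt-cmplxty/sciphi-textbooks-are-all-you-need", split="train")

# Each row pairs the formatted generation prompt with a textbook-style
# completion, plus the course/task fields and generation metadata listed above.
sample = dataset[0]
print(sample["title"])
print(sample["formatted_prompt"][:200])
print(sample["completion"][:200])
```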