| column | dtype | min | max |
| --- | --- | --- | --- |
| id | string (length) | 2 | 115 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| author | string (length) | 2 | 42 |
| description | string (length) | 0 | 6.67k |
| citation | string (length) | 0 | 10.7k |
| likes | int64 | 0 | 3.66k |
| downloads | int64 | 0 | 8.89M |
| created | timestamp[us] | | |
| card | string (length) | 11 | 977k |
| card_len | int64 | 11 | 977k |
| embeddings | list | | |
wmt20_mlqe_task2
2023-06-01T14:59:47.000Z
[ "task_categories:translation", "task_categories:text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia", "language:de", "langu...
null
This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task. Task 2 evaluates the application of QE for post-editing purposes. It consists of predicting: - A/ Word-level tags. This is done both on the source side (to detect which words caused errors) and the target side (to detect mistranslated or missing words). - A1/ Each token is tagged as either `OK` or `BAD`. Additionally, each gap between two words is tagged as `BAD` if one or more missing words should have been there, and `OK` otherwise. Note that the number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence. - A2/ Tokens are tagged as `OK` if they were correctly translated, and `BAD` otherwise. Gaps are not tagged. - B/ Sentence-level HTER scores. HTER (Human Translation Error Rate) is the ratio between the number of edits (insertions/deletions/replacements) needed and the reference translation length.
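The 2*N+1 tag layout described above (a gap tag before, between, and after every token) can be sketched with a small illustrative helper; this is a reconstruction of the scheme for clarity, not code distributed with the shared task:

```python
def interleave_tags(token_tags, gap_tags):
    """Merge token tags with gap tags: gap, token, gap, token, ..., gap.

    For N tokens there are N + 1 gaps, so the merged sequence has
    2 * N + 1 tags, as stated in the task description.
    """
    assert len(gap_tags) == len(token_tags) + 1
    merged = [gap_tags[0]]
    for token_tag, gap_tag in zip(token_tags, gap_tags[1:]):
        merged.append(token_tag)
        merged.append(gap_tag)
    return merged

# 3 tokens, 4 gaps -> 7 tags in total
tags = interleave_tags(["OK", "BAD", "OK"], ["OK", "OK", "BAD", "OK"])
print(len(tags))  # 7 == 2 * 3 + 1
```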
Not available.
2
135
2022-03-02T23:29:22
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - de - en - zh license: - unknown multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - extended|wikipedia task_categories: - translation - text-classification task_ids: [] pretty_name: WMT20 - MultiLingual Quality Estimation (MLQE) Task2 tags: - translation-quality-estimation dataset_info: - config_name: en-de features: - name: translation dtype: translation: languages: - en - de - name: src_tags sequence: class_label: names: '0': BAD '1': OK - name: mt_tags sequence: class_label: names: '0': BAD '1': OK - name: pe dtype: string - name: hter dtype: float32 - name: alignments sequence: sequence: int32 splits: - name: train num_bytes: 6463930 num_examples: 7000 - name: test num_bytes: 425582 num_examples: 1000 - name: validation num_bytes: 927616 num_examples: 1000 download_size: 1377020 dataset_size: 7817128 - config_name: en-zh features: - name: translation dtype: translation: languages: - en - zh - name: src_tags sequence: class_label: names: '0': BAD '1': OK - name: mt_tags sequence: class_label: names: '0': BAD '1': OK - name: pe dtype: string - name: hter dtype: float32 - name: alignments sequence: sequence: int32 splits: - name: train num_bytes: 6786898 num_examples: 7000 - name: test num_bytes: 443740 num_examples: 1000 - name: validation num_bytes: 954710 num_examples: 1000 download_size: 1564953 dataset_size: 8185348 config_names: - en-de - en-zh --- # Dataset Card for WMT20 - MultiLingual Quality Estimation (MLQE) Task2 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation 
Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WMT20 Quality Estimation Shared Task](http://www.statmt.org/wmt20/quality-estimation-task.html) - **Repository:** [Github repository](https://github.com/deep-spin/deep-spin.github.io/tree/master/docs/data/wmt2020_qe) - **Paper:** *Not available* ### Dataset Summary From the homepage: *This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.* *Task 2 evaluates the application of QE for post-editing purposes. It consists of predicting:* - ***Word-level tags.*** *This is done both on the source side (to detect which words caused errors) and the target side (to detect mistranslated or missing words).* - ***Target.*** *Each token is tagged as either `OK` or `BAD`. 
Additionally, each gap between two words is tagged as `BAD` if one or more missing words should have been there, and `OK` otherwise. Note that the number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence.* - ***Source.*** *Tokens are tagged as `OK` if they were correctly translated, and `BAD` otherwise. Gaps are not tagged.* - ***Sentence-level HTER scores.*** *HTER (Human Translation Error Rate) is the ratio between the number of edits (insertions/deletions/replacements) needed and the reference translation length.* ### Supported Tasks and Leaderboards From the homepage: *For sentence-level QE, submissions are evaluated in terms of the Pearson's correlation metric for the sentence-level HTER prediction. For word-level QE, they will be evaluated in terms of MCC ([Matthews correlation coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)). These are the [official evaluation scripts](https://github.com/sheffieldnlp/qe-eval-scripts).* ### Languages There are two language pairs in this dataset: - English - German (`en` - `de`) - English - Chinese (`en` - `zh`) ## Dataset Structure ### Data Instances An example looks like this: ``` { 'translation': { 'en': 'favorite fish include cod , salmon , winter flounder , haddock , striped bass , pollock , hake , bluefish , and , in southern New England , Tautog .', 'de': 'zu den Lieblingsfischen gehören Kabeljau , Lachs , Winterflounder , Schellfisch , gestreifter Bass , Pollock , Seehecht , Rotbarsch und in Südengland Tautog .', } 'src_tags': [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1], 'mt_tags': [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1], 'pe': 'zu den Lieblingsfischen zählen Kabeljau , Lachs , Winterflunder , Schellfisch , Wolfsbarsch , Pollock , Seehecht , Bluefish und im Süden Neuenglands Tautog .', 'hter': 
0.3199999928474426, 'alignments': [[2, 0], [2, 1], [2, 3], [3, 2], [3, 4], [4, 5], [5, 6], [6, 5], [7, 6], [8, 6], [9, 7], [10, 8], [10, 10], [11, 9], [12, 12], [13, 13], [14, 11], [15, 12], [15, 15], [16, 14], [17, 17], [19, 16], [20, 16], [21, 20], [22, 18], [23, 19], [23, 21], [24, 22], [25, 21], [26, 22], [27, 22], [28, 23], [29, 24]], } ``` ### Data Fields - `translation`: Dictionary with pairs (source, target). - `src_lg`: sequence of text in the source language. - `tgt_lg`: sequence of text in the target language. - `src_tags`: source word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test). - `mt_tags`: target word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test). - `pe`: post-edited version of the NMT output. `""` if N/A (only for test). - `hter`: human translation error rate. `-10_000` if N/A (only for test). - `alignments`: Word alignments. List of pairs of integers. ### Data Splits There are 2 configurations in this dataset (one for each available language pair). Each configuration is composed of 7K examples for training, 1K for validation and 1K for (blind) test. ## Dataset Creation ### Curation Rationale The original text is extracted from Wikipedia. From the homepage: *Word-level labels have been obtained by using the alignments provided by the [TER](http://www.cs.umd.edu/~snover/tercom/) tool (settings: tokenised, case insensitive, exact matching only, disabling shifts by using the `-d 0` option) between machine translations and their post-edited versions. Shifts (word order errors) were not annotated as such (but rather as deletions + insertions) to avoid introducing noise in the annotation.* *HTER values are obtained deterministically from word-level tags. 
However, when computing HTER, we allow shifts in TER.* *The baseline system is a neural predictor-estimator approach implemented in [OpenKiwi](https://github.com/Unbabel/OpenKiwi) ([Kepler at al., 2019](https://arxiv.org/abs/1902.08646)), where the predictor model will be trained on the parallel data used to train the NMT model.* ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` Not available. ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
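As an illustration of the HTER definition quoted in the card (edit count divided by reference translation length); the edit count and reference length below are hypothetical numbers, chosen only to land near the `hter` value of the example instance:

```python
def hter(num_edits: int, reference_length: int) -> float:
    """Human Translation Error Rate: edits (insertions / deletions /
    replacements) divided by the reference translation length."""
    if reference_length <= 0:
        raise ValueError("reference length must be positive")
    return num_edits / reference_length

# hypothetical: 8 edits against a 25-token post-edited reference
print(hter(8, 25))  # 0.32
```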
9,278
[ [ -0.03265380859375, -0.035675048828125, 0.0238800048828125, 0.01035308837890625, -0.019866943359375, 0.0004425048828125, -0.022308349609375, -0.026092529296875, 0.02630615234375, 0.023223876953125, -0.045867919921875, -0.07159423828125, -0.04742431640625, 0.0...
NbAiLab/norwegian_parliament
2022-07-01T19:51:13.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:no", "license:cc-by-4.0", "region:us" ]
NbAiLab
The Norwegian Parliament Speeches dataset is a collection of text passages, from 1998 to 2016, delivered at the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti.
@InProceedings{--, author = {---}, title = {---}, booktitle = {---}, year = 2021, address = "---" }
1
135
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - no license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification --- # Dataset Card for Norwegian Parliament Speeches ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** N/A - **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/) - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** - ### Dataset Summary The Norwegian Parliament Speeches dataset is a collection of text passages, from 1998 to 2016, delivered at the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in Norwegian. ## Dataset Structure ### Data Instances Example of one instance in the dataset. 
```{'label': 0, 'text': 'Verre er det med slagsmålene .'}``` ### Data Fields - `id`: index of the example - `text`: Text of a speech - `date`: Date (`YYYY-MM-DD`) the speech was produced - `label`: Political party the speaker was associated with at the time - 0 = Fremskrittspartiet - 1 = Sosialistisk Venstreparti ### Data Splits The dataset is split into a `train`, `validation`, and `test` split with the following sizes: | | Train | Valid | Test | | ----- | ------ | ----- | ----- | | Number of examples | 3600 | 1200 | 1200 | The dataset is balanced on political party. ## Dataset Creation This dataset is based on publicly available information from the Norwegian Parliament (Storting) and was created by the National Library of Norway AI-Lab to benchmark their language models. ## Additional Information ### Licensing Information This work is licensed under a Creative Commons Attribution 4.0 International License. ### Citation Information ```latex @misc{--, title={--}, author={--}, year={2021}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
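A minimal sketch of mapping the integer `label` back to a party name, using the 0/1 assignment documented in the card (the dict itself is illustrative, not part of the dataset):

```python
# Label mapping as documented in the Data Fields section above (illustrative).
PARTY = {0: "Fremskrittspartiet", 1: "Sosialistisk Venstreparti"}

# The example instance from the card.
example = {"label": 0, "text": "Verre er det med slagsmålene ."}
print(PARTY[example["label"]])  # Fremskrittspartiet
```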
3,184
[ [ -0.0265655517578125, -0.0455322265625, -0.0007767677307128906, 0.009552001953125, -0.0360107421875, -0.011474609375, -0.0288848876953125, -0.00891876220703125, 0.0281829833984375, 0.03753662109375, -0.041168212890625, -0.06256103515625, -0.039337158203125, 0...
keshan/clean-si-mc4
2021-07-14T10:14:11.000Z
[ "region:us" ]
keshan
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org". This is the processed version of Google's mC4 dataset by AllenAI.
@article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, }
0
135
2022-03-02T23:29:22
A cleaned version of MC4 dataset for Sinhala, config is a direct adaptation of MC4 original processing script.
110
[ [ -0.034027099609375, -0.0281524658203125, -0.01519012451171875, -0.016754150390625, -0.03662109375, 0.0060577392578125, -0.0118408203125, -0.015869140625, 0.0247039794921875, 0.07684326171875, -0.07293701171875, -0.0218048095703125, -0.01050567626953125, 0.03...
medalpaca/medical_meadow_pubmed_causal
2023-04-06T17:01:00.000Z
[ "task_categories:question-answering", "language:en", "region:us" ]
medalpaca
null
null
2
135
2023-04-06T16:59:22
--- task_categories: - question-answering language: - en --- # Dataset Card for Pubmed Causal ## Dataset Description - **Paper:** https://aclanthology.org/D19-1473/ ### Dataset Summary This is the dataset used in the paper: Detecting Causal Language Use in Science Findings. ### Citation Information ``` @inproceedings{yu-etal-2019-detecting, title = "Detecting Causal Language Use in Science Findings", author = "Yu, Bei and Li, Yingya and Wang, Jun", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D19-1473", doi = "10.18653/v1/D19-1473", pages = "4664--4674", } ```
920
[ [ 0.0012578964233398438, -0.056396484375, 0.03564453125, 0.036102294921875, -0.0218505859375, -0.0274200439453125, -0.0145721435546875, -0.03045654296875, 0.02392578125, 0.031890869140625, -0.0253448486328125, -0.046295166015625, -0.04522705078125, 0.041137695...
distil-whisper/tedlium
2023-09-25T10:30:14.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc-by-nc-nd-3.0", "region:us" ]
distil-whisper
The TED-LIUM corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.
null
0
135
2023-04-10T07:32:45
--- license: cc-by-nc-nd-3.0 task_categories: - automatic-speech-recognition language: - en pretty_name: TEDLIUM --- # Distil Whisper: TEDLIUM This is a variant of the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/LIUM/tedlium). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/tedlium", "release3") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc-by-nc-nd-3.0.
2,011
[ [ -0.003021240234375, -0.045013427734375, 0.018707275390625, 0.026275634765625, -0.01232147216796875, 0.0050506591796875, -0.018585205078125, -0.0110321044921875, 0.02978515625, 0.0306243896484375, -0.063232421875, -0.040985107421875, -0.0390625, 0.01001739501...
distil-whisper/ami-sdm
2023-09-25T10:30:13.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc-by-4.0", "region:us" ]
distil-whisper
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.
@inproceedings{10.1007/11677482_3, author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre}, title = {The AMI Meeting Corpus: A Pre-Announcement}, year = {2005}, isbn = {3540325492}, publisher = {Springer-Verlag}, address = {Berlin, Heidelberg}, url = {https://doi.org/10.1007/11677482_3}, doi = {10.1007/11677482_3}, abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.}, booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction}, pages = {28–39}, numpages = {12}, location = {Edinburgh, UK}, series = {MLMI'05} }
0
135
2023-04-11T20:12:21
--- license: cc-by-4.0 task_categories: - automatic-speech-recognition language: - en pretty_name: AMI SDM --- # Distil Whisper: AMI SDM This is a variant of the [AMI SDM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/edinburghcstr/ami). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/ami-sdm", "sdm") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/ami-sdm", "sdm", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc-by-4.0.
1,997
[ [ -0.0182342529296875, -0.0404052734375, 0.0234832763671875, 0.0268707275390625, -0.0190582275390625, 0.00290679931640625, -0.006378173828125, -0.00684356689453125, 0.033660888671875, 0.042144775390625, -0.06048583984375, -0.041351318359375, -0.0499267578125, ...
zeio/baneks
2023-10-12T18:39:40.000Z
[ "task_categories:text-generation", "language_creators:crowdsourced", "language_creators:original", "size_categories:10K<n<100K", "language:ru", "language:en", "license:apache-2.0", "not-for-all-audiences", "art", "humour", "jokes", "region:us" ]
zeio
null
null
0
135
2023-10-10T00:49:24
--- language: - ru - en license: apache-2.0 tags: - not-for-all-audiences - art - humour - jokes annotations_creators: - crowdsourced - original language_creators: - crowdsourced - original pretty_name: baneks size_categories: - 10K<n<100K task_categories: - text-generation --- # Dataset card for baneks ## Table of contents - [Dataset description](#dataset-description) - [Dataset summary](#dataset-summary) - [Dataset structure](#dataset-structure) - [Data instance](#data-instance) - [Data fields](#data-fields) ## Dataset description - **Homepage:** [baneks homepage]() - **Repository:** [baneks repository](https://huggingface.co/datasets/zeio/baneks) - **Point of contact:** [Zeio Nara](mailto:zeionara@gmail.com) - **Dataset version:** `10.10.2023` ### Dataset summary This dataset contains anecdotes parsed from a few vk social network communities. Since the dataset is regularly updated, there is no fixed number of entries, so stay tuned. There are three configurations available within the dataset: - **inflated** - a transparent configuration, which introduces minimal changes to the source data; - **censored** - same as inflated, but entries with the same text are grouped and aggregated; - **default** - same as censored, but profane words are replaced with their inferred original form in cases when they were censored initially. ## Dataset structure ### Data instance An example of an entry from the dataset is given below: ```json { "text": "- Папа, а кто такие алкоголики? - Ну, сынок.. Вот, видишь - четыре гендера стоят? 
А алкоголику кажется, что там восемь гендеров - Пап, там два гендера.", "published": "16-09-2023 01:38", "id": 497393, "n-likes": 13, "n-views": 804, "accessed": "16-09-2023 01:51", "source": "anekdotikategoriib" } ``` ### Data fields Each dataset entry therefore consists of the following fields: - `text` - text representation of the anecdote; - `published` - publication date of the corresponding post in the format `DD-MM-YYYY hh:mm`; - `id` - id of the corresponding post; - `n-likes` - number of likes received by the corresponding post up to the access date; - `n-views` - number of views received by the corresponding post up to the access date; - `accessed` - access date of the corresponding post in the format `DD-MM-YYYY hh:mm`; - `source` - community name in which the corresponding post has been published.
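Since `published` and `accessed` are stored as `DD-MM-YYYY hh:mm` strings, they can be parsed with the Python standard library; a minimal sketch using the example entry above:

```python
from datetime import datetime

FMT = "%d-%m-%Y %H:%M"  # DD-MM-YYYY hh:mm, as documented above

# Date fields from the example instance in the card.
entry = {"published": "16-09-2023 01:38", "accessed": "16-09-2023 01:51"}
published = datetime.strptime(entry["published"], FMT)
accessed = datetime.strptime(entry["accessed"], FMT)

# minutes between publication and access
print((accessed - published).total_seconds() / 60)  # 13.0
```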
2,414
[ [ -0.030731201171875, -0.036651611328125, 0.0271453857421875, 0.01568603515625, -0.04193115234375, -0.005802154541015625, -0.006511688232421875, -0.02166748046875, 0.058319091796875, 0.044464111328125, -0.058074951171875, -0.086669921875, -0.05218505859375, 0....
tuanio/book_corpus-input_ids-invalid-random_shuffle-len256
2023-10-26T09:02:25.000Z
[ "region:us" ]
tuanio
null
null
0
135
2023-10-25T11:51:22
--- dataset_info: features: - name: input_ids sequence: int32 splits: - name: train num_bytes: 6319283552 num_examples: 6147163 download_size: 3367167037 dataset_size: 6319283552 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "book_corpus-input_ids-invalid-random_shuffle-len256" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
498
[ [ -0.0252227783203125, -0.0234222412109375, 0.00930023193359375, 0.028839111328125, -0.030303955078125, 0.0015897750854492188, 0.0096893310546875, 0.004302978515625, 0.03778076171875, 0.025360107421875, -0.050872802734375, -0.0552978515625, -0.045379638671875, ...
conv_questions
2023-06-02T12:18:49.000Z
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:open-domain-qa", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source...
null
ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex question phenomena like comparisons, aggregations, compositionality, and temporal reasoning.
@InProceedings{christmann2019look, title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion}, author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard}, booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management}, pages={729--738}, year={2019} }
3
134
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en language_bcp47: - en-US license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering - text-generation - fill-mask task_ids: - open-domain-qa - dialogue-modeling pretty_name: ConvQuestions dataset_info: features: - name: domain dtype: string - name: seed_entity dtype: string - name: seed_entity_text dtype: string - name: questions sequence: string - name: answers sequence: sequence: string - name: answer_texts sequence: string splits: - name: train num_bytes: 3589880 num_examples: 6720 - name: validation num_bytes: 1241778 num_examples: 2240 - name: test num_bytes: 1175656 num_examples: 2240 download_size: 3276017 dataset_size: 6007314 --- # Dataset Card for ConvQuestions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ConvQuestions page](https://convex.mpi-inf.mpg.de) - **Repository:** 
[GitHub](https://github.com/PhilippChr/CONVEX) - **Paper:** [Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion](https://arxiv.org/abs/1910.03262) - **Leaderboard:** [ConvQuestions leaderboard](https://convex.mpi-inf.mpg.de) - **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de) ### Dataset Summary ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata. They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk, with conversations from five domains: Books, Movies, Soccer, Music, and TV Series. The questions feature a variety of complex question phenomena like comparisons, aggregations, compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable fair comparison across diverse methods. The data gathering setup was kept as natural as possible, with the annotators selecting entities of their choice from each of the five domains, and formulating the entire conversation in one session. All questions in a conversation are from the same Turker, who also provided gold answers to the questions. For suitability to knowledge graphs, questions were constrained to be objective or factoid in nature, but no other restrictive guidelines were set. A notable property of ConvQuestions is that several questions are not answerable by Wikidata alone (as of September 2019), but the required facts can, for example, be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper (https://dl.acm.org/citation.cfm?id=3358016). ### Supported Tasks and Leaderboards [Needs More Information] ### Languages en ## Dataset Structure ### Data Instances An example of 'train' looks as follows. 
``` { 'domain': 'music', 'seed_entity': 'https://www.wikidata.org/wiki/Q223495', 'seed_entity_text': 'The Carpenters', 'questions': [ 'When did The Carpenters sign with A&M Records?', 'What song was their first hit?', 'When did Karen die?', 'Karen had what eating problem?', 'and how did she die?' ], 'answers': [ [ '1969' ], [ 'https://www.wikidata.org/wiki/Q928282' ], [ '1983' ], [ 'https://www.wikidata.org/wiki/Q131749' ], [ 'https://www.wikidata.org/wiki/Q181754' ] ], 'answer_texts': [ '1969', '(They Long to Be) Close to You', '1983', 'anorexia nervosa', 'heart failure' ] } ``` ### Data Fields - `domain`: a `string` feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv_series'] - `seed_entity`: a `string` feature. Wikidata ID of the topic entity. - `seed_entity_text`: a `string` feature. Surface form of the topic entity. - `questions`: a `list` of `string` features. List of questions (initial question and follow-up questions). - `answers`: a `list` of `lists` of `string` features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names). - `answer_texts`: a `list` of `string` features. List of surface forms of the answers. ### Data Splits |train|validation|test| |----:|---------:|----:| | 6720| 2240| 2240| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process With insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: Each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of his/her choice, as this is an intuitive mental model that humans may have when satisfying their real information needs via their search assistants. 
#### Who are the annotators? Local students (Saarland Informatics Campus) and AMT Master Workers. ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License. ### Citation Information ``` @InProceedings{christmann2019look, title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion}, author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard}, booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management}, pages={729--738}, year={2019} } ``` ### Contributions Thanks to [@PhilippChr](https://github.com/PhilippChr) for adding this dataset.
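The per-conversation structure documented above (parallel `questions`, `answers`, and `answer_texts` lists) can be walked turn by turn. Below is a minimal sketch, not part of the card itself: the `qa_turns` helper is ours, and the record literal is an abbreviated copy of the example instance shown above.

```python
# Sketch: pairing each ConvQuestions turn with its gold answer ID(s) and
# surface form. The record is a truncated copy of the card's example instance.
record = {
    "domain": "music",
    "seed_entity": "https://www.wikidata.org/wiki/Q223495",
    "seed_entity_text": "The Carpenters",
    "questions": [
        "When did The Carpenters sign with A&M Records?",
        "What song was their first hit?",
    ],
    "answers": [["1969"], ["https://www.wikidata.org/wiki/Q928282"]],
    "answer_texts": ["1969", "(They Long to Be) Close to You"],
}


def qa_turns(rec):
    """Return (question, answer_ids, answer_text) triples in turn order."""
    return list(zip(rec["questions"], rec["answers"], rec["answer_texts"]))


for question, answer_ids, answer_text in qa_turns(record):
    print(f"{question} -> {answer_text} ({answer_ids})")
```

Because follow-up questions depend on earlier turns ("What song was *their* first hit?"), consumers typically feed the turns to a model in this order rather than shuffling them.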
id: kilt_wikipedia
last modified: 2023-04-05T10:08:59.000Z
tags: [ "region:us" ]
author: none
KILT-Wikipedia: Wikipedia pre-processed for KILT.
@inproceedings{fb_kilt, author = {Fabio Petroni and Aleksandra Piktus and Angela Fan and Patrick Lewis and Majid Yazdani and Nicola De Cao and James Thorne and Yacine Jernite and Vassilis Plachouras and Tim Rockt\"aschel and Sebastian Riedel}, title = {{KILT:} a {B}enchmark for {K}nowledge {I}ntensive {L}anguage {T}asks}, journal = {CoRR}, archivePrefix = {arXiv}, year = {2020} }
likes: 10 | downloads: 134 | created: 2022-03-02T23:29:22
--- paperswithcode_id: null pretty_name: KiltWikipedia dataset_info: features: - name: kilt_id dtype: string - name: wikipedia_id dtype: string - name: wikipedia_title dtype: string - name: text sequence: - name: paragraph dtype: string - name: anchors sequence: - name: paragraph_id dtype: int32 - name: start dtype: int32 - name: end dtype: int32 - name: text dtype: string - name: href dtype: string - name: wikipedia_title dtype: string - name: wikipedia_id dtype: string - name: categories dtype: string - name: wikidata_info struct: - name: description dtype: string - name: enwikiquote_title dtype: string - name: wikidata_id dtype: string - name: wikidata_label dtype: string - name: wikipedia_title dtype: string - name: aliases sequence: - name: alias dtype: string - name: history struct: - name: pageid dtype: int32 - name: parentid dtype: int32 - name: revid dtype: int32 - name: pre_dump dtype: bool - name: timestamp dtype: string - name: url dtype: string config_name: '2019-08-01' splits: - name: full num_bytes: 29372535718 num_examples: 5903530 download_size: 37318876722 dataset_size: 29372535718 --- # Dataset Card for "kilt_wikipedia" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/facebookresearch/KILT](https://github.com/facebookresearch/KILT) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 37.32 GB - **Size of the generated dataset:** 29.37 GB - **Total amount of disk used:** 66.69 GB ### Dataset Summary KILT-Wikipedia: Wikipedia pre-processed for KILT. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### 2019-08-01 - **Size of downloaded dataset files:** 37.32 GB - **Size of the generated dataset:** 29.37 GB - **Total amount of disk used:** 66.69 GB An example of 'full' looks as follows. 
``` { "anchors": { "end": [], "href": [], "paragraph_id": [], "start": [], "text": [], "wikipedia_id": [], "wikipedia_title": [] }, "categories": "", "history": { "pageid": 0, "parentid": 0, "pre_dump": true, "revid": 0, "timestamp": "", "url": "" }, "kilt_id": "", "text": { "paragraph": [] }, "wikidata_info": { "aliases": { "alias": [] }, "description": "", "enwikiquote_title": "", "wikidata_id": "", "wikidata_label": "", "wikipedia_title": "" }, "wikipedia_id": "", "wikipedia_title": "" } ``` ### Data Fields The data fields are the same among all splits. #### 2019-08-01 - `kilt_id`: a `string` feature. - `wikipedia_id`: a `string` feature. - `wikipedia_title`: a `string` feature. - `text`: a dictionary feature containing: - `paragraph`: a `string` feature. - `anchors`: a dictionary feature containing: - `paragraph_id`: a `int32` feature. - `start`: a `int32` feature. - `end`: a `int32` feature. - `text`: a `string` feature. - `href`: a `string` feature. - `wikipedia_title`: a `string` feature. - `wikipedia_id`: a `string` feature. - `categories`: a `string` feature. - `description`: a `string` feature. - `enwikiquote_title`: a `string` feature. - `wikidata_id`: a `string` feature. - `wikidata_label`: a `string` feature. - `wikipedia_title`: a `string` feature. - `aliases`: a dictionary feature containing: - `alias`: a `string` feature. - `pageid`: a `int32` feature. - `parentid`: a `int32` feature. - `revid`: a `int32` feature. - `pre_dump`: a `bool` feature. - `timestamp`: a `string` feature. - `url`: a `string` feature. 
### Data Splits | name | full | |----------|------:| |2019-08-01|5903530| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) 
### Citation Information ``` @inproceedings{fb_kilt, author = {Fabio Petroni and Aleksandra Piktus and Angela Fan and Patrick Lewis and Majid Yazdani and Nicola De Cao and James Thorne and Yacine Jernite and Vassilis Plachouras and Tim Rockt\"aschel and Sebastian Riedel}, title = {{KILT:} a {B}enchmark for {K}nowledge {I}ntensive {L}anguage {T}asks}, journal = {CoRR}, archivePrefix = {arXiv}, year = {2020} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset.
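The `anchors` feature described above stores hyperlinks as parallel lists of paragraph indices, character offsets, surface text, and link targets. The sketch below shows one way to resolve them back to spans; the record is synthetic, and it assumes `start`/`end` are character offsets into `text["paragraph"][paragraph_id]`, which should be verified against real records.

```python
# Sketch: resolving hyperlink anchors against their paragraphs for one
# synthetic KILT-Wikipedia-shaped record. Assumption: `start`/`end` are
# character offsets into the referenced paragraph string.
record = {
    "text": {"paragraph": ["Paris is the capital of France."]},
    "anchors": {
        "paragraph_id": [0],
        "start": [24],
        "end": [30],
        "text": ["France"],
        "href": ["France"],
    },
}


def resolve_anchors(rec):
    """Return (surface_text, href) pairs, checking offsets against the text."""
    paragraphs = rec["text"]["paragraph"]
    a = rec["anchors"]
    links = []
    for pid, start, end, text, href in zip(
        a["paragraph_id"], a["start"], a["end"], a["text"], a["href"]
    ):
        span = paragraphs[pid][start:end]
        assert span == text, (span, text)  # offsets should match the surface form
        links.append((text, href))
    return links


print(resolve_anchors(record))  # [('France', 'France')]
```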
id: allenai/peer_read
last modified: 2022-11-18T21:37:46.000Z
tags: [ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "acceptability-classification", "arxiv:1804.09635", "region:us" ]
author: allenai
PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers.
@inproceedings{kang18naacl, title = {A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications}, author = {Dongyeop Kang and Waleed Ammar and Bhavana Dalvi and Madeleine van Zuylen and Sebastian Kohlmeier and Eduard Hovy and Roy Schwartz}, booktitle = {Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL)}, address = {New Orleans, USA}, month = {June}, url = {https://arxiv.org/abs/1804.09635}, year = {2018} }
likes: 3 | downloads: 134 | created: 2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: peerread pretty_name: PeerRead tags: - acceptability-classification dataset_info: - config_name: parsed_pdfs features: - name: name dtype: string - name: metadata struct: - name: source dtype: string - name: title dtype: string - name: authors sequence: string - name: emails sequence: string - name: sections sequence: - name: heading dtype: string - name: text dtype: string - name: references sequence: - name: title dtype: string - name: author sequence: string - name: venue dtype: string - name: citeRegEx dtype: string - name: shortCiteRegEx dtype: string - name: year dtype: int32 - name: referenceMentions sequence: - name: referenceID dtype: int32 - name: context dtype: string - name: startOffset dtype: int32 - name: endOffset dtype: int32 - name: year dtype: int32 - name: abstractText dtype: string - name: creator dtype: string splits: - name: train num_bytes: 571263679 num_examples: 11090 - name: test num_bytes: 34284777 num_examples: 637 - name: validation num_bytes: 32488519 num_examples: 637 download_size: 1246688292 dataset_size: 638036975 - config_name: reviews features: - name: id dtype: string - name: conference dtype: string - name: comments dtype: string - name: subjects dtype: string - name: version dtype: string - name: date_of_submission dtype: string - name: title dtype: string - name: authors sequence: string - name: accepted dtype: bool - name: abstract dtype: string - name: histories sequence: sequence: string - name: reviews sequence: - name: date dtype: string - name: title dtype: string - name: other_keys dtype: string - name: originality dtype: string - name: comments dtype: string - name: is_meta_review dtype: bool - name: is_annotated dtype: bool - name: recommendation dtype: 
string - name: replicability dtype: string - name: presentation_format dtype: string - name: clarity dtype: string - name: meaningful_comparison dtype: string - name: substance dtype: string - name: reviewer_confidence dtype: string - name: soundness_correctness dtype: string - name: appropriateness dtype: string - name: impact dtype: string splits: - name: train num_bytes: 15234922 num_examples: 11090 - name: test num_bytes: 878906 num_examples: 637 - name: validation num_bytes: 864799 num_examples: 637 download_size: 1246688292 dataset_size: 16978627 --- # Dataset Card for peer_read ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://arxiv.org/abs/1804.09635 - **Repository:** https://github.com/allenai/PeerRead - **Paper:** https://arxiv.org/pdf/1804.09635.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary PeerRead is a dataset of scientific peer reviews available to help researchers study this important 
artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages en-English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields #### parsed_pdfs - `name`: `string` Filename in the dataset - `metadata`: `dict` Paper metadata - `source`: `string` Paper source - `authors`: `list<string>` List of paper authors - `title`: `string` Paper title - `sections`: `list<dict>` List of section heading and corresponding description - `heading`: `string` Section heading - `text`: `string` Section description - `references`: `string` List of references - `title`: `string` Title of reference paper - `author`: `list<string>` List of reference paper authors - `venue`: `string` Reference venue - `citeRegEx`: `string` Reference citeRegEx - `shortCiteRegEx`: `string` Reference shortCiteRegEx - `year`: `int` Reference publish year - `referenceMentions`: `list<string>` List of reference mentions - `referenceID`: `int` Reference mention ID - `context`: `string` Reference mention context - `startOffset`: `int` Reference startOffset - `endOffset`: `int` Reference endOffset - `year`: `int` Paper publish year - `abstractText`: `string` Paper abstract - `creator`: `string` Paper creator #### reviews - `id`: `int` Review ID - `conference`: `string` Conference name - `comments`: `string` Review comments - `subjects`: `string` Review subjects - `version`: `string` Review version - `date_of_submission`: `string` Submission date - `title`: `string` Paper title - `authors`: `list<string>` List of paper authors - `accepted`: `bool` Paper accepted flag - `abstract`: `string` Paper abstract - `histories`: `list<string>` Paper details with link - `reviews`: `dict` Paper reviews - `date`: `string` Date of review - `title`: 
`string` Paper title - `other_keys`: `string` Reviewer other details - `originality`: `string` Originality score - `comments`: `string` Reviewer comments - `is_meta_review`: `bool` Review type flag - `recommendation`: `string` Reviewer recommendation - `replicability`: `string` Replicability score - `presentation_format`: `string` Presentation type - `clarity`: `string` Clarity score - `meaningful_comparison`: `string` Meaningful comparison score - `substance`: `string` Substance score - `reviewer_confidence`: `string` Reviewer confidence score - `soundness_correctness`: `string` Soundness correctness score - `appropriateness`: `string` Appropriateness score - `impact`: `string` Impact score ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Dongyeop Kang, Waleed Ammar, Bhavana Dalvi Mishra, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz ### Licensing Information [More Information Needed] ### Citation Information @inproceedings{kang18naacl, title = {A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications}, author = {Dongyeop Kang and Waleed Ammar and Bhavana Dalvi and Madeleine van Zuylen and Sebastian Kohlmeier and Eduard Hovy and Roy Schwartz}, booktitle = {Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL)}, address = {New Orleans, USA}, month = {June}, url = {https://arxiv.org/abs/1804.09635}, year = {2018} } ### Contributions Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset.
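A common use of the `reviews` config is acceptability classification, which only needs the `conference` and `accepted` fields listed above. The sketch below tallies per-venue acceptance rates; the three sample records are synthetic, and only the two field names come from the schema in this card.

```python
# Sketch: per-venue acceptance rates from PeerRead-style `reviews` records.
# Only the `conference` and `accepted` field names are taken from the card;
# the sample records themselves are made up for illustration.
from collections import defaultdict


def acceptance_rates(records):
    """Map each venue to accepted / total over the given records."""
    counts = defaultdict(lambda: [0, 0])  # venue -> [n_accepted, n_total]
    for rec in records:
        counts[rec["conference"]][0] += int(bool(rec["accepted"]))
        counts[rec["conference"]][1] += 1
    return {venue: acc / total for venue, (acc, total) in counts.items()}


sample = [
    {"conference": "ICLR 2017", "accepted": True},
    {"conference": "ICLR 2017", "accepted": False},
    {"conference": "ACL 2017", "accepted": True},
]
print(acceptance_rates(sample))  # {'ICLR 2017': 0.5, 'ACL 2017': 1.0}
```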
id: taskmaster1
last modified: 2022-11-18T21:50:41.000Z
tags: [ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1...
author: none
Taskmaster-1 is a goal-oriented conversational dataset. It includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task while the second is "self-dialog" in which crowdsourced workers write the entire dialog themselves.
@inproceedings{48484, title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset}, author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik}, year = {2019} }
likes: 1 | downloads: 134 | created: 2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: taskmaster-1 pretty_name: Taskmaster-1 dataset_info: - config_name: one_person_dialogs features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 18037058 num_examples: 6168 - name: validation num_bytes: 2239656 num_examples: 770 - name: test num_bytes: 2224163 num_examples: 770 download_size: 103276427 dataset_size: 22500877 - config_name: woz_dialogs features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 13028593 num_examples: 5507 download_size: 103276427 dataset_size: 13028593 --- # Dataset Card for Taskmaster-1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and 
Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Taskmaster-1](https://research.google/tools/datasets/taskmaster-1/) - **Repository:** [GitHub](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019) - **Paper:** [Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset](https://arxiv.org/abs/1909.05358) - **Leaderboard:** N/A - **Point of Contact:** [Taskmaster Googlegroup](taskmaster-datasets@googlegroups.com) ### Dataset Summary Taskmaster-1 is a goal-oriented conversational dataset. It includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task while the second is "self-dialog" in which crowdsourced workers write the entire dialog themselves. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English language. ## Dataset Structure ### Data Instances A typical example looks like this ``` { "conversation_id":"dlg-336c8165-068e-4b4b-803d-18ef0676f668", "instruction_id":"restaurant-table-2", "utterances":[ { "index":0, "segments":[ ], "speaker":"USER", "text":"Hi, I'm looking for a place that sells spicy wet hotdogs, can you think of any?" 
}, { "index":1, "segments":[ { "annotations":[ { "name":"restaurant_reservation.name.restaurant.reject" } ], "end_index":37, "start_index":16, "text":"Spicy Wet Hotdogs LLC" } ], "speaker":"ASSISTANT", "text":"You might enjoy Spicy Wet Hotdogs LLC." }, { "index":2, "segments":[ ], "speaker":"USER", "text":"That sounds really good, can you make me a reservation?" }, { "index":3, "segments":[ ], "speaker":"ASSISTANT", "text":"Certainly, when would you like a reservation?" }, { "index":4, "segments":[ { "annotations":[ { "name":"restaurant_reservation.num.guests" }, { "name":"restaurant_reservation.num.guests" } ], "end_index":20, "start_index":18, "text":"50" } ], "speaker":"USER", "text":"I have a party of 50 who want a really sloppy dog on Saturday at noon." } ] } ``` ### Data Fields Each conversation in the data file has the following structure: - `conversation_id`: A universally unique identifier with the prefix 'dlg-'. The ID has no meaning. - `utterances`: A list of utterances that make up the conversation. - `instruction_id`: A reference to the file(s) containing the user (and, if applicable, agent) instructions for this conversation. Each utterance has the following fields: - `index`: A 0-based index indicating the order of the utterances in the conversation. - `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance. - `text`: The raw text of the utterance. In case of self dialogs (one_person_dialogs), this is written by the crowdsourced worker. In case of the WOz dialogs, 'ASSISTANT' turns are written and 'USER' turns are transcribed from the spoken recordings of crowdsourced workers. - `segments`: A list of various text spans with semantic annotations. Each segment has the following fields: - `start_index`: The position of the start of the annotation in the utterance text. - `end_index`: The position of the end of the annotation in the utterance text. - `text`: The raw text that has been annotated. 
- `annotations`: A list of annotation details for this segment. Each annotation has a single field: - `name`: The annotation name. ### Data Splits - one_person_dialogs The data in `one_person_dialogs` config is split into `train`, `dev` and `test` splits. | | train | validation | test | |--------------|-------:|------------:|------:| | N. Instances | 6168 | 770 | 770 | - woz_dialogs The data in `woz_dialogs` config has no default splits. | | train | |--------------|-------:| | N. Instances | 5507 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under `Creative Commons Attribution 4.0 License` ### Citation Information [More Information Needed] ``` @inproceedings{48484, title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset}, author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik}, year = {2019} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
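As described above, each segment's `start_index`/`end_index` index into the raw utterance text. A minimal sketch of extracting annotated spans, reusing the last utterance from the example conversation in this card (the `annotated_spans` helper is ours):

```python
# Sketch: pulling annotated spans out of one Taskmaster-1 utterance via the
# `segments` character offsets. The utterance is copied from the card's example.
utterance = {
    "index": 4,
    "speaker": "USER",
    "text": "I have a party of 50 who want a really sloppy dog on Saturday at noon.",
    "segments": [
        {
            "start_index": 18,
            "end_index": 20,
            "text": "50",
            "annotations": [{"name": "restaurant_reservation.num.guests"}],
        }
    ],
}


def annotated_spans(utt):
    """Return (span_text, annotation_names) per segment, verifying offsets."""
    spans = []
    for seg in utt["segments"]:
        span = utt["text"][seg["start_index"]:seg["end_index"]]
        assert span == seg["text"]  # offsets index into the raw utterance text
        spans.append((span, [a["name"] for a in seg["annotations"]]))
    return spans


print(annotated_spans(utterance))  # [('50', ['restaurant_reservation.num.guests'])]
```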
8,695
[ [ -0.032684326171875, -0.0731201171875, 0.0107879638671875, 0.0076141357421875, -0.0024814605712890625, -0.00018668174743652344, -0.0311279296875, -0.0260009765625, 0.0274200439453125, 0.055877685546875, -0.076904296875, -0.073486328125, -0.036102294921875, 0....
distil-whisper/ami-ihm
2023-09-25T10:30:14.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc-by-4.0", "region:us" ]
distil-whisper
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers. \n
@inproceedings{10.1007/11677482_3, author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre}, title = {The AMI Meeting Corpus: A Pre-Announcement}, year = {2005}, isbn = {3540325492}, publisher = {Springer-Verlag}, address = {Berlin, Heidelberg}, url = {https://doi.org/10.1007/11677482_3}, doi = {10.1007/11677482_3}, abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.}, booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction}, pages = {28–39}, numpages = {12}, location = {Edinburgh, UK}, series = {MLMI'05} }
0
134
2023-04-10T12:57:58
--- license: cc-by-4.0 task_categories: - automatic-speech-recognition language: - en pretty_name: AMI IHM --- # Distil Whisper: AMI IHM This is a variant of the [AMI IHM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/edinburghcstr/ami). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/ami-ihm", "ihm") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/ami-ihm", "ihm", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc-by-4.0.
1,999
[ [ -0.0157012939453125, -0.04168701171875, 0.01262664794921875, 0.0293121337890625, -0.0165557861328125, 0.004451751708984375, -0.006481170654296875, -0.016082763671875, 0.026763916015625, 0.03131103515625, -0.062286376953125, -0.033477783203125, -0.04876708984375,...
ttxy/cn_ner
2023-05-24T08:56:19.000Z
[ "task_categories:token-classification", "language:code", "license:bsd", "ner", "region:us" ]
ttxy
null
null
0
134
2023-05-24T06:27:30
--- language: - code pretty_name: "Chinese NER dataset" tags: - ner license: "bsd" task_categories: - token-classification --- Source: https://github.com/liucongg/NLPDataSet * Data were collected from the web: 22 datasets (CMeEE, IMCS21_task1, CCKS2017_task2, CCKS2018_task1, CCKS2019_task1, CLUENER2020, MSRA, NLPCC2018_task4, CCFBDCI, MMC, WanChuang, PeopleDairy1998, PeopleDairy2004, GAIIC2022_task2, WeiBo, ECommerce, FinanceSina, BoSon, Resume, Bank, FNED, and DLNER) were cleaned and merged to build a fairly comprehensive Chinese NER dataset. * Cleaning applied only simple rules; formats were unified, and the label scheme is BIO. * For details on the processed dataset, see the [dataset description](https://zhuanlan.zhihu.com/p/529541521). * The dataset was compiled together with [NJUST-TB](https://github.com/Swag-tb). * Because some data contain nested entities, longer entities override shorter ones when converting to BIO tags. | Dataset | Original data / project link | Samples | Categories | Description | | ------ | ------ | ------ | ------ | ------ | | CMeEE | [link](http://www.cips-chip.org.cn/2021/CBLUE) | 20,000 | sym, dep, dru, pro, equ, dis, mic, ite, bod | Medical entity recognition dataset from the CBLUE Chinese medical information processing benchmark | | IMCS21_task1 | [link](http://www.fudan-disc.com/sharedtask/imcs21/index.html?spm=5176.12282016.0.0.140e6d92ypyW1r) | 98,452 | Operation, Drug_Category, Medical_Examination, Symptom, Drug | NER dataset from the CCL2021 First Intelligent Dialogue Diagnosis and Treatment Evaluation | | CCKS2017_task2 | [link](https://www.biendata.xyz/competition/CCKS2017_2/) | 2,229 | symp, dise, chec, body, cure | CCKS2017 NER dataset for electronic medical records | | CCKS2018_task1 | [link](https://www.biendata.xyz/competition/CCKS2018_1/) | 797 | symptoms and signs, tests and examinations, treatment, disease and diagnosis, body parts | CCKS2018 NER dataset for Chinese electronic medical records | | CCKS2019_task1 | [link](http://openkg.cn/dataset/yidu-s4k) | 1,379 | anatomical site, surgery, disease and diagnosis, drug, laboratory test, imaging examination | CCKS2019 NER dataset for Chinese electronic medical records | | CLUENER2020 | [link](https://github.com/CLUEbenchmark/CLUENER2020) | 12,091 | game, organization, government, movie, name, book, company, scene, position, address | CLUENER2020 dataset | | MSRA | [link](https://www.msra.cn/) | 48,442 | LOC, ORG, PER | open-source NER dataset from Microsoft Research Asia (MSRA) | | NLPCC2018_task4 | [link](http://tcci.ccf.org.cn/conference/2018/taskdata.php) | 21,352 | 
language, origin, theme, custom_destination, style, phone_num, destination, contact_name, age, singer, song, instrument, toplist, scene, emotion | task-oriented dialogue system dataset | | CCFBDCI | [link](https://www.datafountain.cn/competitions/510) | 15,723 | LOC, GPE, ORG, PER | robustness evaluation dataset for Chinese NER algorithms | | MMC | [link](https://tianchi.aliyun.com/competition/entrance/231687/information) | 3,498 | Level, Method, Disease, Drug, Frequency, Amount, Operation, Pathogenesis, Test_items, Anatomy, Symptom, Duration, Treatment, Test_Value, ADE, Class, Test, Reason | dataset from the Ruijin Hospital MMC AI-assisted knowledge graph construction competition | | WanChuang | [link](https://tianchi.aliyun.com/competition/entrance/531827/introduction) | 1,255 | drug dosage form, disease group, population, drug group, TCM efficacy, symptom, disease, drug ingredient, drug property and flavor, food group, food, syndrome, drug product | "Wanchuang Cup" Traditional Chinese Medicine Tianchi Big Data Competition (Smart TCM Application Innovation Challenge) dataset | | PeopleDairy1998 | [link]() | 27,818 | LOC, ORG, PER | People's Daily 1998 dataset | | PeopleDairy2004 | [link]() | 286,268 | LOC, ORG, PER, T | People's Daily 2004 dataset | | GAIIC2022_task2 | [link](https://www.heywhale.com/home/competition/620b34ed28270b0017b823ad/content/2) | 40,000 | 52 categories in this competition | product title entity recognition dataset | | WeiBo | [link](https://github.com/hltcoe/golden-horse) | 1,890 | LOC.NAM, LOC.NOM, PER.NAM, ORG.NOM, ORG.NAM, GPE.NAM, PER.NOM | Chinese NER dataset for social media | | ECommerce | [link](https://github.com/allanj/ner_incomplete_annotation) | 7,998 | MISC, XH, HPPX, HCCX | NER dataset for e-commerce | | FinanceSina | [link](https://github.com/jiesutd/LatticeLSTM) | 1,579 | LOC, GPE, ORG, PER | Chinese NER dataset crawled from Sina Finance | | BoSon | [link](https://github.com/bosondata) | 2,000 | time, product_name, person_name, location, org_name, company_name | BosonNLP Chinese NER dataset | | Resume | [link](https://github.com/jiesutd/LatticeLSTM/tree/master/ResumeNER) | 4,761 | NAME, EDU, LOC, ORG, PRO, TITLE, CONT, RACE | resumes of senior executives of companies listed on Chinese stock markets | | Bank | [link](https://www.heywhale.com/mw/dataset/617969ec768f3b0017862990/file) | 10,000 | BANK, COMMENTS_ADJ, COMMENTS_N, PRODUCT | bank lending dataset | | FNED | [link](https://www.datafountain.cn/competitions/561/datasets) | 10,500 | LOC, GPE, ORG, EQU, TIME, FAC, PER | domain event detection dataset under high robustness requirements | | DLNER | 
[link](https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset) | 28,897 | Location, Thing, Abstract, Organization, Metric, Time, Physical, Person, Term | discourse-level NER dataset | - The cleaned and format-converted data can be downloaded here: [Baidu Cloud](https://pan.baidu.com/s/1VvbvWPv3eM4MXsv_nlDSSA) / extraction code: 4sea - Note: for data containing nested entities, long entities override short ones in the BIO conversion; if you need nested entities, please use the original data.
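The long-entity-over-short rule described in the card can be sketched as follows. This is an illustrative reimplementation, not the repository's actual cleaning script: entities are sorted by span length so that longer entities write their BIO tags last and overwrite any shorter nested span.

```python
def to_bio(text, entities):
    """Convert character-span entities to BIO tags.

    entities: list of (start, end, label) with end exclusive.
    Longer entities override shorter overlapping ones, mirroring the
    nested-entity rule described in the dataset card.
    """
    tags = ["O"] * len(text)
    # Shorter entities are written first, so overlapping longer ones overwrite them.
    for start, end, label in sorted(entities, key=lambda e: e[1] - e[0]):
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

# A 4-character drug entity nested inside a 5-character one:
# the longer span wins, so all 5 characters carry the longer entity's tags.
print(to_bio("阿司匹林片", [(0, 4, "dru"), (0, 5, "dru")]))
# → ['B-dru', 'I-dru', 'I-dru', 'I-dru', 'I-dru']
```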
4,189
[ [ -0.042877197265625, -0.029754638671875, 0.0163116455078125, 0.022552490234375, -0.034332275390625, -0.0027751922607421875, -0.00820159912109375, -0.034515380859375, 0.049468994140625, 0.0181121826171875, -0.03228759765625, -0.0650634765625, -0.034942626953125, ...
haitengzhao/molecule_property_instruction
2023-07-13T10:30:29.000Z
[ "task_categories:question-answering", "language:en", "license:afl-3.0", "chemistry", "biology", "region:us" ]
haitengzhao
null
null
3
134
2023-07-09T07:36:09
--- dataset_info: features: - name: graph dtype: string - name: text sequence: string - name: label dtype: string - name: dataset_name dtype: string - name: task_index dtype: string - name: molecule_index dtype: string - name: split dtype: string splits: - name: esol num_bytes: 542831 num_examples: 1128 - name: lipo num_bytes: 1519836 num_examples: 4200 - name: freesolv num_bytes: 527615 num_examples: 642 - name: bace num_bytes: 5103112 num_examples: 1513 - name: hiv num_bytes: 215094514 num_examples: 41127 - name: muv num_bytes: 594798639 num_examples: 249886 - name: tox21 num_bytes: 121153396 num_examples: 77946 - name: toxcast num_bytes: 1543462519 num_examples: 1490412 - name: bbbp num_bytes: 2521597 num_examples: 2039 - name: cyp450 num_bytes: 30602477 num_examples: 53178 - name: chembl_zero_shot num_bytes: 89499667 num_examples: 180229 - name: chembl_pretraining num_bytes: 12246285194 num_examples: 23874346 - name: pcba num_bytes: 21761726609 num_examples: 34017170 download_size: 2163300521 dataset_size: 36612838006 license: afl-3.0 task_categories: - question-answering language: - en tags: - chemistry - biology pretty_name: p --- # Dataset Card for "molecule_property_instruction" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,518
[ [ -0.034027099609375, -0.0396728515625, 0.0099334716796875, 0.00006121397018432617, -0.0020961761474609375, 0.0006284713745117188, -0.000560760498046875, 0.005069732666015625, 0.0390625, 0.035675048828125, -0.045318603515625, -0.061492919921875, -0.04034423828125,...
HydraLM/biology_dataset_standardized
2023-07-27T17:14:13.000Z
[ "region:us" ]
HydraLM
null
null
0
134
2023-07-27T17:13:47
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
YaHi/english_AAAI_Math
2023-10-09T21:06:27.000Z
[ "region:us" ]
YaHi
null
null
0
134
2023-10-09T21:06:26
--- dataset_info: features: - name: dataset_version dtype: timestamp[s] - name: queId dtype: string - name: difficulty dtype: string - name: qtype dtype: string - name: problem dtype: string - name: knowledge_point_routes sequence: string splits: - name: train num_bytes: 2228695 num_examples: 5927 download_size: 854269 dataset_size: 2228695 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "english_AAAI_Math" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
658
[ [ -0.042633056640625, -0.0251617431640625, 0.004207611083984375, 0.02685546875, 0.0026073455810546875, 0.005764007568359375, 0.0091094970703125, -0.0153045654296875, 0.060638427734375, 0.013427734375, -0.054718017578125, -0.0526123046875, -0.038787841796875, -...
ar_res_reviews
2023-01-25T14:26:30.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
null
Dataset of 8364 restaurant reviews scraped from qaym.com in Arabic for sentiment analysis
@InProceedings{10.1007/978-3-319-18117-2_2, author="ElSahar, Hady and El-Beltagy, Samhaa R.", editor="Gelbukh, Alexander", title="Building Large Arabic Multi-domain Resources for Sentiment Analysis", booktitle="Computational Linguistics and Intelligent Text Processing", year="2015", publisher="Springer International Publishing", address="Cham", pages="23--34", isbn="978-3-319-18117-2" }
3
133
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - ar license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: ArRestReviews dataset_info: features: - name: polarity dtype: class_label: names: '0': negative '1': positive - name: text dtype: string - name: restaurant_id dtype: string - name: user_id dtype: string splits: - name: train num_bytes: 3617097 num_examples: 8364 download_size: 3503230 dataset_size: 3617097 --- # Dataset Card for ArRestReviews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces) - **Repository:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces) - **Paper:** [ Building Large Arabic Multi-domain Resources for Sentiment 
Analysis](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces/blob/master/Paper%20-%20Building%20Large%20Arabic%20Multi-domain%20Resources%20for%20Sentiment%20Analysis.pdf) - **Point of Contact:** [hady elsahar](hadyelsahar@gmail.com) ### Dataset Summary Dataset of 8364 restaurant reviews from qaym.com in Arabic for sentiment analysis ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Arabic. ## Dataset Structure ### Data Instances A typical data point comprises the following: - "polarity": a class label of either 0 (negative) or 1 (positive) indicating the sentiment of the review - "text": the plain text of the restaurant review in Arabic - "restaurant_id": the restaurant ID on the website - "user_id": the user ID on the website example: ``` { 'polarity': 0, # negative 'restaurant_id': '1412', 'text': 'عادي جدا مامن زود', 'user_id': '21294' } ``` ### Data Fields - "polarity": a class label of either 0 (negative) or 1 (positive) indicating the sentiment of the review - "text": the plain text of the restaurant review in Arabic - "restaurant_id": the restaurant ID on the website (string) - "user_id": the user ID on the website (string) ### Data Splits The dataset is not split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization Contains 8364 restaurant reviews from qaym.com #### Who are the source language producers? Users of qaym.com. ### Annotations The polarity field provides a label of 0 (negative) or 1 (positive) pertaining to the sentiment of the review #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{10.1007/978-3-319-18117-2_2, author="ElSahar, Hady and El-Beltagy, Samhaa R.", editor="Gelbukh, Alexander", title="Building Large Arabic Multi-domain Resources for Sentiment Analysis", booktitle="Computational Linguistics and Intelligent Text Processing", year="2015", publisher="Springer International Publishing", address="Cham", pages="23--34", isbn="978-3-319-18117-2" } ``` ### Contributions Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
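A minimal sketch of working with the polarity field. The int-to-name mapping below is taken from the class-label definition in the dataset info ('0': negative, '1': positive); the helper itself is illustrative and does not require downloading the data:

```python
# The polarity feature is a class label: 0 -> negative, 1 -> positive.
POLARITY_NAMES = ["negative", "positive"]

def label_name(polarity: int) -> str:
    """Map the integer class label to its human-readable name."""
    return POLARITY_NAMES[polarity]

# A data point shaped like the example in the card above.
sample = {
    "polarity": 0,
    "restaurant_id": "1412",
    "text": "عادي جدا مامن زود",
    "user_id": "21294",
}
print(label_name(sample["polarity"]))  # → negative
```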
4,777
[ [ -0.04046630859375, -0.0303955078125, 0.01763916015625, 0.0229339599609375, -0.030303955078125, 0.011810302734375, -0.0108795166015625, -0.0214691162109375, 0.03228759765625, 0.043304443359375, -0.046234130859375, -0.08758544921875, -0.04815673828125, 0.01922...
code_x_glue_cc_code_completion_token
2023-06-12T08:13:31.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "l...
null
Predict next code token given context of previous tokens. Models are evaluated by token level accuracy. Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. The token level task is analogous to language modeling: models should be able to predict the next token of arbitrary type.
@article{raychev2016probabilistic, title={Probabilistic Model for Code with Decision Trees}, author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin}, journal={ACM SIGPLAN Notices}, pages={731--747}, year={2016}, publisher={ACM New York, NY, USA} } @inproceedings{allamanis2013mining, title={Mining Source Code Repositories at Massive Scale using Language Modeling}, author={Allamanis, Miltiadis and Sutton, Charles}, booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)}, pages={207--216}, year={2013}, organization={IEEE} } @dataset{rafael_michael_karampatsis_2020_3628665, author = {Rafael - Michael Karampatsis and Hlib Babii and Romain Robbes and Charles Sutton and Andrea Janes}, title = {Preprocessed Java Code Corpus}, month = jan, year = 2020, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3628665}, url = {https://doi.org/10.5281/zenodo.3628665} }
1
133
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - code license: - c-uda multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling pretty_name: CodeXGlueCcCodeCompletionToken dataset_info: - config_name: java features: - name: id dtype: int32 - name: code sequence: string splits: - name: train num_bytes: 128312061 num_examples: 12934 - name: validation num_bytes: 30259174 num_examples: 7189 - name: test num_bytes: 43027956 num_examples: 8268 download_size: 126856519 dataset_size: 201599191 - config_name: python features: - name: id dtype: int32 - name: path dtype: string - name: code sequence: string splits: - name: train num_bytes: 684319575 num_examples: 100000 - name: test num_bytes: 333978088 num_examples: 50000 download_size: 199067128 dataset_size: 1018297663 --- # Dataset Card for "code_x_glue_cc_code_completion_token" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - 
[Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token ### Dataset Summary CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token Predict next code token given context of previous tokens. Models are evaluated by token level accuracy. Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. The token level task is analogous to language modeling: models should be able to predict the next token of arbitrary type. ### Supported Tasks and Leaderboards - `language-modeling`: The dataset can be used to train a model for completing single code tokens. ### Languages - Java **programming** language - Python **programming** language ## Dataset Structure ### Data Instances #### java An example of 'test' looks as follows. 
``` { "code": ["<s>", "package", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "demo", ";", "import", "java", ".", "io", ".", "BufferedReader", ";", "import", "java", ".", "io", ".", "ByteArrayInputStream", ";", "import", "java", ".", "io", ".", "IOException", ";", "import", "java", ".", "io", ".", "InputStreamReader", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "Clara", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "inflater", ".", "LayoutInflaterException", ";", "import", "com", ".", "vaadin", ".", "Application", ";", "import", "com", ".", "vaadin", ".", "terminal", ".", "ThemeResource", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ".", "ClickEvent", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Component", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Embedded", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalSplitPanel", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "TextArea", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "VerticalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ".", "Notification", ";", "@", "SuppressWarnings", "(", "\"serial\"", ")", "public", "class", "DemoApplication", "extends", "Application", "{", "private", "DemoController", "controller", ";", "private", "TextArea", "xmlArea", ";", "private", "HorizontalSplitPanel", "split", "=", "new", "HorizontalSplitPanel", "(", ")", ";", "private", "Window", "mainWindow", ";", "@", "Override", "public", "void", "init", "(", ")", "{", "setTheme", "(", "\"clara\"", ")", ";", "setMainWindow", "(", "mainWindow", "=", "new", "Window", "(", ")", ")", ";", "controller", "=", "new", "DemoController", "(", "mainWindow", ")", ";", "mainWindow", ".", "setContent", "(", 
"split", ")", ";", "VerticalLayout", "editor", "=", "new", "VerticalLayout", "(", ")", ";", "editor", ".", "setSpacing", "(", "true", ")", ";", "editor", ".", "setMargin", "(", "false", ",", "false", ",", "false", ",", "true", ")", ";", "editor", ".", "setHeight", "(", "\"100%\"", ")", ";", "editor", ".", "addComponent", "(", "xmlArea", "=", "createXmlArea", "(", ")", ")", ";", "editor", ".", "setExpandRatio", "(", "xmlArea", ",", "1.0f", ")", ";", "editor", ".", "addComponent", "(", "createUpdateButton", "(", ")", ")", ";", "HorizontalLayout", "wrapper", "=", "new", "HorizontalLayout", "(", ")", ";", "wrapper", ".", "setMargin", "(", "true", ")", ";", "wrapper", ".", "setSizeFull", "(", ")", ";", "wrapper", ".", "addComponent", "(", "createLogo", "(", ")", ")", ";", "wrapper", ".", "addComponent", "(", "editor", ")", ";", "wrapper", ".", "setExpandRatio", "(", "editor", ",", "1.0f", ")", ";", "split", ".", "setFirstComponent", "(", "wrapper", ")", ";", "updateLayout", "(", ")", ";", "}", "private", "Component", "createLogo", "(", ")", "{", "Embedded", "logo", "=", "new", "Embedded", "(", "null", ",", "new", "ThemeResource", "(", "\"\"", ")", ")", ";", "logo", ".", "setHeight", "(", "\"90px\"", ")", ";", "logo", ".", "setWidth", "(", "\"90px\"", ")", ";", "return", "logo", ";", "}", "private", "TextArea", "createXmlArea", "(", ")", "{", "TextArea", "area", "=", "new", "TextArea", "(", ")", ";", "area", ".", "setStyleName", "(", "\"xml-area\"", ")", ";", "area", ".", "setSizeFull", "(", ")", ";", "area", ".", "setValue", "(", "readStartingPoint", "(", ")", ")", ";", "return", "area", ";", "}", "private", "Button", "createUpdateButton", "(", ")", "{", "return", "new", "Button", "(", "\"Update\"", ",", "new", "Button", ".", "ClickListener", "(", ")", "{", "public", "void", "buttonClick", "(", "ClickEvent", "event", ")", "{", "updateLayout", "(", ")", ";", "}", "}", ")", ";", "}", "private", "String", "readStartingPoint", "(", ")", "{", "BufferedReader", "reader", "=", 
"null", ";", "try", "{", "reader", "=", "new", "BufferedReader", "(", "new", "InputStreamReader", "(", "getClass", "(", ")", ".", "getClassLoader", "(", ")", ".", "getResourceAsStream", "(", "\"\"", ")", ")", ")", ";", "StringBuilder", "xml", "=", "new", "StringBuilder", "(", ")", ";", "String", "line", ";", "while", "(", "(", "line", "=", "reader", ".", "readLine", "(", ")", ")", "!=", "null", ")", "{", "xml", ".", "append", "(", "line", ")", ";", "xml", ".", "append", "(", "\"n\"", ")", ";", "}", "return", "xml", ".", "toString", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "finally", "{", "if", "(", "reader", "!=", "null", ")", "{", "try", "{", "reader", ".", "close", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "}", "}", "return", "null", ";", "}", "private", "void", "updateLayout", "(", ")", "{", "try", "{", "Component", "c", "=", "Clara", ".", "create", "(", "new", "ByteArrayInputStream", "(", "xmlArea", ".", "getValue", "(", ")", ".", "toString", "(", ")", ".", "getBytes", "(", ")", ")", ",", "controller", ")", ";", "split", ".", "replaceComponent", "(", "split", ".", "getSecondComponent", "(", ")", ",", "c", ")", ";", "}", "catch", "(", "LayoutInflaterException", "e", ")", "{", "mainWindow", ".", "showNotification", "(", "e", ".", "getMessage", "(", ")", ",", "Notification", ".", "TYPE_ERROR_MESSAGE", ")", ";", "}", "}", "}", "</s>"], "id": 0 } ``` #### python An example of 'train' looks as follows. 
``` { "code": ["<s>", "from", "bootstrap", "import", "Bootstrap", "<EOL>", "from", "fund", "import", "InstantPaymentNotificationHandler", "<EOL>", "from", "fund", "import", "ThankYouHandler", "<EOL>", "from", "view", "import", "*", "<EOL>", "mapping", "=", "[", "(", "<EOL>", "r'/'", ",", "<EOL>", "Index", "<EOL>", ")", ",", "(", "<EOL>", "r'/ipn'", ",", "<EOL>", "InstantPaymentNotificationHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/thank-you'", ",", "<EOL>", "ThankYouHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/about\\/?'", ",", "<EOL>", "About", "<EOL>", ")", ",", "(", "<EOL>", "r'/guide\\/?'", ",", "<EOL>", "Guide", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Download", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Standards", "<EOL>", ")", ",", "(", "<EOL>", "r'/community\\/?'", ",", "<EOL>", "Community", "<EOL>", ")", ",", "(", "<EOL>", "r'/news\\/?'", ",", "<EOL>", "News", "<EOL>", ")", ",", "(", "<EOL>", "r'/support\\/?'", ",", "<EOL>", "Support", "<EOL>", ")", ",", "(", "<EOL>", "r'/contact\\/?'", ",", "<EOL>", "Contact", "<EOL>", ")", ",", "(", "<EOL>", "r'/press\\/?'", ",", "<EOL>", "Press", "<EOL>", ")", ",", "(", "<EOL>", "r'/legal/terms'", ",", "<EOL>", "Terms", "<EOL>", ")", ",", "(", "<EOL>", "r'/library\\/?'", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Users", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "User", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectSuccess", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", 
"RedirectError", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectAfterDelete", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Moderate", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Bootstrap", "<EOL>", ")", ",", "(", "<EOL>", "r'/activity'", ",", "<EOL>", "ActivityScreen", "<EOL>", ")", ",", "(", "<EOL>", "r'/txns'", ",", "<EOL>", "TxnList", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "MessageStrings", "<EOL>", ")", ",", "(", "<EOL>", "r'/.*'", ",", "<EOL>", "NotFound", "<EOL>", ")", "<EOL>", "]", "</s>"], "id": 0, "path": "00/wikihouse/urls.py\n" } ``` ### Data Fields In the following, each data field is explained for each config. The data fields are the same among all splits. #### java |field name| type | description | |----------|----------------|--------------------| |id |int32 | Index of the sample| |code |Sequence[string]| Code Tokens | #### python |field name| type | description | |----------|----------------|-----------------------------| |id |int32 | Index of the sample | |path |string | Original path in the dataset| |code |Sequence[string]| Code Tokens | ### Data Splits #### java | |train|validation|test| |----|----:|---------:|---:| |java|12934| 7189|8268| #### python | |train |test | |------|-----:|----:| |python|100000|50000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{raychev2016probabilistic, title={Probabilistic Model for Code with Decision Trees}, author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin}, journal={ACM SIGPLAN Notices}, pages={731--747}, year={2016}, publisher={ACM New York, NY, USA} } @inproceedings{allamanis2013mining, title={Mining Source Code Repositories at Massive Scale using Language Modeling}, author={Allamanis, Miltiadis and Sutton, Charles}, booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)}, pages={207--216}, year={2013}, organization={IEEE} } ``` The data for "java" configuration comes from: ``` @dataset{rafael_michael_karampatsis_2020_3628665, author = {Rafael - Michael Karampatsis and Hlib Babii and Romain Robbes and Charles Sutton and Andrea Janes}, title = {Preprocessed Java Code Corpus}, month = jan, year = 2020, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3628665}, url = {https://doi.org/10.5281/zenodo.3628665} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
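The `code` field stores each source file as a flat token sequence bracketed by `<s>`/`</s>`, with `<EOL>` marking line breaks, as in the instance shown above. A minimal sketch of turning such a sequence back into readable source (the detokenizer itself is an assumption, not something shipped with the dataset, and joining tokens with single spaces loses the original whitespace):

```python
def detokenize(tokens):
    """Rebuild approximate source text from the dataset's token sequence."""
    # Drop the sentence boundary markers, then turn <EOL> into real newlines.
    body = [t for t in tokens if t not in ("<s>", "</s>")]
    lines, current = [], []
    for tok in body:
        if tok == "<EOL>":
            lines.append(" ".join(current))
            current = []
        else:
            current.append(tok)
    if current:  # flush the last line if the sequence ends without <EOL>
        lines.append(" ".join(current))
    return "\n".join(lines)

# A two-line excerpt of the instance above:
sample = ["<s>", "from", "bootstrap", "import", "Bootstrap", "<EOL>",
          "from", "view", "import", "*", "</s>"]
print(detokenize(sample))
```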
14,597
[ [ -0.0379638671875, -0.040130615234375, 0.023193359375, 0.032470703125, -0.008209228515625, 0.036163330078125, 0.007144927978515625, -0.01474761962890625, 0.035003662109375, 0.0275115966796875, -0.053070068359375, -0.0562744140625, -0.0318603515625, -0.0079269...
med_hop
2022-11-03T16:16:32.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "multi-hop", "arxiv:1710.06481"...
null
MedHop is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins.
@misc{welbl2018constructing, title={Constructing Datasets for Multi-hop Reading Comprehension Across Documents}, author={Johannes Welbl and Pontus Stenetorp and Sebastian Riedel}, year={2018}, eprint={1710.06481}, archivePrefix={arXiv}, primaryClass={cs.CL} }
2
133
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: medhop pretty_name: MedHop tags: - multi-hop dataset_info: - config_name: original features: - name: id dtype: string - name: query dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string splits: - name: train num_bytes: 93937322 num_examples: 1620 - name: validation num_bytes: 16461640 num_examples: 342 download_size: 339843061 dataset_size: 110398962 - config_name: masked features: - name: id dtype: string - name: question dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string splits: - name: train num_bytes: 95813584 num_examples: 1620 - name: validation num_bytes: 16800570 num_examples: 342 download_size: 339843061 dataset_size: 112614154 --- # Dataset Card for MedHop ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing 
Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [QAngaroo](http://qangaroo.cs.ucl.ac.uk/) - **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]() - **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481) - **Leaderboard:** [leaderboard](http://qangaroo.cs.ucl.ac.uk/leaderboard.html) - **Point of Contact:** [Johannes Welbl](j.welbl@cs.ucl.ac.uk) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
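Each MedHop instance pairs a `query` and gold `answer` with a list of `candidates` and the `supports` documents from which the answer must be inferred (see the `dataset_info` features above). As a hedged illustration of the format — a naive baseline, not a method from the paper — one can pick the candidate mentioned most often across the support documents:

```python
def frequency_baseline(candidates, supports):
    """Return the candidate string occurring most often in the supports."""
    text = " ".join(supports).lower()
    return max(candidates, key=lambda c: text.count(c.lower()))

# Toy instance (invented for illustration; not taken from the dataset):
candidates = ["aspirin", "warfarin"]
supports = [
    "Drug A increases plasma levels of warfarin.",
    "Warfarin interacts with the target protein of drug A.",
]
print(frequency_baseline(candidates, supports))
```

Multi-hop instances are built so that surface-frequency shortcuts like this one fail; the chain of drug–protein interactions in `supports` has to be followed to reach the correct answer.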
3,980
[ [ -0.0302276611328125, -0.032989501953125, 0.0146484375, 0.01336669921875, -0.01184844970703125, 0.016387939453125, -0.0091094970703125, -0.0293731689453125, 0.039337158203125, 0.04901123046875, -0.07269287109375, -0.06353759765625, -0.04180908203125, 0.020584...
style_change_detection
2023-04-05T13:41:00.000Z
[ "region:us" ]
null
The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Detecting these positions is crucial for the authorship identification process and for multi-author document analysis in general. Access to the dataset needs to be requested from Zenodo.
@inproceedings{bevendorff2020shared, title={Shared Tasks on Authorship Analysis at PAN 2020}, author={Bevendorff, Janek and Ghanem, Bilal and Giachanou, Anastasia and Kestemont, Mike and Manjavacas, Enrique and Potthast, Martin and Rangel, Francisco and Rosso, Paolo and Specht, G{\"u}nther and Stamatatos, Efstathios and others}, booktitle={European Conference on Information Retrieval}, pages={508--516}, year={2020}, organization={Springer} }
0
133
2022-03-02T23:29:22
--- paperswithcode_id: null pretty_name: StyleChangeDetection dataset_info: - config_name: narrow features: - name: id dtype: string - name: text dtype: string - name: authors dtype: int32 - name: structure sequence: string - name: site dtype: string - name: multi-author dtype: bool - name: changes sequence: bool splits: - name: train num_bytes: 40499150 num_examples: 3418 - name: validation num_bytes: 20447137 num_examples: 1713 download_size: 0 dataset_size: 60946287 - config_name: wide features: - name: id dtype: string - name: text dtype: string - name: authors dtype: int32 - name: structure sequence: string - name: site dtype: string - name: multi-author dtype: bool - name: changes sequence: bool splits: - name: train num_bytes: 97403392 num_examples: 8030 - name: validation num_bytes: 48850089 num_examples: 4019 download_size: 0 dataset_size: 146253481 --- # Dataset Card for "style_change_detection" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
[https://pan.webis.de/clef20/pan20-web/style-change-detection.html](https://pan.webis.de/clef20/pan20-web/style-change-detection.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 207.20 MB - **Total amount of disk used:** 207.20 MB ### Dataset Summary The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Detecting these positions is crucial for the authorship identification process and for multi-author document analysis in general. Access to the dataset needs to be requested from Zenodo. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### narrow - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 60.94 MB - **Total amount of disk used:** 60.94 MB An example of 'validation' looks as follows.
``` { "authors": 2, "changes": [false, false, true, false], "id": "2", "multi-author": true, "site": "exampleSite", "structure": ["A1", "A2"], "text": "This is text from example problem 2.\n" } ``` #### wide - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 146.26 MB - **Total amount of disk used:** 146.26 MB An example of 'train' looks as follows. ``` { "authors": 2, "changes": [false, false, true, false], "id": "2", "multi-author": true, "site": "exampleSite", "structure": ["A1", "A2"], "text": "This is text from example problem 2.\n" } ``` ### Data Fields The data fields are the same among all splits. #### narrow - `id`: a `string` feature. - `text`: a `string` feature. - `authors`: a `int32` feature. - `structure`: a `list` of `string` features. - `site`: a `string` feature. - `multi-author`: a `bool` feature. - `changes`: a `list` of `bool` features. #### wide - `id`: a `string` feature. - `text`: a `string` feature. - `authors`: a `int32` feature. - `structure`: a `list` of `string` features. - `site`: a `string` feature. - `multi-author`: a `bool` feature. - `changes`: a `list` of `bool` features. ### Data Splits | name |train|validation| |------|----:|---------:| |narrow| 3418| 1713| |wide | 8030| 4019| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{bevendorff2020shared, title={Shared Tasks on Authorship Analysis at PAN 2020}, author={Bevendorff, Janek and Ghanem, Bilal and Giachanou, Anastasia and Kestemont, Mike and Manjavacas, Enrique and Potthast, Martin and Rangel, Francisco and Rosso, Paolo and Specht, G{"u}nther and Stamatatos, Efstathios and others}, booktitle={European Conference on Information Retrieval}, pages={508--516}, year={2020}, organization={Springer} } ``` ### 
Contributions Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
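In both configs the `changes` field is a sequence of booleans, each flagging whether an author switch occurs at the corresponding position (the exact positional semantics are defined by the PAN task, so treat the indexing here as an assumption). A small sketch relating `changes` to the `multi-author` flag:

```python
def count_switches(changes):
    """Count author-switch positions; True entries mark a switch."""
    return sum(changes)

# The example instance shown above: two authors, one switch position.
instance = {
    "authors": 2,
    "changes": [False, False, True, False],
    "multi-author": True,
}
assert count_switches(instance["changes"]) == 1
# A document with at least one switch necessarily has more than one author.
assert instance["multi-author"] == (count_switches(instance["changes"]) > 0)
```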
7,812
[ [ -0.0411376953125, -0.035308837890625, 0.0196533203125, 0.01445770263671875, -0.01210784912109375, -0.003078460693359375, -0.027740478515625, -0.03314208984375, 0.04345703125, 0.03582763671875, -0.059234619140625, -0.06494140625, -0.04803466796875, 0.01930236...
thai_toxicity_tweet
2023-01-25T14:45:38.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:th", "license:cc-by-nc-3.0", "region:us" ]
null
Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans with guidelines including a 44-word dictionary. The authors obtained 2,027 toxic and 1,273 non-toxic tweets, labeled by three annotators. The corpus analysis indicates that tweets that include toxic words are not always toxic. Further, a tweet is more likely to be toxic if it contains toxic words used in their original meaning. Moreover, annotation disagreements are primarily due to sarcasm, unclear targets, and word-sense ambiguity. Notes from data cleaner: The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`. Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
@article{sirihattasak2019annotation, title={Annotation and Classification of Toxicity for Thai Twitter}, author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi}, year={2019} }
2
133
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - th license: - cc-by-nc-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: ThaiToxicityTweet dataset_info: features: - name: tweet_id dtype: string - name: tweet_text dtype: string - name: toxic_votes dtype: int32 - name: nontoxic_votes dtype: int32 - name: is_toxic dtype: class_label: names: '0': neg '1': pos config_name: thai_toxicity_tweet splits: - name: train num_bytes: 637387 num_examples: 3300 download_size: 194740 dataset_size: 637387 --- # Dataset Card for `thai_toxicity_tweet` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/ - **Repository:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/ - **Paper:** https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf - **Leaderboard:** - **Point of Contact:** 
https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf ### Dataset Summary Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary. The authors obtained 2,027 toxic and 1,273 non-toxic tweets, labeled by three annotators. The corpus analysis indicates that tweets that include toxic words are not always toxic. Further, a tweet is more likely to be toxic if it contains toxic words used in their original meaning. Moreover, annotation disagreements are primarily due to sarcasm, unclear targets, and word-sense ambiguity. Notes from data cleaner: The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`. Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1). ### Supported Tasks and Leaderboards text classification ### Languages Thai (`th`) ## Dataset Structure ### Data Instances ``` {'is_toxic': 0, 'nontoxic_votes': 3, 'toxic_votes': 0, 'tweet_id': '898576382384418817', 'tweet_text': 'วันๆ นี่คุยกะหมา แมว หมู ไก่ ม้า ควาย มากกว่าคุยกับคนไปละ'} {'is_toxic': 1, 'nontoxic_votes': 0, 'toxic_votes': 3, 'tweet_id': '898573084981985280', 'tweet_text': 'ควายแดงเมิงด่ารัฐบาลจนรองนายกป่วย พวกมึงกำลังทำลายชาติรู้มั้ย มั้ย มั้ย มั้ยยยยยยยยย news.voicetv.co.th/thailand/51672…'} ``` ### Data Fields "tweet_id": ID of the tweet on Twitter "tweet_text": text of the tweet "toxic_votes": how many of the 3 annotators say it is toxic "nontoxic_votes": how many of the 3 annotators say it is NOT toxic "is_toxic": 1 if the tweet is toxic else 0 (majority rules) ### Data Splits No explicit split is given.
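The `is_toxic` label follows majority rule over the three annotators' votes, so it can be recovered from `toxic_votes` and `nontoxic_votes`; a minimal sketch (the helper name is ours, not part of the dataset):

```python
def majority_label(toxic_votes, nontoxic_votes):
    """1 if most of the 3 annotators marked the tweet toxic, else 0."""
    return 1 if toxic_votes > nontoxic_votes else 0

# Matches the two instances shown above.
assert majority_label(0, 3) == 0  # first instance: is_toxic = 0
assert majority_label(3, 0) == 1  # second instance: is_toxic = 1
```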
## Dataset Creation ### Curation Rationale The dataset is created as part of [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf). ### Source Data #### Initial Data Collection and Normalization The authors used the public Twitter Search API to collect 9,819 tweets from January–December 2017 based on their keyword dictionary. Then, they selected 75 tweets for each keyword. In total, they collected 3,300 tweets for annotation. To ensure the quality of the data, they set the following selection criteria. 1. All tweets are selected by humans to prevent word ambiguity. (The Twitter API selected the tweets based on characters in the keyword. For example, in the case of “บ้า (crazy),” the API will also select “บ้านนอก” (countryside), which is not their target.) 2. The length of the tweet should be sufficiently long to discern the context of the tweet. Hence, they set five words as the minimum limit. 3. Tweets that contain only extremely toxic words (for example: “damn, retard, bitch, f*ck, slut!!!”) are not considered. 4. In addition, they allowed tweets with English words if those were not critical elements in the labeling decision, for example, the word “f*ck.” As a result, the corpus contains English words, but they make up less than 2% of the total. All hashtags, re-tweets, and links were removed from these tweets. However, the authors did not delete emoticons because these emotional icons can imply the real intent of the post owners. Furthermore, for annotation only, some entries, such as the names of famous people, were replaced with the tag <ไม่ขอเปิดเผยชื่อ> for anonymity, to prevent individual bias. #### Who are the source language producers? Twitter users in Thailand ### Annotations #### Annotation process The authors manually annotated the dataset with two labels: Toxic and Non-Toxic. They define a message as toxic if it indicates any harmful, damaging, or negative intent based on their definition of toxicity.
Furthermore, all the tweets were annotated by three annotators to identify toxicity; the conditions used for this identification are presented in the following list. - A toxic message is a message that should be deleted or not be allowed in public. - A message's target or consequence must exist. It can be an individual, a group generalized by a commonality such as religion or ethnicity, or an entire community. - Self-complaint is not considered toxic, because it is not harmful to anyone. However, if a self-complaint is intended to indicate something bad, it is considered toxic. - Both direct and indirect messages, including those with sarcasm, are taken into consideration. The authors strictly instructed all the annotators about these concepts and asked them to perform a small test to ensure they understood these conditions. The annotation process was divided into two rounds. Candidates first annotated a trial dataset to learn the annotation standard; they then annotated a different dataset, and those who obtained a full score in this second round were selected as annotators. 20% of the candidates failed the first round and were not involved in the final annotation. #### Who are the annotators? Three annotators hired by [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf) ### Personal and Sensitive Information Although all tweets are public, given the nature of toxic tweets they may contain personal attacks and toxic language. ## Considerations for Using the Data ### Social Impact of Dataset - toxic social media message classification dataset ### Discussion of Biases - Users are masked before annotation by the annotators to prevent biases based on tweet authors ### Other Known Limitations - The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`. ## Additional Information ### Dataset Curators [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf) ### Licensing Information CC-BY-NC 3.0 ### Citation Information Please cite the following if you make use of the dataset: ``` @article{sirihattasak2019annotation, title={Annotation and Classification of Toxicity for Thai Twitter}, author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi}, year={2019} } ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
8,620
[ [ -0.00262451171875, -0.041778564453125, 0.02655029296875, 0.041534423828125, -0.03607177734375, 0.00749969482421875, -0.00909423828125, -0.03863525390625, 0.039398193359375, 0.02459716796875, -0.0293731689453125, -0.077880859375, -0.055419921875, 0.0257415771...
SetFit/toxic_conversations
2022-02-11T13:45:54.000Z
[ "region:us" ]
SetFit
null
null
4
133
2022-03-02T23:29:22
# Toxic Conversation This is a version of the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview). It contains comments from the Civil Comments platform together with annotations of whether each comment is toxic or not. Each example was annotated by 10 annotators and, as recommended on the task page, a comment is labeled toxic when target >= 0.5. The dataset is imbalanced, with only about 8% of the comments marked as toxic.
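The target >= 0.5 rule binarizes the continuous Jigsaw toxicity score into the label used here; a minimal sketch (the function name is ours, not from the competition):

```python
def to_binary_label(target):
    """Binarize the continuous Jigsaw 'target' score: toxic iff target >= 0.5."""
    return int(target >= 0.5)

assert to_binary_label(0.8) == 1
assert to_binary_label(0.5) == 1  # the boundary value counts as toxic
assert to_binary_label(0.2) == 0
```

Given the roughly 8% positive rate, any model trained on this label should account for class imbalance (e.g. via class weighting or resampling).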
507
[ [ -0.02130126953125, -0.038787841796875, 0.0283050537109375, 0.0216217041015625, -0.032684326171875, 0.0264434814453125, 0.0167083740234375, -0.0219268798828125, 0.026519775390625, 0.051849365234375, -0.056732177734375, -0.0335693359375, -0.052520751953125, -0...
M-CLIP/ImageCaptions-7M-Translations
2022-05-16T21:03:28.000Z
[ "region:us" ]
M-CLIP
null
null
2
133
2022-05-16T21:02:40
1,183
[ [ -0.03924560546875, -0.057159423828125, 0.0418701171875, 0.0198211669921875, -0.036865234375, 0.005939483642578125, 0.0142364501953125, -0.01470947265625, 0.06195068359375, 0.0513916015625, -0.08050537109375, -0.057830810546875, -0.03521728515625, 0.037719726...
hadiqa123/en_timit_asr
2022-09-20T15:52:36.000Z
[ "region:us" ]
hadiqa123
null
null
0
133
2022-09-16T21:12:57
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
bigbio/mirna
2022-12-22T15:45:38.000Z
[ "multilinguality:monolingual", "language:en", "license:cc-by-nc-3.0", "region:us" ]
bigbio
The corpus consists of 301 Medline citations. The documents were screened for mentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually annotated. The corpus comprises two separate files, a train set and a test set, drawn from 201 and 100 documents, respectively.
@Article{Bagewadi2014, author={Bagewadi, Shweta and Bobi{\'{c}}, Tamara and Hofmann-Apitius, Martin and Fluck, Juliane and Klinger, Roman}, title={Detecting miRNA Mentions and Relations in Biomedical Literature}, journal={F1000Research}, year={2014}, month={Aug}, day={28}, publisher={F1000Research}, volume={3}, pages={205-205}, keywords={MicroRNAs; corpus; prediction algorithms}, abstract={ INTRODUCTION: MicroRNAs (miRNAs) have demonstrated their potential as post-transcriptional gene expression regulators, participating in a wide spectrum of regulatory events such as apoptosis, differentiation, and stress response. Apart from the role of miRNAs in normal physiology, their dysregulation is implicated in a vast array of diseases. Dissection of miRNA-related associations are valuable for contemplating their mechanism in diseases, leading to the discovery of novel miRNAs for disease prognosis, diagnosis, and therapy. MOTIVATION: Apart from databases and prediction tools, miRNA-related information is largely available as unstructured text. Manual retrieval of these associations can be labor-intensive due to steadily growing number of publications. Additionally, most of the published miRNA entity recognition methods are keyword based, further subjected to manual inspection for retrieval of relations. Despite the fact that several databases host miRNA-associations derived from text, lower sensitivity and lack of published details for miRNA entity recognition and associated relations identification has motivated the need for developing comprehensive methods that are freely available for the scientific community. Additionally, the lack of a standard corpus for miRNA-relations has caused difficulty in evaluating the available systems. We propose methods to automatically extract mentions of miRNAs, species, genes/proteins, disease, and relations from scientific literature. 
Our generated corpora, along with dictionaries, and miRNA regular expression are freely available for academic purposes. To our knowledge, these resources are the most comprehensive developed so far. RESULTS: The identification of specific miRNA mentions reaches a recall of 0.94 and precision of 0.93. Extraction of miRNA-disease and miRNA-gene relations lead to an F1 score of up to 0.76. A comparison of the information extracted by our approach to the databases miR2Disease and miRSel for the extraction of Alzheimer's disease related relations shows the capability of our proposed methods in identifying correct relations with improved sensitivity. The published resources and described methods can help the researchers for maximal retrieval of miRNA-relations and generation of miRNA-regulatory networks. AVAILABILITY: The training and test corpora, annotation guidelines, developed dictionaries, and supplementary files are available at http://www.scai.fraunhofer.de/mirna-corpora.html. }, note={26535109[pmid]}, note={PMC4602280[pmcid]}, issn={2046-1402}, url={https://pubmed.ncbi.nlm.nih.gov/26535109}, language={eng} }
1
133
2022-11-13T22:10:00
--- language: - en bigbio_language: - English license: cc-by-nc-3.0 multilinguality: monolingual bigbio_license_shortname: CC_BY_NC_3p0 pretty_name: miRNA homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - NAMED_ENTITY_DISAMBIGUATION --- # Dataset Card for miRNA ## Dataset Description - **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html - **Pubmed:** True - **Public:** True - **Tasks:** NER,NED The corpus consists of 301 Medline citations. The documents were screened for mentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually annotated. The corpus comprises two separate files, a train set and a test set, drawn from 201 and 100 documents, respectively. ## Citation Information ``` @Article{Bagewadi2014, author={Bagewadi, Shweta and Bobi{\'{c}}, Tamara and Hofmann-Apitius, Martin and Fluck, Juliane and Klinger, Roman}, title={Detecting miRNA Mentions and Relations in Biomedical Literature}, journal={F1000Research}, year={2014}, month={Aug}, day={28}, publisher={F1000Research}, volume={3}, pages={205-205}, keywords={MicroRNAs; corpus; prediction algorithms}, abstract={ INTRODUCTION: MicroRNAs (miRNAs) have demonstrated their potential as post-transcriptional gene expression regulators, participating in a wide spectrum of regulatory events such as apoptosis, differentiation, and stress response. Apart from the role of miRNAs in normal physiology, their dysregulation is implicated in a vast array of diseases. Dissection of miRNA-related associations are valuable for contemplating their mechanism in diseases, leading to the discovery of novel miRNAs for disease prognosis, diagnosis, and therapy. 
MOTIVATION: Apart from databases and prediction tools, miRNA-related information is largely available as unstructured text. Manual retrieval of these associations can be labor-intensive due to steadily growing number of publications. Additionally, most of the published miRNA entity recognition methods are keyword based, further subjected to manual inspection for retrieval of relations. Despite the fact that several databases host miRNA-associations derived from text, lower sensitivity and lack of published details for miRNA entity recognition and associated relations identification has motivated the need for developing comprehensive methods that are freely available for the scientific community. Additionally, the lack of a standard corpus for miRNA-relations has caused difficulty in evaluating the available systems. We propose methods to automatically extract mentions of miRNAs, species, genes/proteins, disease, and relations from scientific literature. Our generated corpora, along with dictionaries, and miRNA regular expression are freely available for academic purposes. To our knowledge, these resources are the most comprehensive developed so far. RESULTS: The identification of specific miRNA mentions reaches a recall of 0.94 and precision of 0.93. Extraction of miRNA-disease and miRNA-gene relations lead to an F1 score of up to 0.76. A comparison of the information extracted by our approach to the databases miR2Disease and miRSel for the extraction of Alzheimer's disease related relations shows the capability of our proposed methods in identifying correct relations with improved sensitivity. The published resources and described methods can help the researchers for maximal retrieval of miRNA-relations and generation of miRNA-regulatory networks. AVAILABILITY: The training and test corpora, annotation guidelines, developed dictionaries, and supplementary files are available at http://www.scai.fraunhofer.de/mirna-corpora.html. 
}, note={26535109[pmid]}, note={PMC4602280[pmcid]}, issn={2046-1402}, url={https://pubmed.ncbi.nlm.nih.gov/26535109}, language={eng} } ```
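The per-mention scores reported in the abstract (recall 0.94, precision 0.93) imply a balanced F1 of roughly 0.935; a quick sanity check (the score values come from the abstract above, the function name is ours):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# miRNA mention identification figures reported in the abstract
f1 = f1_score(precision=0.93, recall=0.94)
print(round(f1, 3))  # 0.935
```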
4,113
[ [ -0.0302886962890625, -0.045654296875, 0.0400390625, -0.0003304481506347656, -0.030120849609375, -0.005832672119140625, -0.008880615234375, -0.039154052734375, 0.06671142578125, 0.0186614990234375, -0.023956298828125, -0.04541015625, -0.0457763671875, 0.02087...
bigbio/tmvar_v1
2022-12-22T15:47:01.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only. The dataset is split into a train set (334 documents) and a test set (166 documents).
@article{wei2013tmvar, title={tmVar: a text mining approach for extracting sequence variants in biomedical literature}, author={Wei, Chih-Hsuan and Harris, Bethany R and Kao, Hung-Yu and Lu, Zhiyong}, journal={Bioinformatics}, volume={29}, number={11}, pages={1433--1439}, year={2013}, publisher={Oxford University Press} }
0
133
2022-11-13T22:12:28
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: tmVar v1 homepage: https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/ bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION --- # Dataset Card for tmVar v1 ## Dataset Description - **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/ - **Pubmed:** True - **Public:** True - **Tasks:** NER This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only. The dataset is split into a train set (334 documents) and a test set (166 documents). ## Citation Information ``` @article{wei2013tmvar, title={tmVar: a text mining approach for extracting sequence variants in biomedical literature}, author={Wei, Chih-Hsuan and Harris, Bethany R and Kao, Hung-Yu and Lu, Zhiyong}, journal={Bioinformatics}, volume={29}, number={11}, pages={1433--1439}, year={2013}, publisher={Oxford University Press} } ```
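The split sizes quoted in the card can be checked for consistency against the 500-article total; a trivial sketch:

```python
# Document counts reported in the tmVar v1 card
n_train, n_test = 334, 166
n_total = n_train + n_test

assert n_total == 500  # matches the 500 PubMed articles
train_fraction = n_train / n_total
print(f"train fraction: {train_fraction:.1%}")  # 66.8%
```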
1,062
[ [ -0.00855255126953125, -0.026031494140625, 0.02813720703125, 0.002056121826171875, -0.03656005859375, -0.0027904510498046875, 0.01216888427734375, -0.00945281982421875, 0.0241241455078125, 0.0533447265625, -0.050933837890625, -0.07061767578125, -0.058319091796875...
vocabtrimmer/mc4_validation
2023-03-02T13:33:54.000Z
[ "region:us" ]
vocabtrimmer
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org". This is the processed version of Google's mC4 dataset by AllenAI.
@article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, }
0
133
2023-03-02T09:20:16
# MC4: only validation split This contains only the validation set of [mc4](https://huggingface.co/datasets/mc4), so that far fewer files need to be downloaded when just the validation split of the mc4 data is required.
196
[ [ -0.05645751953125, -0.011505126953125, 0.005451202392578125, 0.033050537109375, -0.026824951171875, 0.0283660888671875, 0.0289306640625, 0.0043487548828125, 0.02911376953125, 0.06988525390625, -0.08154296875, -0.034698486328125, -0.017669677734375, 0.0250701...
metaeval/race-c
2023-05-31T08:39:38.000Z
[ "task_categories:question-answering", "task_categories:multiple-choice", "language:en", "region:us" ]
metaeval
null
null
0
133
2023-04-06T07:49:42
--- task_categories: - question-answering - multiple-choice language: - en --- RACE-C: additional data in the same format as RACE (which covers middle- and high-school exams), but drawn from college-level English exams. https://github.com/mrcdata/race-c ```bib @InProceedings{pmlr-v101-liang19a, title={A New Multi-choice Reading Comprehension Dataset for Curriculum Learning}, author={Liang, Yichan and Li, Jianheng and Yin, Jian}, booktitle={Proceedings of The Eleventh Asian Conference on Machine Learning}, pages={742--757}, year={2019} } ```
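RACE-style items pair an article with a question, four options, and a gold answer letter; a minimal sketch of mapping that letter back to its option text (the field names follow the RACE convention and are an assumption for RACE-C, and the content values are invented):

```python
# A toy RACE-style example; field names assumed, content invented.
example = {
    "article": "Maria studied late every night before the exam...",
    "question": "Why did Maria stay up late?",
    "options": [
        "She was watching films.",
        "She was preparing for an exam.",
        "She was travelling.",
        "She was working a night shift.",
    ],
    "answer": "B",  # gold label as a letter, A-D
}

# Convert the letter label into an index into `options`
gold_index = ord(example["answer"]) - ord("A")
gold_text = example["options"][gold_index]
print(gold_text)  # She was preparing for an exam.
```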
499
[ [ -0.023895263671875, -0.023193359375, 0.0240631103515625, 0.005352020263671875, 0.0031375885009765625, 0.035736083984375, 0.006534576416015625, -0.023895263671875, 0.019317626953125, 0.0180511474609375, -0.052154541015625, -0.050048828125, -0.03369140625, 0.0...
distil-whisper/voxpopuli
2023-09-25T10:30:13.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc0-1.0", "region:us" ]
distil-whisper
A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
@inproceedings{wang-etal-2021-voxpopuli, title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation", author = "Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.80", doi = "10.18653/v1/2021.acl-long.80", pages = "993--1003", }
0
133
2023-04-07T17:10:56
--- license: cc0-1.0 task_categories: - automatic-speech-recognition language: - en pretty_name: VoxPopuli --- # Distil Whisper: VoxPopuli This is a variant of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/facebook/voxpopuli). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/voxpopuli", "en") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/voxpopuli", "en", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc0-1.0.
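A common way to gauge pseudo-label quality is the word error rate between the Whisper transcription and the original reference text; a minimal word-level edit-distance sketch (comparing a sample's reference against its pseudo-label is the intended use, but the exact column names in this dataset are not confirmed by the card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

# one substitution over six reference words
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```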
2,005
[ [ -0.0108184814453125, -0.05517578125, 0.00986480712890625, 0.0294189453125, -0.0099945068359375, 0.00368499755859375, -0.01397705078125, -0.009490966796875, 0.031768798828125, 0.0273590087890625, -0.059234619140625, -0.036407470703125, -0.040557861328125, 0.0...
distil-whisper/spgispeech
2023-09-25T10:28:52.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:other", "region:us" ]
distil-whisper
The SPGISpeech corpus is derived from company earnings calls manually transcribed by S&P Global, Inc. according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The basic unit of SPGISpeech is a pair consisting of a 5 to 15 second long 16 bit, 16kHz mono wav audio file and its transcription.
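Given the stated format (5 to 15 second clips of 16-bit, 16 kHz mono audio), the raw size of each clip is easy to bound; a back-of-envelope sketch (the 44-byte figure is the canonical PCM WAV header size, an assumption rather than something stated here):

```python
SAMPLE_RATE = 16_000   # Hz, per the corpus description
BYTES_PER_SAMPLE = 2   # 16-bit samples
CHANNELS = 1           # mono
WAV_HEADER_BYTES = 44  # canonical PCM WAV header size (assumed)

def wav_size_bytes(seconds: float) -> int:
    """Approximate on-disk size of a PCM WAV clip of the given duration."""
    return WAV_HEADER_BYTES + int(seconds * SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS)

print(wav_size_bytes(5))   # shortest clip, ~160 kB
print(wav_size_bytes(15))  # longest clip, ~480 kB
```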
@ARTICLE{2021arXiv210402014O, author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar}, Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai and {Kuchaiev}, Oleksii and {Balam}, Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D. and {Ginsburg}, Boris and {Watanabe}, Shinji and {Kucsko}, Georg}, title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing}, year = 2021, month = apr, eid = {arXiv:2104.02014}, pages = {arXiv:2104.02014}, archivePrefix = {arXiv}, eprint = {2104.02014}, primaryClass = {cs.CL}, adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} }
0
133
2023-04-07T21:11:05
--- license: other task_categories: - automatic-speech-recognition language: - en extra_gated_prompt: |- Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them. If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT. Section 1 – THE CONTENT 1.1 The Content is provided for academic research purposes and internal use only and must not be used to: assemble or create a database; construct or facilitate the construction of products which compete with the Content; identify or attempt to identify or contact any individual; or link to another dataset. The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content. 1.2 Disclaimer. 
Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content. The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content. 1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. 
You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho. 1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction. 1.5 Registration Data. 
In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete. 1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice. Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY 2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. 
KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES. 2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY. 2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law. Section 3 - PRIVACY 3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). 
For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.). 3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below). 3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. 
Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes. 3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content. 3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information. 3.6 Encryption. 
Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent. 3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@kensho.com or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041. Section 4 - MISCELLANEOUS 4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof. 4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party. 4.3 Governing Law; Forum. 
These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York. 4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE. 4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail. extra_gated_fields: Full name: text Email: text Institution: text I accept the Terms of Usage: checkbox --- # Distil Whisper: SPGISpeech This is a variant of the [SPGISpeech](https://huggingface.co/datasets/kensho/spgispeech) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/kensho/spgispeech). 
## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/spgispeech", "L") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/spgispeech", "L", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/kensho/spgispeech).
17,467
[ [ -0.0167694091796875, -0.044921875, 0.01265716552734375, 0.0343017578125, -0.0176544189453125, 0.002315521240234375, -0.0201873779296875, -0.01165008544921875, 0.042205810546875, 0.0276336669921875, -0.06463623046875, -0.032501220703125, -0.055572509765625, 0...
hltcoe/megawika
2023-10-03T17:24:24.000Z
[ "task_categories:summarization", "task_categories:question-answering", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:10M<n<100M", "language:af", "language:ar", "language:az", "language:bn", "language:cs", "language:de", "language:en", "language:e...
hltcoe
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span 50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the passages, and FrameNet events occurring in the passages are detected using the LOME FrameNet parser.
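The nested instance structure documented in the card below lends itself to plain dict traversal; a toy instance using the documented field names (the content values here are invented for illustration):

```python
# A toy MegaWika-shaped instance; field names follow the card's schema,
# content values are invented for illustration only.
instance = {
    "article_title": "Example article",
    "article_text": "...",
    "entries": [
        {
            "id": "passage-0",
            "source_url": "https://example.com/source",
            "qa_pairs": [
                {"question": "Who wrote the report?", "en_answer": "The committee"},
                {"question": "When was it published?", "en_answer": "2001"},
            ],
        }
    ],
}

# Collect all English question/answer pairs across the instance's entries
qa = [
    (pair["question"], pair["en_answer"])
    for entry in instance["entries"]
    for pair in entry["qa_pairs"]
]
print(len(qa))  # 2
```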
@article{barham2023megawika, title={MegaWika: Millions of reports and their sources across 50 diverse languages}, author={Barham, Samuel and Weller, Orion and Yuan, Michelle and Murray, Kenton and Yarmohammadi, Mahsa and Jiang, Zhengping and Vashishtha, Siddharth and Martin, Alexander and Liu, Anqi and White, Aaron Steven and Boyd-Graber, Jordan and Van Durme, Benjamin }, journal={INSERT ARXIV PREPRINT ID HERE}, year={2023} }
22
133
2023-05-17T02:07:50
--- license: cc-by-sa-4.0 task_categories: - summarization - question-answering - text-generation - text2text-generation language: - af - ar - az - bn - cs - de - en - es - et - fa - fi - fr - ga - gl - gu - he - hi - hr - id - it - ja - ka - kk - km - ko - lt - lv - mk - ml - mn - mr - my - ne - nl - pl - ps - pt - ro - ru - si - sl - sv - ta - th - tr - uk - ur - vi - xh - zh pretty_name: MegaWika size_categories: - 10M<n<100M --- # Dataset Card for MegaWika ## Dataset Description - **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika) - **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika) - **Paper:** [Coming soon] - **Leaderboard:** [Coming soon] - **Point of Contact:** [Samuel Barham](samuel.barham@jhuapl.edu) ### Dataset Summary MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span 50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the passages, and FrameNet events occurring in the passages are detected using the [LOME](https://aclanthology.org/2021.eacl-demos.19.pdf) FrameNet parser. <!--- To get a feel for the dataset -- its structure, content, strengths and weaknesses -- you may visit the [dataset viewer](https://huggingface.co/spaces/hltcoe/megawika) we have set up as a HuggingFace Space. It allows the curious visitor to explore a small set of examples spread across a number of the dataset's constituent languages. --> ### Dataset Creation The pipeline through which MegaWika was created is complex, and is described in more detail in the paper (linked above), but the following diagram illustrates the basic approach. 
![Illustration of MegaWikaProcess](images/MegaWikaProcess-cross-lingual.drawio.png) ### Supported Tasks and Leaderboards MegaWika is meant to support research across a variety of tasks, including report generation, summarization, information retrieval, question answering, etc. ### Languages MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by its 2-character ISO language code: - `af`: Afrikaans - `ar`: Arabic - `az`: Azeri (Azerbaijani) - `bn`: Bengali - `cs`: Czech - `de`: German (Deutsch) - `en`: English - `es`: Spanish (Español) - `et`: Estonian - `fa`: Farsi (Persian) - `fi`: Finnish - `fr`: French - `ga`: Irish (Gaelic) - `gl`: Galician - `gu`: Gujarati - `he`: Hebrew - `hi`: Hindi - `hr`: Croatian (Hrvatski) - `id`: Indonesian - `it`: Italian - `ja`: Japanese - `ka`: Georgian (Kartvelian/Kartlian) - `kk`: Kazakh - `km`: Khmer - `ko`: Korean - `lt`: Lithuanian - `lv`: Latvian - `mk`: Macedonian (Makedonski) - `ml`: Malayalam - `mn`: Mongolian - `mr`: Marathi - `my`: Burmese (Myanmar language) - `ne`: Nepali - `nl`: Dutch (Nederlands) - `pl`: Polish - `ps`: Pashto - `pt`: Portuguese - `ro`: Romanian - `ru`: Russian - `si`: Sinhalese (Sri Lankan language) - `sl`: Slovenian - `sv`: Swedish (Svenska) - `ta`: Tamil - `th`: Thai - `tr`: Turkish - `uk`: Ukrainian - `ur`: Urdu - `vi`: Vietnamese - `xh`: Xhosa - `zh`: Chinese (Zhōng wén) ## Dataset Structure The dataset is divided by language, and the data for each of the 50 languages is further chunked into discrete JSON lines files. Each line of these files -- we'll call such a line an **instance** -- contains the data extracted from a single Wikipedia article. ### Data Instances Each instance contains the text of the seed Wikipedia article, along with a list of **entries**.
Each entry consists of an extracted Wikipedia passage, the URL and scraped text of the web source it cites, a list of question/answer pairs extracted from the passage, and a FrameNet parse of the passage. Where the passage is from a non-English Wikipedia, a machine translation into English is also provided. ### Data Fields The detailed structure of an instance is as follows: ``` { "article_title": <string : title of original Wikipedia article> "article_text": <string : text of Wikipedia article> "entries": [ # Wiki Passage "id": <string : passage ID> "passage": { "text": <string : text of passage in English (possibly via MT)> "parse": <list of dict : FrameNet parse of English passage text> "en_tokens": <dict : tokenization of passage in English> "lang_tokens": <dict : tokenization of original non-English passage> "en_lang_token_map": <dict : alignment mapping between English and original language token indices> } # MT "original": <string : original language passage> "original_sents": <list of string : sentencized original language passage> "translation": <string : machine translation of passage> "translation_sents": <list of string : sentencized machine translation of passage> "translation_probs": <list of float : log prob of machine translation by sentence, where available> "repetitious_translation": <string \in ("true", "false") : automated judgment on whether machine translation is pathologically repetitious> "source_lang": <string : language ID, 2-character ISO code> # Source "source_url": <string : URL of the cited web source> "source_text": <string : content extracted from the scrape of the source URL> # Question/Answer Pairs "qa_pairs": [ ... { "question": <string : generated question> "passage_id": <string : passage ID> "en_answer": <string : English answer> "lang_answer": <string : aligned original language answer> "frames": [ ... { "frame": <string : frame triggered by the question> "argument": <string : detected frame arguments> } ... 
] # NB: answer matches can be empty, in the case no matching span exists "en_matches_in_source": <list of int : start and end index of the English language-answer token(s) in the source document> "en_match_in_passage": <list of int : start and end index of the English language-answer token(s) in the English language translation of the passage> "lang_matches_in_source": <list of int : start and end index of the original language-answer token(s) in the source document> "lang_match_in_passage": <list of int : start and end index of the original language-answer token(s) in the original language passage> "passage": <list of string : sentencized view of the passage> "en_answer_tokens": <list of string> "match_disambiguated_question": <string : disambiguated version of question obtained by matching pronouns with article title (noisy but often helpful)> } ... ] ] } ``` English language instances differ not in structure but in content: 1. Fields in the block labeled "MT" above are naturally null (that is, they are set to falsy values in Python -- specifically `None`) 2. Since the Wiki passage only exists in English, and has no corresponding non-English "original language" version, answer spans also necessarily have only an English-language version (and no non-English "original-language" version). Therefore, fields in the `qa_pairs` block beginning with `lang_` are set to null/falsy values in Python (in this case, empty lists). ### Data Splits MegaWika is currently split only by language, as each task will imply its own approach to filtering, sampling, downselecting, and splitting into train/test splits. <!--- ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] --> ## Licensing and Takedown MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles). We do not own any of the scraped text nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training. We release this dataset and all its contents under CC-BY-SA-4.0. ### Notice and Takedown Policy: *NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: - Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. - Clearly identify the copyrighted work claimed to be infringed. - Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material. And contact the authors. *Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset. ## Additional Information ### Dataset Curators Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE). You can contact one of the MegaWika authors, including [Samuel Barham](mailto:samuel.barham@jhuapl.edu), [Orion Weller](mailto:oweller2@jhu.edu), and [Ben van Durme](mailto:vandurme@jhu.edu) with questions. ### Licensing Information Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license. 
### Citation Information ``` @misc{barham2023megawika, title={MegaWika: Millions of reports and their sources across 50 diverse languages}, author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme}, year={2023}, eprint={2307.07049}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ### Contributions [More Information Needed] -->
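Since each language shard is a JSON lines file whose lines follow the schema above, instances can be traversed with a few lines of Python. A minimal sketch — the record values below are invented for illustration, and only a handful of the schema's fields are shown:

```python
import json

# One toy instance following the card's schema (values invented for illustration).
line = json.dumps({
    "article_title": "Example Article",
    "article_text": "Full article text ...",
    "entries": [{
        "id": "passage-0",
        "passage": {"text": "An example passage in English."},
        "source_url": "https://example.com/source",
        "source_text": "Scraped text of the cited web source ...",
        "qa_pairs": [{
            "question": "What does the passage contain?",
            "en_answer": "an example",
            "lang_answer": "",  # empty for English-language instances
        }],
    }],
})

# Each line of a MegaWika shard is one such instance; collect its QA pairs.
instance = json.loads(line)
pairs = [
    (qa["question"], qa["en_answer"])
    for entry in instance["entries"]
    for qa in entry["qa_pairs"]
]
```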
10,431
bbz662bbz/databricks-dolly-15k-ja-gozaru
2023-05-29T12:58:37.000Z
[ "license:cc-by-sa-3.0", "region:us" ]
bbz662bbz
null
null
1
133
2023-05-28T00:51:18
--- license: cc-by-sa-3.0 --- This dataset was created using "kunishou/databricks-dolly-15k-ja". This dataset is licensed under CC BY-SA 3.0. Last update: 2023-05-28 databricks-dolly-15k-ja-gozaru kunishou/databricks-dolly-15k-ja https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
290
gonced8/multi-session_chat
2023-08-25T10:59:38.000Z
[ "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "license:gpl-3.0", "region:us" ]
gonced8
null
null
1
133
2023-08-25T10:56:33
--- license: gpl-3.0 task_categories: - conversational language: - en pretty_name: Multi-Session Chat size_categories: - 100K<n<1M --- This is not an original dataset; it is a cleaned version of the Multi-Session Chat data from [ParlAI - MSC](https://parl.ai/projects/msc/).
230
BrunoHays/multilingual-TEDX-fr
2023-10-23T09:41:59.000Z
[ "task_categories:automatic-speech-recognition", "size_categories:100K<n<1M", "language:fr", "license:cc-by-nc-nd-4.0", "region:us" ]
BrunoHays
French subpart of the multilingual TEDX dataset
@inproceedings{salesky2021mtedx, title={Multilingual TEDx Corpus for Speech Recognition and Translation}, author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post}, booktitle={Proceedings of Interspeech}, year={2021}, }
0
133
2023-10-02T09:39:41
--- license: cc-by-nc-nd-4.0 task_categories: - automatic-speech-recognition language: - fr size_categories: - 100K<n<1M --- The french subset of the dataset [Multilingual TEDx](https://www.openslr.org/100). The data uploaded to HF corresponds to the directory fr-fr. The audio files are automatically resampled to 16 kHz. #### Configs: - single_samples (default): all samples taken separately - max=30s: combine consecutive samples for a period shorter than 30 seconds - max=10s: combine consecutive samples for a period shorter than 10 seconds - max: combine all the samples of a TEDx talk #### dependencies (only needed for much faster audio decoding): - ffmpeg: apt install ffmpeg - ffmpeg-python: pip install ffmpeg-python #### Sample ``` {'file': '0u7tTptBo9I-0', 'audio': {'path': None, 'array': array([ 3.05175781e-05, 6.10351562e-05, 9.15527344e-05, ..., -2.44140625e-04, -3.35693359e-04, -2.74658203e-04]), 'sampling_rate': 16000}, 'sentence': "Bonsoir ! Notre planète est recouverte à 70 % d'océan, et pourtant, étrangement, on a choisi de l'appeler « la Terre ». Le poète Heathcote Williams a une vision bien plus objective et moins anthropocentrique, quand il dit que « Vue de l'espace, la planète est bleue. Vue de l'espace, elle est le territoire, non pas des hommes, mais des baleines ». Et pourtant, on vient tous de l'océan. ", 'speaker_id': '0u7tTptBo9I', 'start_timestamp': 17.25, 'end_timestamp': 45.26, 'index': 0} ``` ``` @inproceedings{salesky2021mtedx, title={Multilingual TEDx Corpus for Speech Recognition and Translation}, author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post}, booktitle={Proceedings of Interspeech}, year={2021}, } ```
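The `max=30s` and `max=10s` configs merge consecutive samples of a talk up to a duration cap. A hypothetical sketch of that greedy grouping — not the uploader's actual code, but it uses the `start_timestamp`/`end_timestamp` fields shown in the sample above:

```python
def combine_samples(samples, max_duration):
    """Greedily merge consecutive samples of one talk while the combined
    span (first start to last end) stays within max_duration seconds."""
    groups, current = [], []
    for s in samples:
        candidate = current + [s]
        span = candidate[-1]["end_timestamp"] - candidate[0]["start_timestamp"]
        if current and span > max_duration:
            # Adding this sample would exceed the cap: close the group.
            groups.append(current)
            current = [s]
        else:
            current = candidate
    if current:
        groups.append(current)
    return groups

# Toy segments from one talk (timestamps in seconds).
segments = [
    {"start_timestamp": 0.0, "end_timestamp": 12.0},
    {"start_timestamp": 12.0, "end_timestamp": 25.0},
    {"start_timestamp": 25.0, "end_timestamp": 40.0},
]
grouped = combine_samples(segments, max_duration=30.0)
```

With a 30-second cap, the first two segments (spanning 25 s) merge and the third starts a new group; the `max` config corresponds to an unbounded cap.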
1,789
euclaise/gsm8k_self_correct
2023-10-19T20:46:04.000Z
[ "size_categories:1K<n<10K", "license:mit", "cot", "self-correct", "region:us" ]
euclaise
null
null
1
133
2023-10-05T20:15:09
--- license: mit size_categories: - 1K<n<10K dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: mistake dtype: string - name: correct_end dtype: string splits: - name: train num_bytes: 4561402 num_examples: 4676 download_size: 2528831 dataset_size: 4561402 configs: - config_name: default data_files: - split: train path: data/train-* tags: - cot - self-correct --- # Dataset Card for "gsm8k_self_correct" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
629
open-phi/rag-textbook-instruct-full
2023-10-11T04:57:32.000Z
[ "region:us" ]
open-phi
null
null
5
133
2023-10-10T18:53:45
--- dataset_info: features: - name: formatted_prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 117082216 num_examples: 8340 download_size: 44011549 dataset_size: 117082216 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "rag-textbook-instruct-full" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
510
haseong8012/child-50k
2023-10-19T12:27:12.000Z
[ "region:us" ]
haseong8012
null
null
0
133
2023-10-19T11:27:30
--- dataset_info: features: - name: text dtype: string - name: audio sequence: float32 splits: - name: train num_bytes: 9937227708 num_examples: 50000 download_size: 8732585023 dataset_size: 9937227708 --- # Dataset Card for "korean-child-command-voice_train-0-50000_smaplingRate-160002" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
450
newsph
2022-11-03T16:07:51.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:fil",...
null
Large-scale dataset of Filipino news articles. Sourced for the NewsPH-NLI Project (Cruz et al., 2020).
@article{cruz2020investigating, title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation}, author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng}, journal={arXiv preprint arXiv:2010.11574}, year={2020} }
2
132
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - found language: - fil - tl license: - gpl-3.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: newsph-nli pretty_name: NewsPH-NLI dataset_info: features: - name: text dtype: string config_name: newsph splits: - name: train num_bytes: 298833914 num_examples: 2190465 download_size: 104086466 dataset_size: 298833914 --- # Dataset Card for NewsPH ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Filipino Text Benchmarks](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Repository:** - **Paper:** [Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation](https://arxiv.org/abs/2010.11574) - **Leaderboard:** - **Point of Contact:** [Jan Christian Blaise 
Cruz](mailto:jan_christian_cruz@dlsu.edu.ph) ### Dataset Summary A raw collection of news articles in Filipino, used to produce the NewsPH-NLI dataset in Cruz et al. (2020). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Tagalog/Filipino ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `text` (`str`) The dataset is in plaintext and only has one field ("text"). It can be used for language modeling. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset.
3,463
offcombr
2023-01-25T14:41:55.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:unknown", "hate-speech-detection", "region:us" ]
null
OffComBR: an annotated dataset for hate speech detection in Portuguese, composed of news comments from the Brazilian Web.
@inproceedings{Pelle2017, title={Offensive Comments in the Brazilian Web: a dataset and baseline results}, author={Rogers P. de Pelle and Viviane P. Moreira}, booktitle={6th Brazilian Workshop on Social Network Analysis and Mining (BraSNAM)}, year={2017}, }
4
132
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - pt license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: offcombr pretty_name: Offensive Comments in the Brazilian Web tags: - hate-speech-detection dataset_info: - config_name: offcombr-2 features: - name: label dtype: class_label: names: '0': 'no' '1': 'yes' - name: text dtype: string splits: - name: train num_bytes: 105703 num_examples: 1250 download_size: 99956 dataset_size: 105703 - config_name: offcombr-3 features: - name: label dtype: class_label: names: '0': 'no' '1': 'yes' - name: text dtype: string splits: - name: train num_bytes: 90094 num_examples: 1033 download_size: 85215 dataset_size: 90094 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://www.inf.ufrgs.br/~rppelle/hatedetector/ - **Repository:** 
https://github.com/rogersdepelle/OffComBR - **Paper:** https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary OffComBR: an annotated dataset for hate speech detection in Portuguese, composed of news comments from the Brazilian Web. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
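Both configs encode `label` as a `class_label` whose names are `'no'` and `'yes'`. A small sketch of decoding those integers in plain Python (the `sample` record is invented for illustration; its shape mirrors the features declared in the YAML):

```python
# The card's class_label maps 0 -> "no" (not offensive) and 1 -> "yes" (offensive).
LABEL_NAMES = ["no", "yes"]

def decode_label(label_id: int) -> str:
    """Turn a class_label integer back into its string name."""
    return LABEL_NAMES[label_id]

def is_offensive(example: dict) -> bool:
    """True when the annotators flagged the comment as offensive."""
    return decode_label(example["label"]) == "yes"

sample = {"label": 1, "text": "..."}  # invented example record
```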
3,642
ollie
2023-06-01T14:59:47.000Z
[ "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:other", "relation-extraction", "text-to-structured", "region:us" ]
null
The Ollie dataset includes two configs for the data used to train the Ollie information extraction algorithm, for 18M sentences and 3M sentences respectively. This data is for academic use only. From the authors: Ollie is a program that automatically identifies and extracts binary relationships from English sentences. Ollie is designed for Web-scale information extraction, where target relations are not specified in advance. Ollie is our second-generation information extraction system. Whereas ReVerb operates on flat sequences of tokens, Ollie works with the tree-like (graph with only small cycles) representation using Stanford's compression of the dependencies. This allows Ollie to capture expressions that ReVerb misses, such as long-range relations. Ollie also captures context that modifies a binary relation. Presently Ollie handles attribution (He said/she believes) and enabling conditions (if X then). More information is available at the Ollie homepage: https://knowitall.github.io/ollie/
@inproceedings{ollie-emnlp12, author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni}, title = {Open Language Learning for Information Extraction}, booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)}, year = {2012} }
0
132
2022-03-02T23:29:22
--- annotations_creators: - machine-generated language_creators: - crowdsourced language: - en license: - other multilinguality: - monolingual size_categories: - 10M<n<100M - 1M<n<10M source_datasets: - original task_categories: [] task_ids: [] pretty_name: Ollie tags: - relation-extraction - text-to-structured dataset_info: - config_name: ollie_lemmagrep features: - name: arg1 dtype: string - name: arg2 dtype: string - name: rel dtype: string - name: search_query dtype: string - name: sentence dtype: string - name: words dtype: string - name: pos dtype: string - name: chunk dtype: string - name: sentence_cnt dtype: string splits: - name: train num_bytes: 12324648919 num_examples: 18674630 download_size: 1789363108 dataset_size: 12324648919 - config_name: ollie_patterned features: - name: rel dtype: string - name: arg1 dtype: string - name: arg2 dtype: string - name: slot0 dtype: string - name: search_query dtype: string - name: pattern dtype: string - name: sentence dtype: string - name: parse dtype: string splits: - name: train num_bytes: 2930309084 num_examples: 3048961 download_size: 387514061 dataset_size: 2930309084 config_names: - ollie_lemmagrep - ollie_patterned --- # Dataset Card for Ollie ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Ollie](https://knowitall.github.io/ollie/) - **Repository:** [Github](https://github.com/knowitall/ollie) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/D12-1048/) ### Dataset Summary The Ollie dataset includes two configs for the data used to train the Ollie information extraction algorithm, for 18M sentences and 3M sentences respectively. This data is for academic use only. From the authors: Ollie is a program that automatically identifies and extracts binary relationships from English sentences. Ollie is designed for Web-scale information extraction, where target relations are not specified in advance. Ollie is our second-generation information extraction system. Whereas ReVerb operates on flat sequences of tokens, Ollie works with the tree-like (graph with only small cycles) representation using Stanford's compression of the dependencies. This allows Ollie to capture expressions that ReVerb misses, such as long-range relations. Ollie also captures context that modifies a binary relation. Presently Ollie handles attribution (He said/she believes) and enabling conditions (if X then). More information is available at the Ollie homepage: https://knowitall.github.io/ollie/ ### Supported Tasks and Leaderboards [More Information Needed] ### Languages en ## Dataset Structure ### Data Instances There are two configurations: ollie_lemmagrep, 18M sentences from web searches for a subset of the ReVerb relationships (110,000 relationships), and ollie_patterned, a 3M-sentence subset of ollie_lemmagrep derived from patterns according to the Ollie paper. 
An example of an ollie_lemmagrep record: `` {'arg1': 'adobe reader', 'arg2': 'pdf', 'chunk': 'B-NP I-NP I-NP I-NP B-PP B-NP I-NP B-VP B-PP B-NP I-NP O B-VP B-NP I-NP I-NP I-NP B-VP I-VP I-VP O', 'pos': 'JJ NNS CC NNS IN PRP$ NN VBP IN NNP NN CC VB DT NNP NNP NNP TO VB VBN .', 'rel': 'be require to view', 'search_query': 'require reader pdf adobe view', 'sentence': 'Many documents and reports on our site are in PDF format and require the Adobe Acrobat Reader to be viewed .', 'sentence_cnt': '9', 'words': 'many,document,and,report,on,our,site,be,in,pdf,format,and,require,the,adobe,acrobat,reader,to,be,view'} `` An example of an ollie_patterned record: `` {'arg1': 'english', 'arg2': 'internet', 'parse': '(in_IN_6), advmod(important_JJ_4, most_RBS_3); nsubj(language_NN_5, English_NNP_0); cop(language_NN_5, being_VBG_1); det(language_NN_5, the_DT_2); amod(language_NN_5, important_JJ_4); prep_in(language_NN_5, era_NN_9); punct(language_NN_5, ,_,_10); conj(language_NN_5, education_NN_12); det(era_NN_9, the_DT_7); nn(era_NN_9, Internet_NNP_8); amod(education_NN_12, English_JJ_11); nsubjpass(enriched_VBN_15, language_NN_5); aux(enriched_VBN_15, should_MD_13); auxpass(enriched_VBN_15, be_VB_14); punct(enriched_VBN_15, ._._16)', 'pattern': '{arg1} <nsubj< {rel:NN} >prep_in> {slot0:NN} >nn> {arg2}', 'rel': 'be language of', 'search_query': 'english language internet', 'sentence': 'English being the most important language in the Internet era , English education should be enriched .', 'slot0': 'era'} `` ### Data Fields For ollie_lemmagrep: * rel: the relationship phrase/verb phrase. This may be empty, which represents the "be" relationship. * arg1: the first argument in the relationship * arg2: the second argument in the relationship. 
* chunk: a tag of each token in the sentence, showing the POS chunks * pos: part-of-speech tagging of the sentence * sentence: the sentence * sentence_cnt: the number of copies of this sentence encountered * search_query: a combination of rel, arg1, arg2 * words: the lemma of the words of the sentence separated by commas For ollie_patterned: * rel: the relationship phrase/verb phrase. * arg1: the first argument in the relationship * arg2: the second argument in the relationship. * slot0: the third argument in the relationship, which might be empty. * pattern: a parse pattern for the relationship * parse: a dependency parse for the sentence * search_query: a combination of rel, arg1, arg2 * sentence: the sentence ### Data Splits There are no splits. ## Dataset Creation ### Curation Rationale This dataset was created as part of research on open information extraction. ### Source Data #### Initial Data Collection and Normalization See the research paper on Ollie. The training data is extracted from web pages (ClueWeb09). #### Who are the source language producers? The Ollie authors at the University of Washington and data from ClueWeb09 and the open web. ### Annotations #### Annotation process The various parsers and code from the Ollie algorithm. #### Who are the annotators? Machine annotated. ### Personal and Sensitive Information Unknown, but likely there are names of famous individuals. ## Considerations for Using the Data ### Social Impact of Dataset The goal for the work is to help machines learn to extract information from open domains. ### Discussion of Biases Since the data is gathered from the web, there is likely to be biased text and relationships. 
[More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The authors of Ollie at the University of Washington ### Licensing Information The University of Washington academic license: https://raw.githubusercontent.com/knowitall/ollie/master/LICENSE ### Citation Information ``` @inproceedings{ollie-emnlp12, author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni}, title = {Open Language Learning for Information Extraction}, booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)}, year = {2012} } ``` ### Contributions Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset.
8,399
[ [ -0.0136871337890625, -0.054107666015625, 0.004367828369140625, 0.0218353271484375, -0.01032257080078125, -0.0067901611328125, -0.007404327392578125, -0.033233642578125, 0.039520263671875, 0.0221405029296875, -0.043182373046875, -0.047576904296875, -0.03890991210...
poleval2019_cyberbullying
2023-01-25T14:42:46.000Z
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:unknown", "region:us" ]
null
In Task 6-1, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and related phenomena. In Task 6-2, the participants shall distinguish between three classes of tweets: 0 (non-harmful), 1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech, some of them even putting those two phenomena in the same group. The specific conditions on which we based our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research, will be summarized in an introductory paper for the task; however, the main and definitive condition to distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying), or a public person/entity/large group (hate-speech).
@proceedings{ogr:kob:19:poleval, editor = {Maciej Ogrodniczuk and Łukasz Kobyliński}, title = {{Proceedings of the PolEval 2019 Workshop}}, year = {2019}, address = {Warsaw, Poland}, publisher = {Institute of Computer Science, Polish Academy of Sciences}, url = {http://2019.poleval.pl/files/poleval2019.pdf}, isbn = "978-83-63159-28-3" }
1
132
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - pl license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - intent-classification pretty_name: Poleval 2019 cyberbullying dataset_info: - config_name: task01 features: - name: text dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 1104322 num_examples: 10041 - name: test num_bytes: 109681 num_examples: 1000 download_size: 410001 dataset_size: 1214003 - config_name: task02 features: - name: text dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' splits: - name: train num_bytes: 1104322 num_examples: 10041 - name: test num_bytes: 109681 num_examples: 1000 download_size: 410147 dataset_size: 1214003 --- # Dataset Card for Poleval 2019 cyberbullying ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
http://2019.poleval.pl/index.php/tasks/task6 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Task 6-1: Harmful vs non-harmful In this task, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and related phenomena. The data for the task is available now and can be downloaded from the link provided below. Task 6-2: Type of harmfulness In this task, the participants shall distinguish between three classes of tweets: 0 (non-harmful), 1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech, some of them even putting those two phenomena in the same group. The specific conditions on which we based our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research, will be summarized in an introductory paper for the task; however, the main and definitive condition to distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying), or a public person/entity/large group (hate-speech). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - text: the provided tweet - label: for task 6-1, the label can be 0 (non-harmful) or 1 (harmful); for task 6-2, the label can be 0 (non-harmful), 1 (cyberbullying) or 2 (hate-speech) ### Data Splits Train and Test ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @proceedings{ogr:kob:19:poleval, editor = {Maciej Ogrodniczuk and Łukasz Kobyliński}, title = {{Proceedings of the PolEval 2019 Workshop}}, year = {2019}, address = {Warsaw, Poland}, publisher = {Institute of Computer Science, Polish Academy of Sciences}, url = {http://2019.poleval.pl/files/poleval2019.pdf}, isbn = "978-83-63159-28-3" } ``` ### Contributions Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
5,046
[ [ -0.028717041015625, -0.07464599609375, 0.00782012939453125, 0.02484130859375, -0.03314208984375, 0.0213775634765625, -0.0110321044921875, -0.042999267578125, 0.033416748046875, 0.028472900390625, -0.04248046875, -0.0699462890625, -0.058807373046875, -0.00008...
BeIR/beir-corpus
2022-10-21T15:30:07.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
3
132
2022-03-02T23:29:22
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates retrieval models using standard IR metrics such as nDCG@10. 
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with a document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query-document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
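The corpus/queries/qrels layout described above can be produced and read back with only the Python standard library. A minimal sketch with toy contents (the file names and documents here are illustrative, not real BEIR data):

```python
import csv
import json
import tempfile
from pathlib import Path

# Write a toy corpus.jsonl, queries.jsonl and qrels .tsv in the layout
# described above (illustrative contents, not real BEIR data).
root = Path(tempfile.mkdtemp())

with open(root / "corpus.jsonl", "w") as f:
    f.write(json.dumps({"_id": "doc1", "title": "Albert Einstein",
                        "text": "Albert Einstein was a German-born..."}) + "\n")
with open(root / "queries.jsonl", "w") as f:
    f.write(json.dumps({"_id": "q1",
                        "text": "Who developed the mass-energy equivalence formula?"}) + "\n")
with open(root / "qrels.tsv", "w") as f:
    f.write("query-id\tcorpus-id\tscore\n")  # the first row is a header
    f.write("q1\tdoc1\t1\n")

# Read everything back into dicts keyed by id, matching the shape of the
# corpus/queries/qrels example shown above.
with open(root / "corpus.jsonl") as f:
    corpus = {d["_id"]: d for d in map(json.loads, f)}
with open(root / "queries.jsonl") as f:
    queries = {d["_id"]: d["text"] for d in map(json.loads, f)}
qrels = {}
with open(root / "qrels.tsv") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(qrels)  # {'q1': {'doc1': 1}}
```

The round trip shows why the header row and column order of the qrels file matter: `csv.DictReader` uses the first row as field names.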
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
13,988
[ [ -0.0396728515625, -0.03985595703125, 0.010955810546875, 0.003665924072265625, 0.004230499267578125, 0.00008660554885864258, -0.0081939697265625, -0.018890380859375, 0.0216827392578125, 0.005954742431640625, -0.034332275390625, -0.0545654296875, -0.02638244628906...
mbazaNLP/kinyarwanda-tts-dataset
2023-06-27T08:09:28.000Z
[ "language_creators:Digital Umuganda", "size_categories:3K<n<4K", "size_categories:~6hours", "language:rw", "license:cc-by-4.0", "region:us" ]
mbazaNLP
null
null
1
132
2022-05-27T08:20:36
--- language: - rw language_creators: - "Digital Umuganda" license: - cc-by-4.0 size_categories: - 3K<n<4K - ~6hours --- # Kinyarwanda TTS dataset The dataset consists of 3,992 clips of a Kinyarwanda TTS corpus recorded in a studio by a voice actress; it was collected as part of the Mbaza project. ## Data structure ``` Audio: 3992 single-voice studio recordings by a voice actress Text: CSV with audio name and corresponding written text ``` ## Language The dataset is in the Kinyarwanda language. ## Dataset Creation - The collected text had to include Kinyarwanda syllables, which are made up of a consonant or a group of consonants (e.g. Nyw) combined with a vowel. - The text was reviewed by a linguist to ensure it fit Kinyarwanda standards. - The voice was recorded in a studio, albeit in a semi-professional setting (i.e. some of the audio contains reverb).
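The `Text` component above is a CSV mapping each audio file name to its written transcript. A minimal parsing sketch with the standard library — the column names (`audio`, `text`), file names, and sample transcripts are assumptions, since the card does not specify them:

```python
import csv
import io

# Hypothetical metadata CSV: the real column names and file names may differ.
metadata_csv = io.StringIO(
    "audio,text\n"
    "clip_0001.wav,Muraho neza\n"
    "clip_0002.wav,Murakoze cyane\n"
)

# Map each audio clip name to its written transcript.
transcripts = {row["audio"]: row["text"] for row in csv.DictReader(metadata_csv)}
print(sorted(transcripts))  # ['clip_0001.wav', 'clip_0002.wav']
```

In practice the `io.StringIO` stand-in would be replaced by `open(...)` on the dataset's actual metadata file.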
890
[ [ -0.0269775390625, -0.032257080078125, -0.01629638671875, 0.0010738372802734375, -0.006343841552734375, 0.0157470703125, -0.0003399848937988281, -0.01312255859375, 0.04730224609375, 0.0535888671875, -0.048431396484375, -0.044769287109375, -0.043487548828125, ...
asapp/slue
2022-09-26T23:08:10.000Z
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "task_categories:text-classification", "task_categories:token-classification", "task_ids:sentiment-analysis", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found",...
asapp
Spoken Language Understanding Evaluation (SLUE) benchmark. There are two subsets: (i) SLUE-VoxPopuli which has ASR and NER tasks and (ii) SLUE-VoxCeleb which has ASR and SA tasks.
@inproceedings{shon2022slue, title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech}, author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J}, booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7927--7931}, year={2022}, organization={IEEE} }
3
132
2022-09-19T18:07:59
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - cc0-1.0 - cc-by-4.0 multilinguality: - monolingual paperswithcode_id: slue pretty_name: SLUE (Spoken Language Understanding Evaluation benchmark) size_categories: - 10K<n<100K source_datasets: - original tags: [] task_categories: - automatic-speech-recognition - audio-classification - text-classification - token-classification task_ids: - sentiment-analysis - named-entity-recognition configs: - voxpopuli - voxceleb --- # Dataset Card for SLUE ## Table of Contents - [Dataset Card for SLUE](#dataset-card-for-slue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr) - [Named Entity Recognition (NER)](#named-entity-recognition-ner) - [Sentiment Analysis (SA)](#sentiment-analysis-sa) - [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [voxpopuli](#voxpopuli) - [voxceleb](#voxceleb) - [Data Fields](#data-fields) - [voxpopuli](#voxpopuli-1) - [voxceleb](#voxceleb-1) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion 
of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset) - [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset) - [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit) - **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/) - **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf) - **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html) - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 9.59 MB - **Total amount of disk used:** 1.95 GB ### Dataset Summary We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to - Track research progress on multiple SLU tasks - Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks - Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use. For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details. 
### Supported Tasks and Leaderboards #### Automatic Speech Recognition (ASR) Although this is not an SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER). #### Named Entity Recognition (NER) Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1. #### Sentiment Analysis (SA) Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores. #### How-to-submit for your test set evaluation See here https://asappresearch.github.io/slue-toolkit/how-to-submit.html ### Languages The language data in SLUE is in English. ## Dataset Structure ### Data Instances #### voxpopuli - **Size of downloaded dataset files:** 398.45 MB - **Size of the generated dataset:** 5.81 MB - **Total amount of disk used:** 404.26 MB An example of 'train' looks as follows. 
``` {'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3', 'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg', 'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217, 0.07835515, 0.07845446], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': 'None', 'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will', 'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will', 'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'], 'start': [227, 177, 28, 0], 'length': [6, 6, 21, 4]}, 'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'], 'start': [243, 194, 45, 0], 'length': [6, 6, 21, 23]}, 'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'], 'start': [227, 177, 28, 0], 'length': [6, 6, 21, 4]}, 'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'], 'start': [243, 194, 45, 0], 'length': [6, 6, 21, 23]}} ``` #### voxceleb - **Size of downloaded dataset files:** 1.55 GB - **Size of the generated dataset:** 3.78 MB - **Total amount of disk used:** 1.55 GB An example of 'train' looks as follows. 
``` {'id': 'id10059_229vKIGbxrI_00004', 'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac', 'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691, 0.00100708, 0.00033569], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': 'id10059', 'normalized_text': 'of god what is a creator the almighty that uh', 'sentiment': 'Neutral', 'start_second': 0.45, 'end_second': 4.52} ``` ### Data Fields #### voxpopuli - `id`: a `string` id of an instance. - `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `speaker_id`: a `string` of the speaker id. - `raw_text`: a `string` feature that contains the raw transcription of the audio. - `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**. - `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes. - `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes. - `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`). 
- `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`), which is **used in the standard evaluation**.

Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types, `start` is a list of the start character position of each named entity in the corresponding text, and `length` is a list of the number of characters of each named entity.

#### voxceleb

- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sampling_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sampling_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `normalized_text`: a `string` feature that contains the transcription of the audio segment.
- `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`.
- `start_second`: a `float` feature that specifies the start second of the audio segment.
- `end_second`: a `float` feature that specifies the end second of the audio segment.
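To make these conventions concrete, here is a minimal sketch using tiny synthetic stand-ins for dataset rows (the values are illustrative, not taken from the real data): it crops a VoxCeleb-style segment with `start_second`/`end_second`, and recovers entity strings from a VoxPopuli-style `type`/`start`/`length` NER annotation. Note that the key in the decoded audio dictionary is `sampling_rate`.

```python
# Synthetic stand-ins for SLUE rows; real rows come from the dataset loader,
# and the values here are illustrative only.
voxceleb_row = {
    "audio": {"array": [0.0] * 16000 * 5, "sampling_rate": 16000},  # 5 s at 16 kHz
    "start_second": 0.5,
    "end_second": 2.5,
}

# Crop the transcribed VoxCeleb segment using start_second/end_second.
sr = voxceleb_row["audio"]["sampling_rate"]
start = int(voxceleb_row["start_second"] * sr)
end = int(voxceleb_row["end_second"] * sr)
segment = voxceleb_row["audio"]["array"][start:end]
print(len(segment))  # 32000 samples, i.e. 2.0 s of audio

# Recover entity strings from a VoxPopuli-style NER annotation (type/start/length).
text = "the new brussels i regulation applies in europe"
ner = {"type": ["LAW", "LOC"], "start": [8, 41], "length": [21, 6]}
entities = [(t, text[s:s + n]) for t, s, n in zip(ner["type"], ner["start"], ner["length"])]
print(entities)  # [('LAW', 'brussels i regulation'), ('LOC', 'europe')]
```

On real rows the audio array is a NumPy `float32` array rather than a list, but the same slicing applies.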
### Data Splits

|         |train|validation|test|
|---------|----:|---------:|---:|
|voxpopuli| 5000|      1753|1842|
|voxceleb | 5777|      1454|3553|

Here we use the standard split names of Hugging Face `datasets`, so the `train` and `validation` splits are the original `fine-tune` and `dev` splits of the SLUE datasets, respectively.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

#### SLUE-VoxPopuli Dataset

The SLUE-VoxPopuli dataset contains a subset of the VoxPopuli dataset, and the copyright of this subset remains the same as the original license, CC0. See also the European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/). Additionally, we provide named entity annotations (the normalized_ner and raw_ner columns in the .tsv files), which are covered under the same CC0 license.

#### SLUE-VoxCeleb Dataset

The SLUE-VoxCeleb dataset contains a subset of the OXFORD VoxCeleb dataset, and the copyright of this subset remains under the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcriptions, sentiment annotations, and timestamps (start, end) under the same license as the OXFORD VoxCeleb dataset.

##### Original License of OXFORD VGG VoxCeleb Dataset

VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube. The speakers span a wide range of different ethnicities, accents, professions and ages.

We provide YouTube URLs, associated face detections, and timestamps, as well as cropped audio segments and cropped face videos from the dataset. The copyright of both the original and cropped versions of the videos remains with the original owners.

The data is covered under a Creative Commons Attribution 4.0 International license (please read the license terms here: https://creativecommons.org/licenses/by/4.0/). Downloading this dataset implies agreement to follow the same conditions for any modification and/or re-distribution of the dataset in any form. Additionally, any entity using this dataset agrees to the following conditions:

THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Please cite [1,2] below if you make use of the dataset.

[1] J. S. Chung, A. Nagrani, A. Zisserman. VoxCeleb2: Deep Speaker Recognition. INTERSPEECH, 2018.

[2] A. Nagrani, J. S. Chung, A. Zisserman. VoxCeleb: a large-scale speaker identification dataset. INTERSPEECH, 2017.

### Citation Information

```
@inproceedings{shon2022slue,
  title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
  author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
  booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7927--7931},
  year={2022},
  organization={IEEE}
}
```

### Contributions

Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset.
15,334
[ [ -0.0416259765625, -0.04119873046875, 0.006927490234375, 0.017608642578125, -0.00960540771484375, -0.006725311279296875, -0.0237579345703125, -0.031524658203125, 0.033935546875, 0.03167724609375, -0.044281005859375, -0.06317138671875, -0.0273284912109375, 0.0...
JosephusCheung/GuanacoDataset
2023-05-29T12:50:05.000Z
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:conversational", "language:zh", "language:en", "language:ja", "language:de", "license:gpl-3.0", "alpaca", "llama", "guanaco", "doi:10.57967/hf/0570", "region:us" ]
JosephusCheung
null
null
448
132
2023-03-16T06:30:22
---
license: gpl-3.0
task_categories:
- text-generation
- question-answering
- conversational
language:
- zh
- en
- ja
- de
tags:
- alpaca
- llama
- guanaco
---

# GuanacoDataset

**News: We're heading towards multimodal VQA, with blip2-flan-t5-xxl alignment to the Guanaco 7B LLM.**

Still under construction: [GuanacoVQA weight](https://huggingface.co/JosephusCheung/GuanacoVQA) & [GuanacoVQA Dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoVQADataset)

**Notice: Effective immediately, the Guanaco model and its associated dataset are now licensed under the GPLv3.**

Released weights:

- [Guanaco α](https://huggingface.co/JosephusCheung/Guanaco)

The dataset for the [Guanaco model](https://guanaco-model.github.io/) is designed to enhance multilingual capabilities and address various linguistic tasks. It builds upon the 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. The dataset comprises a total of 534,530 entries, generated at a low cost of $6K.
- Free chat dialogues without System input: 32,880 entries (recent update) - in English, zh-Hans, zh-Hant-TW, Japanese, and German

  *To test 0-shot tasks in Japanese & German on the original 175 tasks, with finetuning on chat only.*

- Chat dialogues with System input: 16,087 entries (recent update) - in English, zh-Hans, zh-Hant-TW, and zh-Hant-HK

**A new additional dataset has been released; separate, larger datasets are available for different languages.**

The original 175 tasks were translated into 4 versions and regenerated independently.

Below are the details of the **mixed data**:

- Japanese (Ja-JP - recently updated): 7,485 entries
- Simplified Chinese (zh-Hans): 27,808 entries
- Traditional Chinese (Taiwan) (zh-Hant-TW): 21,481 entries
- Traditional Chinese (Hong Kong) (zh-Hant-HK): 19,247 entries
- English: 20K+ entries, not from Alpaca

Besides, a mini version of the 52K multi-lang dataset is released with:

- Japanese (Ja-JP - recently updated): 7,485 entries
- Simplified Chinese (zh-Hans): 5,439 entries
- Traditional Chinese (Taiwan) (zh-Hant-TW): 9,322 entries
- Traditional Chinese (Hong Kong) (zh-Hant-HK): 9,954 entries
- English: 20,024 entries, not from Alpaca

The mini version is included in the full non-chat dataset.

**Additional dataset** *separated by language (temporary)*:

*This additional dataset should only be used for additional training if using the mixed data did not yield good results; using it directly will not produce good results.* This part of the data will be merged into the main dataset at the appropriate time.
- Chinese: 117,166 entries
  - Simplified Chinese (zh-Hans): 92,530 entries
  - Traditional Chinese (Taiwan) (zh-Hant-TW): 14,802 entries
  - Traditional Chinese (Hong Kong) (zh-Hant-HK): 9,834 entries
- Japanese (Ja-JP - recently updated): 60,772 entries

In addition to the language-specific tasks, the dataset includes new tasks that aim to improve the model's performance in English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. These new tasks ensure that the Guanaco model is well-rounded and capable of handling a wide range of challenges in the field of natural language processing.

By incorporating this diverse and comprehensive dataset into the Guanaco model, we aim to provide researchers and academics with a powerful tool for studying instruction-following language models in a multilingual context. The dataset's design encourages the development of more robust and versatile models capable of addressing complex linguistic tasks across different languages and domains.

**Additional dataset** *Paper/General-QA*:

The Paper/General-QA dataset is a collection of questions and answers constructed for AI-generated papers or general texts in English, Chinese, Japanese, and German. The question dataset contains 106,707 questions, and the answer dataset contains 99,292 answers. The purpose of this dataset is to generate paragraph-level answers to questions posed about lengthy documents such as PDFs. Similar questions are combined to form a tree-like structure, and graph theory algorithms are used to process user questions, content summaries, and contextual logic.

*It is worth noting that some ChatGPT applications claim to be able to read PDFs, but they do not actually read the entire article. Instead, they compare the user's input question with segmented paragraphs of the article, select the most similar paragraph, and insert it as the answer.
This is not true language model reading, but rather a form of deception.*

**Note: I intentionally mixed up entries and languages to prevent anyone from solely selecting certain language entries for finetuning. This is not only unhelpful for the community, but also because some tasks are 0-shot in specific languages; please use the complete dataset for finetuning.**

## To-Do List:

- Expand language support in the dataset: Incorporate additional languages such as Japanese, German, and more into the dataset. This expansion should include task examples that cover advanced grammar analysis and dialogue understanding for these languages.
- Create a dialogue-oriented Chatbot dataset: Develop a dataset specifically designed for conversation-based applications, containing examples that facilitate the model's ability to engage in interactive and dynamic dialogues with users.
- Add Toolformer-supporting tasks: Introduce tasks that train the model to autonomously call external APIs using Toolformer, allowing the model to access and utilize various web services and data sources, thereby enhancing its problem-solving capabilities.
- Develop tasks for rapid integration of external knowledge: Design tasks that encourage the model to quickly incorporate knowledge from external sources such as search engines and artificial intelligence knowledge engines. These tasks would be particularly beneficial for smaller models with limited knowledge reserves, enabling them to efficiently utilize external information to respond to user queries.

## Recent News

We've noticed a recent entrant in the field, the QLoRa method, which we find concerning due to its attempt to piggyback on the reputation of Guanaco. We strongly disapprove of such practices. QLoRa, as far as we can tell, lacks mathematical robustness and its performance significantly trails behind that of GPTQ and advancements such as PEFT fine-tuning, which have been successful in improving upon it.
Guanaco has been diligent, consistently releasing multilingual datasets since March 2023, along with publishing weights that are not only an enhanced version of GPTQ but also support multimodal VQA and have been optimized for 4-bit. Despite the substantial financial investment of tens of thousands of dollars in distilling data from OpenAI's GPT models, we still consider these efforts to be incremental.

We, however, aim to move beyond the incremental:

1. We strive to no longer rely on distillation data from OpenAI: We've found that relying on GPT-generated data impedes significant breakthroughs. Furthermore, this approach has proven to be disastrous when dealing with the imbalances in multilingual tasks.
2. We're focusing on the enhancement of quantization structure and partial native 4-bit fine-tuning: We are deeply appreciative of the GPTQ-Llama project for paving the way in state-of-the-art LLM quantization. Its unique qualities, especially at the 7B size, are facilitating significant progress in multilingual and multimodal tasks.
3. We plan to utilize visual data to adjust our language models: We believe this will fundamentally address the issues of language imbalance, translation inaccuracies, and the lack of graphical logic in LLM.

While our work is still in the early stages, we're determined to break new ground in these areas. Our critique of QLoRa's practices does not stem from animosity but rather from the fundamental belief that innovation should be rooted in originality, integrity, and substantial progress.
8,212
[ [ -0.011138916015625, -0.06201171875, 0.0221710205078125, 0.024261474609375, -0.007781982421875, 0.0014200210571289062, -0.0195465087890625, -0.045440673828125, 0.0027065277099609375, 0.0309600830078125, -0.03753662109375, -0.048583984375, -0.034027099609375, ...
mstz/mushroom
2023-04-16T17:34:40.000Z
[ "task_categories:tabular-classification", "size_categories:1K<n<10K", "language:en", "license:cc", "mushroom", "tabular_classification", "binary_classification", "UCI", "region:us" ]
mstz
null
@misc{misc_mushroom_73, title = {{Mushroom}}, year = {1987}, howpublished = {UCI Machine Learning Repository}, note = {{DOI}: \\url{10.24432/C5959T}} }
0
132
2023-04-06T17:42:03
---
language:
- en
tags:
- mushroom
- tabular_classification
- binary_classification
- UCI
pretty_name: Mushroom
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- mushroom
license: cc
---

# Mushroom

The [Mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).

# Configurations and tasks

| **Configuration** | **Task**              | **Description**           |
|-------------------|-----------------------|---------------------------|
| mushroom          | Binary classification | Is the mushroom poisonous?|

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/mushroom")["train"]
```
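Before training a classifier on a binary task like this, it can help to check the label balance and the majority-class baseline accuracy. A stdlib-only sketch over hypothetical rows (the column names below, e.g. `is_poisonous`, are illustrative and not guaranteed to match the actual UCI schema):

```python
from collections import Counter

# Hypothetical rows mimicking a binary tabular-classification split; real rows
# come from load_dataset("mstz/mushroom")["train"] and may use different columns.
rows = [
    {"cap_shape": "convex", "odor": "pungent", "is_poisonous": 1},
    {"cap_shape": "convex", "odor": "almond", "is_poisonous": 0},
    {"cap_shape": "bell", "odor": "anise", "is_poisonous": 0},
    {"cap_shape": "flat", "odor": "foul", "is_poisonous": 1},
]

# Label balance, and the accuracy of always predicting the majority class.
counts = Counter(r["is_poisonous"] for r in rows)
majority_n = counts.most_common(1)[0][1]
baseline_accuracy = majority_n / len(rows)
print(counts, baseline_accuracy)
```

Any trained model should beat this baseline; on the real split, `Counter` works the same way over the loaded dataset's label column.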
742
[ [ -0.004367828369140625, -0.032501220703125, 0.01187896728515625, 0.0162811279296875, -0.018096923828125, -0.021636962890625, -0.006786346435546875, -0.009246826171875, 0.02227783203125, 0.044769287109375, -0.0423583984375, -0.061248779296875, -0.051727294921875, ...
mattymchen/celeba-hq
2023-04-26T05:56:53.000Z
[ "region:us" ]
mattymchen
null
null
0
132
2023-04-26T05:15:42
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': female
          '1': male
  splits:
  - name: train
    num_bytes: 2731627350.0
    num_examples: 28000
  - name: validation
    num_bytes: 197550788.0
    num_examples: 2000
  download_size: 2762109745
  dataset_size: 2929178138.0
---

# Dataset Card for "celeba-hq"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
539
[ [ -0.041839599609375, -0.0298614501953125, 0.002269744873046875, 0.0032100677490234375, -0.0006346702575683594, 0.00647735595703125, 0.006927490234375, -0.0149078369140625, 0.0626220703125, 0.0274658203125, -0.053436279296875, -0.05645751953125, -0.03631591796875,...
kz-transformers/multidomain-kazakh-dataset
2023-05-02T07:19:37.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:kk", "language:ru", "license:apache-2.0", "region:us" ]
kz-transformers
null
null
9
132
2023-04-28T13:35:01
---
license:
- apache-2.0
annotations_creators:
- no-annotation
language_creators:
- found
language:
- kk
- ru
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
pretty_name: MDBKD | Multi-Domain Bilingual Kazakh Dataset
---

# Dataset Description

**Point of Contact:** [Sanzhar Murzakhmetov](mailto:sanzharmrz@gmail.com), [Besultan Sagyndyk](mailto:nuxyjlbka@gmail.com)

### Dataset Summary

MDBKD | Multi-Domain Bilingual Kazakh Dataset is a Kazakh-language dataset containing 24,883,808 unique texts from multiple domains.

### Supported Tasks

- 'MLM/CLM': can be used to train a model for causal and masked language modeling

### Languages

The `kk` code denotes Kazakh as generally spoken in Kazakhstan.

### Data Instances

For each instance, there is a string for the text and a string for the id.

```python
{'text': 'Алматыда баспана қымбаттап жатыр Қазақстанда пәтер бағасы түсті Жыл басынан бері баспана бағасы 6,2%-ға қымбаттады Мегополистегі пәтер бағасына шолу. Алматыда пандемия басталғалы баспана қымбаттап барады. Мұның себебі нарықтағы сұраныстың көбеюімен және теңгенің құнсыздануымен байланысты, деп хабарлайды Atameken Business. Арна тілшісі Жания Әбдібек нарық өкілдерімен сұхбаттасып, мегополистегі пәтер бағасына шолу жасады. Толығырақ: Мамыр айында Қазақстанның жеті ірі қаласында пәтер бағасы түскен. Орта есеппен республика бойынша тұрғын үйдің 1 шаршы метрінің бағасы 292 мың 886 теңгені құрайды.
'}, 'predicted_language': 'kaz', 'contains_kaz_symbols': 1, 'id': '0752b3ce-f5ea-4330-9c5f-e4fecf783b00'}
```

### Data Fields

- `text`: a string containing the content body
- `predicted_language`: a string containing the predicted language label for the text
- `contains_kaz_symbols`: an integer flag indicating whether the text contains any Kazakh symbols
- `id`: a string which is a hexadecimal hash of the text in its split

### Data Splits

The MDBKD has 5 splits: [_cc100-monolingual-crawled-data_](https://data.statmt.org/cc-100/), _kazakhBooks_, [_leipzig_](https://wortschatz.uni-leipzig.de/en/download/Kazakh), [_oscar_](https://oscar-project.github.io/documentation/versions/oscar-2301/) and _kazakhNews_.

Below are the statistics of the dataset:

| Dataset Split                  | Domain             | Number of texts in Split | Number of tokens in Split | Number of unique tokens in Split | Median number of tokens in text |
|--------------------------------|--------------------|--------------------------|---------------------------|----------------------------------|---------------------------------|
| cc100-monolingual-crawled-data | Wikipedia articles | 19 635 580               | 441 623 321               | 6 217 337                        | 12                              |
| kazakhBooks                    | Books              | 8 423                    | 351 433 586               | 7 245 720                        | 40 264                          |
| leipzig                        | Articles/News      | 1 706 485                | 26 494 864                | 1 109 113                        | 14                              |
| oscar                          | CommonCrawl        | 269 047                  | 230 314 378               | 3 863 498                        | 431                             |
| kazakhNews                     | News               | 3 264 273                | 1 041 698 037             | 5 820 543                        | 209                             |

With overall stats:

| Stat                    | Value         |
|-------------------------|---------------|
| Number of texts         | 24 883 808    |
| Number of tokens        | 2 091 564 186 |
| Number of unique tokens | 17 802 998    |

The full dataset takes **25GB**.

### Annotations

The dataset does not contain any additional annotations.

### Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in it. Information about the original authors is not included in the dataset.
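As an illustration of what the `contains_kaz_symbols` flag enables, here is a minimal sketch over synthetic rows shaped like the fields above (the values are illustrative; on the real data the equivalent would be `dataset.filter(...)` from the `datasets` library):

```python
# Hypothetical rows shaped like the fields described above; values are illustrative.
rows = [
    {"id": "0752b3ce", "text": "Алматыда баспана қымбаттап жатыр",
     "predicted_language": "kaz", "contains_kaz_symbols": 1},
    {"id": "a1b2c3d4", "text": "Пример текста на русском языке",
     "predicted_language": "rus", "contains_kaz_symbols": 0},
]

# Keep only texts flagged as containing Kazakh symbols; with the real dataset this
# would be dataset.filter(lambda x: x["contains_kaz_symbols"] == 1).
kazakh_only = [r for r in rows if r["contains_kaz_symbols"] == 1]
print(len(kazakh_only), kazakh_only[0]["predicted_language"])
```

The same pattern works for filtering by `predicted_language` when only one of the two languages is wanted.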
### Social Impact of Dataset

The purpose of this dataset is to organize open-source datasets in the Kazakh language for further research and commercial use.

### Licensing Information

The Multi-Domain Bilingual Kazakh dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).

### Contributions

Thanks to [@KindYAK](https://github.com/KindYAK), [@BeksultanSagyndyk](https://github.com/BeksultanSagyndyk), [@SanzharMrz](https://github.com/SanzharMrz) for adding this dataset.
4,605
[ [ -0.02325439453125, -0.03826904296875, 0.0076446533203125, 0.0184326171875, -0.042633056640625, 0.0106353759765625, -0.015655517578125, -0.020538330078125, 0.036041259765625, 0.0247650146484375, -0.034942626953125, -0.08306884765625, -0.053466796875, 0.014053...
zuzannad1/pixelsum_wiki
2023-09-13T11:42:49.000Z
[ "region:us" ]
zuzannad1
null
null
0
132
2023-05-16T13:39:49
---
dataset_info:
  features:
  - name: example
    dtype: string
  - name: summary
    dtype: string
  splits:
  - name: train
    num_bytes: 7401808572
    num_examples: 6458670
  download_size: 4591048930
  dataset_size: 7401808572
---

# Dataset Card for "pixelsum_wiki"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
406
[ [ -0.054168701171875, -0.0080413818359375, 0.025726318359375, -0.00223541259765625, -0.01372528076171875, -0.006744384765625, 0.0137481689453125, -0.004405975341796875, 0.059661865234375, 0.0238494873046875, -0.06671142578125, -0.05194091796875, -0.03717041015625,...
hippocrates/DDI_RE
2023-10-04T19:08:58.000Z
[ "region:us" ]
hippocrates
null
null
0
132
2023-10-04T19:07:43
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
liyucheng/mmlu_test
2023-10-16T23:28:37.000Z
[ "region:us" ]
liyucheng
null
null
0
132
2023-10-16T23:28:24
---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: id
    dtype: string
  - name: in-context examples
    dtype: string
  - name: testing input
    dtype: string
  - name: prompt
    dtype: string
  - name: task
    dtype: string
  splits:
  - name: train
    num_bytes: 90455312
    num_examples: 13987
  download_size: 14673948
  dataset_size: 90455312
---

# Dataset Card for "mmlu_all_test"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
707
[ [ -0.04010009765625, -0.03973388671875, 0.01082611083984375, 0.0114288330078125, -0.006378173828125, -0.0090484619140625, 0.03057861328125, 0.001895904541015625, 0.06488037109375, 0.01499176025390625, -0.063232421875, -0.04669189453125, -0.037933349609375, -0....
jxu124/OpenX-Embodiment
2023-11-01T11:46:34.000Z
[ "task_categories:robotics", "task_categories:reinforcement-learning", "size_categories:1M<n<10M", "language:en", "license:cc-by-4.0", "Robotics", "region:us" ]
jxu124
null
null
3
132
2023-10-23T11:24:16
---
license: cc-by-4.0
task_categories:
- robotics
- reinforcement-learning
language:
- en
tags:
- Robotics
pretty_name: Open X-Embodiment Dataset
size_categories:
- 1M<n<10M
---

# Open X-Embodiment Dataset (unofficial)

This is an unofficial Dataset Repo. This Repo is set up to make the **Open X-Embodiment Dataset (55 in 1)** more accessible for people who love huggingface🤗.

The **Open X-Embodiment Dataset** is the largest open-source real robot dataset to date. It contains 1M+ real robot trajectories spanning 22 robot embodiments, from single robot arms to bi-manual robots and quadrupeds. More information is available on the RT-X website (https://robotics-transformer-x.github.io/).

### Usage Example

```python
import datasets

ds = datasets.load_dataset("jxu124/OpenX-Embodiment", "fractal20220817_data", streaming=True, split='train')  # IterableDataset
```

Optional subdatasets:

```
fractal20220817_data
kuka
bridge
taco_play
jaco_play
berkeley_cable_routing
roboturk
nyu_door_opening_surprising_effectiveness
viola
berkeley_autolab_ur5
toto
language_table
columbia_cairlab_pusht_real
stanford_kuka_multimodal_dataset_converted_externally_to_rlds
nyu_rot_dataset_converted_externally_to_rlds
stanford_hydra_dataset_converted_externally_to_rlds
austin_buds_dataset_converted_externally_to_rlds
nyu_franka_play_dataset_converted_externally_to_rlds
maniskill_dataset_converted_externally_to_rlds
furniture_bench_dataset_converted_externally_to_rlds
cmu_franka_exploration_dataset_converted_externally_to_rlds
ucsd_kitchen_dataset_converted_externally_to_rlds
ucsd_pick_and_place_dataset_converted_externally_to_rlds
austin_sailor_dataset_converted_externally_to_rlds
austin_sirius_dataset_converted_externally_to_rlds
bc_z
usc_cloth_sim_converted_externally_to_rlds
utokyo_pr2_opening_fridge_converted_externally_to_rlds
utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds
utokyo_saytap_converted_externally_to_rlds
utokyo_xarm_pick_and_place_converted_externally_to_rlds
utokyo_xarm_bimanual_converted_externally_to_rlds
robo_net
berkeley_mvp_converted_externally_to_rlds
berkeley_rpt_converted_externally_to_rlds
kaist_nonprehensile_converted_externally_to_rlds
stanford_mask_vit_converted_externally_to_rlds
tokyo_u_lsmo_converted_externally_to_rlds
dlr_sara_pour_converted_externally_to_rlds
dlr_sara_grid_clamp_converted_externally_to_rlds
dlr_edan_shared_control_converted_externally_to_rlds
asu_table_top_converted_externally_to_rlds
stanford_robocook_converted_externally_to_rlds
eth_agent_affordances
imperialcollege_sawyer_wrist_cam
iamlab_cmu_pickup_insert_converted_externally_to_rlds
uiuc_d3field
utaustin_mutex
berkeley_fanuc_manipulation
cmu_playing_with_food
cmu_play_fusion
cmu_stretch
berkeley_gnm_recon
berkeley_gnm_cory_hall
berkeley_gnm_sac_son
```

Optional subdatasets (Full Name):

```
RT-1 Robot Action
QT-Opt
Berkeley Bridge
Freiburg Franka Play
USC Jaco Play
Berkeley Cable Routing
Roboturk
NYU VINN
Austin VIOLA
Berkeley Autolab UR5
TOTO Benchmark
Language Table
Columbia PushT Dataset
Stanford Kuka Multimodal
NYU ROT
Stanford HYDRA
Austin BUDS
NYU Franka Play
Maniskill
Furniture Bench
CMU Franka Exploration
UCSD Kitchen
UCSD Pick Place
Austin Sailor
Austin Sirius
BC-Z
USC Cloth Sim
Tokyo PR2 Fridge Opening
Tokyo PR2 Tabletop Manipulation
Saytap
UTokyo xArm PickPlace
UTokyo xArm Bimanual
Robonet
Berkeley MVP Data
Berkeley RPT Data
KAIST Nonprehensile Objects
QUT Dynamic Grasping
Stanford MaskVIT Data
LSMO Dataset
DLR Sara Pour Dataset
DLR Sara Grid Clamp Dataset
DLR Wheelchair Shared Control
ASU TableTop Manipulation
Stanford Robocook
ETH Agent Affordances
Imperial Wrist Cam
CMU Franka Pick-Insert Data
QUT Dexterous Manipulation
MPI Muscular Proprioception
UIUC D3Field
Austin Mutex
Berkeley Fanuc Manipulation
CMU Food Manipulation
CMU Play Fusion
CMU Stretch
RECON
CoryHall
SACSoN
RoboVQA
ALOHA
```

## Copyright Notice

- This is an unofficial Dataset Repo.
- Copyright 2023 DeepMind Technologies Limited
- All software is licensed under the Apache License, Version 2.0 (Apache 2.0); you may not use this file except in compliance with the Apache 2.0 license. You may obtain a copy of the Apache 2.0 license at: https://www.apache.org/licenses/LICENSE-2.0
- All other materials are licensed under the Creative Commons Attribution 4.0 International License (CC-BY). You may obtain a copy of the CC-BY license at: https://creativecommons.org/licenses/by/4.0/legalcode
- Unless required by applicable law or agreed to in writing, all software and materials distributed here under the Apache 2.0 or CC-BY licenses are distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the licenses for the specific language governing permissions and limitations under those licenses.
4,770
[ [ -0.034423828125, -0.04022216796875, 0.037506103515625, -0.00450897216796875, -0.002513885498046875, -0.0159912109375, -0.0142059326171875, -0.0196075439453125, 0.023651123046875, 0.0341796875, -0.070068359375, -0.0472412109375, -0.0321044921875, 0.0048065185...
rusheeliyer/german-courts
2023-11-01T10:50:44.000Z
[ "region:us" ]
rusheeliyer
null
null
0
132
2023-11-01T10:46:49
---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
configs:
- config_name: bundesfinanzhof
  data_files:
  - split: train
    path: data/Bundesfinanzhof_train.csv
  - split: test
    path: data/Bundesfinanzhof_test.csv
  - split: val
    path: data/Bundesfinanzhof_val.csv
- config_name: bundesgerichtshof
  data_files:
  - split: train
    path: data/Bundesgerichtshof_train.csv
  - split: test
    path: data/Bundesgerichtshof_test.csv
  - split: val
    path: data/Bundesgerichtshof_val.csv
- config_name: bundesarbeitsgericht
  data_files:
  - split: train
    path: data/Bundesarbeitsgericht_train.csv
  - split: test
    path: data/Bundesarbeitsgericht_test.csv
  - split: val
    path: data/Bundesarbeitsgericht_val.csv
- config_name: bundessozialgericht
  data_files:
  - split: train
    path: data/Bundessozialgericht_train.csv
  - split: test
    path: data/Bundessozialgericht_test.csv
  - split: val
    path: data/Bundessozialgericht_val.csv
- config_name: bundesverwaltungsgericht
  data_files:
  - split: train
    path: data/Bundesverwaltungsgericht_train.csv
  - split: test
    path: data/Bundesverwaltungsgericht_test.csv
  - split: val
    path: data/Bundesverwaltungsgericht_val.csv
- config_name: bundesverfassungsgericht
  data_files:
  - split: train
    path: data/Bundesverfassungsgericht_train.csv
  - split: test
    path: data/Bundesverfassungsgericht_test.csv
  - split: val
    path: data/Bundesverfassungsgericht_val.csv
---

# Dataset Card for Dataset Name

<!-- Provide a quick summary of the dataset. -->

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is.
-->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available.
--> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
5,969
[ [ -0.04034423828125, -0.0419921875, 0.00975799560546875, 0.0178070068359375, -0.030059814453125, -0.0089263916015625, -0.0026798248291015625, -0.048431396484375, 0.043212890625, 0.059478759765625, -0.05938720703125, -0.06951904296875, -0.042205810546875, 0.009...
senti_ws
2023-01-25T14:44:03.000Z
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-scoring", "task_ids:part-of-speech", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual"...
null
SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis and POS tagging. The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity-bearing words are weighted within the interval of [-1, 1].
@INPROCEEDINGS{remquahey2010, title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis}, booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)}, author = {Remus, R. and Quasthoff, U. and Heyer, G.}, year = {2010} }
1
131
2022-03-02T23:29:22
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - de license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification - text-classification task_ids: - text-scoring - sentiment-scoring - part-of-speech pretty_name: SentiWS dataset_info: - config_name: pos-tagging features: - name: word dtype: string - name: pos-tag dtype: class_label: names: '0': NN '1': VVINF '2': ADJX '3': ADV splits: - name: train num_bytes: 75530 num_examples: 3471 download_size: 97748 dataset_size: 75530 - config_name: sentiment-scoring features: - name: word dtype: string - name: sentiment-score dtype: float32 splits: - name: train num_bytes: 61646 num_examples: 3471 download_size: 97748 dataset_size: 61646 --- # Dataset Card for SentiWS ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://wortschatz.uni-leipzig.de/en/download - **Repository:** [Needs More 
Information] - **Paper:** http://www.lrec-conf.org/proceedings/lrec2010/pdf/490_Paper.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining, etc. It lists positive and negative polarity-bearing words weighted within the interval of [-1; 1] plus their part of speech tag, and if applicable, their inflections. The current version of SentiWS contains around 1,650 positive and 1,800 negative words, which sum up to around 16,000 positive and 18,000 negative word forms incl. their inflections, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. ### Supported Tasks and Leaderboards Sentiment-Scoring, Pos-Tagging ### Languages German ## Dataset Structure ### Data Instances For pos-tagging: ``` { "word": "Abbau", "pos_tag": 0 } ``` For sentiment-scoring: ``` { "word": "Abbau", "sentiment-score": -0.058 } ``` ### Data Fields SentiWS is UTF8-encoded text. For pos-tagging: - word: one word as a string, - pos_tag: the part-of-speech tag of the word as an integer, For sentiment-scoring: - word: one word as a string, - sentiment-score: the sentiment score of the word as a float between -1 and 1, The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity-bearing words are weighted within the interval of [-1, 1]. ### Data Splits train: 1,650 negative and 1,818 positive words ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License ### Citation Information @INPROCEEDINGS{remquahey2010, title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis}, booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)}, author = {Remus, R. and Quasthoff, U. and Heyer, G.}, year = {2010} } ### Contributions Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
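The `pos-tagging` config above stores the tag as a `class_label` integer. A minimal sketch of decoding it back to the labels and English glosses listed in the card (`decode_pos` is a hypothetical helper; the sample record is the `Abbau` instance shown above):

```python
# Label order as declared in the card's dataset_info for the "pos-tagging" config.
POS_NAMES = ["NN", "VVINF", "ADJX", "ADV"]
# English glosses given in the dataset summary, in the same order.
POS_GLOSSES = {"NN": "noun", "VVINF": "verb", "ADJX": "adjective", "ADV": "adverb"}

def decode_pos(example):
    """Replace the integer pos_tag with its string label and English gloss."""
    tag = POS_NAMES[example["pos_tag"]]
    return {"word": example["word"], "pos_tag": tag, "gloss": POS_GLOSSES[tag]}

sample = {"word": "Abbau", "pos_tag": 0}  # instance shown in the card
print(decode_pos(sample))  # -> {'word': 'Abbau', 'pos_tag': 'NN', 'gloss': 'noun'}
```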
5,009
[ [ -0.03887939453125, -0.0330810546875, 0.012359619140625, 0.0287322998046875, -0.0275726318359375, -0.004085540771484375, -0.0291748046875, -0.022064208984375, 0.04144287109375, 0.0257568359375, -0.07177734375, -0.0694580078125, -0.053863525390625, 0.009605407...
GroNLP/ik-nlp-22_slp
2023-02-01T18:25:21.000Z
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-retrieval", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unk...
GroNLP
Paragraphs from the Speech and Language Processing book (3ed) by Jurafsky and Martin extracted semi-automatically from Chapters 2 to 11 of the original book draft.
@book{slp3ed-iknlp2022, author = {Jurafsky, Daniel and Martin, James}, year = {2021}, month = {12}, pages = {1--235, 1--19}, title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition}, volume = {3} }
0
131
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering - summarization - text-retrieval pretty_name: slp3ed-iknlp2022 tags: - question-generation --- # Dataset Card for IK-NLP-22 Speech and Language Processing ## Table of Contents - [Dataset Card for IK-NLP-22 Speech and Language Processing](#dataset-card-for-ik-nlp-22-speech-and-language-processing) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Projects](#projects) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Paragraphs Configuration](#paragraphs-configuration) - [Questions Configuration](#questions-configuration) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Source:** [Stanford](https://web.stanford.edu/~jurafsky/slp3/) - **Point of Contact:** [Gabriele Sarti](mailto:ik-nlp-course@rug.nl) ### Dataset Summary This dataset contains chapters extracted from the Speech and Language Processing book (3ed draft of January 2022) by Jurafsky and Martin via a semi-automatic procedure (see below for additional details). Moreover, a small set of conceptual questions associated with each chapter is provided alongside possible answers. 
Only the content of chapters 2 to 11 of the book draft is provided, since these are the ones relevant to the contents of the 2022 edition of the Natural Language Processing course at the Information Science Master's Degree (IK) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti). *The Speech and Language Processing book was made freely available by the authors [Dan Jurafsky](http://web.stanford.edu/people/jurafsky/) and [James H. Martin](http://www.cs.colorado.edu/~martin/) on the [Stanford University website](https://web.stanford.edu/~jurafsky/slp3/). The present dataset was created for educational purposes, and is based on the draft of the 3rd edition of the book accessed on December 29th, 2021. All rights of the present contents are attributed to the original authors.* ### Projects See the course page for a description of possible research directions. ### Languages The language data of Speech and Language Processing is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances The dataset contains two configurations: `paragraphs` (default), containing the full set of parsed paragraphs associated with the respective chapter and sections, and `questions`, containing a small subset of example questions matched with the relevant paragraph, and with the answer span extracted. #### Paragraphs Configuration The `paragraphs` configuration contains all the paragraphs of the selected book chapters, each associated with the respective chapter, section and subsection. An example from the `train` split of the `paragraphs` config is provided below. The example belongs to section 2.3 but not to a subsection, so the `n_subsection` and `subsection` fields are empty strings. 
```json { "n_chapter": "2", "chapter": "Regular Expressions", "n_section": "2.3", "section": "Corpora", "n_subsection": "", "subsection": "", "text": "It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)" } ``` The text is provided as-is, without further preprocessing or tokenization. #### Questions Configuration The `questions` configuration contains a small subset of questions, the top retrieved paragraph relevant to the question and the answer spans. An example from the `test` split of the `questions` config is provided below. ```json { "chapter": "Regular Expressions", "section": "Regular Expressions", "subsection": "Basic Regular Expressions", "question": "What is the meaning of the Kleene star in Regex?", "paragraph": "This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like \"some number of as\" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced \"cleany star\"). The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\". So /a*/ means \"any string of zero or more as\". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means \"zero or more a's or b's\" (not \"zero or more right square braces\"). 
This will match strings like aaaa or ababab or bbbb.", "answer": "The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\"" } ``` ### Data Splits | config| train| test| |------------:|-----:|----:| |`paragraphs` | 1697 | - | |`questions` | - | 59 | ### Dataset Creation The contents of the Speech and Language Processing book PDF were extracted using the [PDF to S2ORC JSON Converter](https://github.com/allenai/s2orc-doc2json) by AllenAI. The texts extracted by the converter were then manually cleaned to remove end-of-chapter exercises and other irrelevant content (e.g. tables, TikZ figures, etc.). Some issues in the parsed content were preserved in the final version to maintain a naturalistic setting for the associated projects, promoting the use of data filtering heuristics for students. The question-answer pairs were created manually by Gabriele Sarti. ## Additional Information ### Dataset Curators For problems on this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl). ### Licensing Information Please refer to the authors' websites for licensing information. ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @book{slp3ed-iknlp2022, author = {Jurafsky, Daniel and Martin, James}, year = {2021}, month = {12}, pages = {1--235, 1--19}, title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition}, volume = {3} } ```
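Since answers in the `questions` configuration are extracted spans, each answer should occur verbatim in its paragraph. A small sanity check along those lines (`answer_span` is a hypothetical helper; the record is abridged from the example above):

```python
def answer_span(example):
    """Return (start, end) character offsets of the answer in the paragraph, or None."""
    start = example["paragraph"].find(example["answer"])
    return None if start == -1 else (start, start + len(example["answer"]))

record = {
    "question": "What is the meaning of the Kleene star in Regex?",
    "paragraph": 'The Kleene star means "zero or more occurrences of the '
                 'immediately previous character or regular expression". '
                 'So /a*/ means "any string of zero or more as".',
    "answer": 'The Kleene star means "zero or more occurrences of the '
              'immediately previous character or regular expression"',
}
span = answer_span(record)
assert span is not None  # None would flag a non-extractive answer
print(record["paragraph"][span[0]:span[1]] == record["answer"])  # -> True
```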
6,964
[ [ -0.0377197265625, -0.059906005859375, 0.00867462158203125, 0.01042938232421875, -0.0156402587890625, -0.0019445419311523438, -0.0198211669921875, -0.04461669921875, 0.01107025146484375, 0.043853759765625, -0.043731689453125, -0.04534912109375, -0.0341796875, ...
SetFit/rte
2022-02-28T12:46:43.000Z
[ "region:us" ]
SetFit
null
null
0
131
2022-03-02T23:29:22
# Glue RTE This dataset is a port of the official [`rte` dataset](https://huggingface.co/datasets/glue/viewer/rte/train) on the Hub. Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively. Also, the test split is not labeled; the label column values are always -1.
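Because the test split uses -1 as a placeholder label, metric code has to mask those rows out first. A minimal sketch with toy rows (the `text1`/`text2`/`label` field names follow the note above; `drop_unlabeled` is a hypothetical helper):

```python
def drop_unlabeled(rows):
    """Keep only rows with a real class label; the test split uses -1 as a placeholder."""
    return [row for row in rows if row["label"] != -1]

rows = [
    {"text1": "A man is playing a guitar.", "text2": "A man plays an instrument.", "label": 0},
    {"text1": "It is raining.", "text2": "The sun is shining.", "label": -1},  # unlabeled test row
]
print(len(drop_unlabeled(rows)))  # -> 1
```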
313
[ [ -0.033355712890625, -0.062744140625, -0.0020389556884765625, 0.0275726318359375, -0.01224517822265625, -0.00547027587890625, -0.0006666183471679688, -0.0161895751953125, 0.0648193359375, 0.03704833984375, -0.059661865234375, -0.01702880859375, -0.0465087890625, ...
codeparrot/codeparrot-clean-valid
2022-10-10T15:28:51.000Z
[ "region:us" ]
codeparrot
null
null
5
131
2022-03-02T23:29:22
# CodeParrot 🦜 Dataset Cleaned (valid) Train split of [CodeParrot 🦜 Dataset Cleaned](https://huggingface.co/datasets/lvwerra/codeparrot-clean). ## Dataset structure ```python DatasetDict({ train: Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 61373 }) }) ```
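The feature list above lends itself to simple quality filtering. A hedged sketch that drops autogenerated files and files with very long lines (the threshold and the `keep_example` helper are illustrative, not part of the dataset):

```python
def keep_example(example, max_line_max=1000):
    """Illustrative filter: skip autogenerated files and files with very long lines."""
    return not example["autogenerated"] and example["line_max"] <= max_line_max

examples = [
    {"repo_name": "a/b", "content": "print('hi')\n", "autogenerated": False, "line_max": 80},
    {"repo_name": "c/d", "content": "# generated\n", "autogenerated": True, "line_max": 60},
]
kept = [ex for ex in examples if keep_example(ex)]
print([ex["repo_name"] for ex in kept])  # -> ['a/b']
```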
395
[ [ -0.033355712890625, -0.0130615234375, -0.0213165283203125, 0.0100860595703125, -0.03765869140625, 0.016021728515625, -0.017059326171875, 0.010040283203125, 0.032135009765625, 0.039642333984375, -0.027557373046875, -0.030914306640625, -0.02886962890625, 0.019...
BeIR/quora
2022-10-23T06:03:40.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
1
131
2022-06-05T16:53:54
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The benchmark supports a leaderboard that evaluates retrieval models in a zero-shot setting, using retrieval metrics such as nDCG@10. 
The current best-performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with the document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
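The three-file layout described above can be parsed with the standard library alone. A minimal sketch for the qrels TSV (header row included, as the format requires; `parse_qrels` is a hypothetical helper) that produces the nested dict shown in the Data Instances example:

```python
import csv
import io

def parse_qrels(tsv_text):
    """Parse a qrels TSV (query-id, corpus-id, score; the first row is a header)."""
    qrels = {}
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"
print(parse_qrels(tsv))  # -> {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```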
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
13,988
[ [ -0.0396728515625, -0.03985595703125, 0.01094818115234375, 0.00363922119140625, 0.0042266845703125, 0.00008571147918701172, -0.0081939697265625, -0.018890380859375, 0.0216827392578125, 0.00595855712890625, -0.034332275390625, -0.054534912109375, -0.02639770507812...
juancavallotti/multilingual-gec
2023-01-06T18:59:59.000Z
[ "task_categories:translation", "size_categories:100K<n<1M", "language:en", "language:es", "language:fr", "language:de", "license:apache-2.0", "grammar", "gec", "multi language", "language detection", "region:us" ]
juancavallotti
null
null
2
131
2023-01-06T16:07:20
--- author: Juan Alberto López Cavallotti date: Jan 6, 2023 license: apache-2.0 task_categories: - translation language: - en - es - fr - de tags: - grammar - gec - multi language - language detection pretty_name: Multi Lingual Grammar Error Correction Dataset size_categories: - 100K<n<1M --- # Dataset Card for Multilingual Grammar Error Correction ## Dataset Description - **Homepage:** https://juancavallotti.com - **Paper:** https://blog.juancavallotti.com/2023/01/06/training-a-multi-language-grammar-error-correction-system/ - **Point of Contact:** Juan Alberto López Cavallotti ### Dataset Summary This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German. This dataset was developed as a component for the [Squidigies](https://squidgies.app/) platform. ### Supported Tasks and Leaderboards * **Grammar Error Correction:** By appending the prefix *fix grammar:* to the prompt. * **Language Detection:** By appending the prefix *language:* to the prompt. ### Languages * English * Spanish * French * German ## Dataset Structure ### Data Instances The dataset contains the following instances for each language: * German: 32282 sentences. * English: 51393 sentences. * Spanish: 67672 sentences. * French: 67157 sentences. ### Data Fields * `lang`: The language of the sentence. * `sentence`: The original sentence. * `modified`: The corrupted sentence. * `transformation`: The primary transformation used by the synthetic data generator. * `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator. ### Data Splits * `train`: There isn't a specific split defined. I recommend using 1k sentences sampled randomly from each language, combined with the SacreBleu metric. ## Dataset Creation ### Curation Rationale This dataset was generated synthetically through code with the help of information about common grammar errors harvested throughout the internet. 
### Source Data #### Initial Data Collection and Normalization The source grammatical sentences come from various open-source datasets, such as Tatoeba. #### Who are the source language producers? * Juan Alberto López Cavallotti ### Annotations #### Annotation process The annotation is automatic and produced by the generation script. #### Who are the annotators? * Data generation script by Juan Alberto López Cavallotti ### Other Known Limitations The dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results. ## Additional Information ### Dataset Curators * Juan Alberto López Cavallotti ### Licensing Information This dataset is distributed under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0) ### Citation Information Please mention this original dataset and the author **Juan Alberto López Cavallotti** ### Contributions * Juan Alberto López Cavallotti
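The two task prefixes described above map naturally onto T5-style input/target pairs. A minimal sketch, assuming only the field names documented in the card (`lang`, `sentence`, `modified`); the prefix strings follow the card, everything else is illustrative:

```python
# Sketch: build T5-style (input, target) training pairs from one dataset row.
# Field names (lang, sentence, modified) and the "fix grammar:" / "language:"
# prefixes follow the card above; the sample row is invented for illustration.

def make_examples(row):
    """Return (input, target) pairs for both documented tasks."""
    return [
        # Grammar correction: corrupted sentence in, clean sentence out.
        (f"fix grammar: {row['modified']}", row["sentence"]),
        # Language detection: clean sentence in, language code out.
        (f"language: {row['sentence']}", row["lang"]),
    ]

row = {"lang": "en", "sentence": "She goes home.", "modified": "She go home."}
for source, target in make_examples(row):
    print(source, "->", target)
```

In practice each pair would then be tokenized and fed to the seq2seq trainer of your choice.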
2,976
[ [ -0.0059967041015625, -0.05340576171875, 0.0173797607421875, 0.0496826171875, 0.01322174072265625, -0.0018167495727539062, -0.0274810791015625, -0.01058197021484375, 0.0228118896484375, 0.033355712890625, -0.05914306640625, -0.050201416015625, -0.039154052734375,...
EleutherAI/pythia-memorized-evals
2023-03-14T15:12:36.000Z
[ "region:us" ]
EleutherAI
null
null
2
131
2023-03-14T15:11:02
--- dataset_info: features: - name: index dtype: int64 - name: tokens sequence: int64 - name: __index_level_0__ dtype: int64 splits: - name: duped.1.4b num_bytes: 730820104 num_examples: 1373722 - name: deduped.1.4b num_bytes: 557587604 num_examples: 1048097 - name: duped.160m num_bytes: 366906036 num_examples: 689673 - name: deduped.160m num_bytes: 309195740 num_examples: 581195 - name: duped.12b num_bytes: 1267397432 num_examples: 2382326 - name: deduped.12b num_bytes: 995486380 num_examples: 1871215 - name: duped.70m num_bytes: 246822996 num_examples: 463953 - name: deduped.70m num_bytes: 218890336 num_examples: 411448 - name: duped.2.8b num_bytes: 891140964 num_examples: 1675077 - name: deduped.2.8b num_bytes: 720972252 num_examples: 1355211 - name: duped.410m num_bytes: 516221412 num_examples: 970341 - name: deduped.410m num_bytes: 431472748 num_examples: 811039 - name: duped.6.9b num_bytes: 1128355508 num_examples: 2120969 - name: deduped.6.9b num_bytes: 893916408 num_examples: 1680294 - name: duped.1b num_bytes: 668267012 num_examples: 1256141 - name: deduped.1b num_bytes: 549484180 num_examples: 1032865 download_size: 2931941971 dataset_size: 10492937112 --- # Dataset Card for "pythia-memorized-evals" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,547
[ [ -0.0194854736328125, -0.032928466796875, 0.020965576171875, 0.0051116943359375, -0.00939178466796875, 0.0181427001953125, 0.0066680908203125, 0.01340484619140625, 0.039520263671875, 0.0288238525390625, -0.036376953125, -0.05194091796875, -0.0164642333984375, ...
biu-nlp/abstract-sim-pubmed
2023-05-13T17:49:55.000Z
[ "region:us" ]
biu-nlp
null
null
2
131
2023-05-13T17:42:50
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
saattrupdan/womens-clothing-ecommerce-reviews
2023-05-25T20:18:53.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "multimodal", "region:us" ]
saattrupdan
null
null
0
131
2023-05-25T20:04:03
--- dataset_info: features: - name: review_text dtype: string - name: age dtype: int64 - name: rating dtype: int64 - name: positive_feedback_count dtype: int64 - name: division_name dtype: string - name: department_name dtype: string - name: class_name dtype: string - name: recommended_ind dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 7811312.540347158 num_examples: 20641 - name: val num_bytes: 378436.72982642107 num_examples: 1000 - name: test num_bytes: 378436.72982642107 num_examples: 1000 download_size: 4357015 dataset_size: 8568186.0 task_categories: - text-classification language: - en tags: - multimodal pretty_name: Women's Clothing E-Commerce Reviews size_categories: - 1K<n<10K --- # Dataset Card for "womens-clothing-ecommerce-reviews" Processed version of [this dataset](https://github.com/ya-stack/Women-s-Ecommerce-Clothing-Reviews).
1,007
[ [ -0.0116119384765625, -0.04351806640625, -0.005115509033203125, 0.010772705078125, -0.044921875, 0.01129913330078125, 0.0135345458984375, -0.0380859375, 0.047027587890625, 0.061859130859375, -0.0888671875, -0.0743408203125, -0.01035308837890625, 0.00251007080...
veezbo/akkadian_english_corpus
2023-09-30T21:32:28.000Z
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
veezbo
null
null
1
131
2023-09-29T07:22:07
--- license: mit task_categories: - text-generation language: - en pretty_name: English-translated Akkadian Corpus size_categories: - 1K<n<10K --- # Akkadian English Corpus This dataset is a cleaned corpus of expert English translations of Akkadian texts. It can be, and has been, used for text generation tasks, for example to fine-tune LLMs. ## How it was generated Please visit my [repo](https://github.com/veezbo/akkadian_english_corpus) on GitHub, which explains the steps that were taken to prepare this dataset for a text generation task. At a high level, these are the steps that were taken: - Sourced a high-quality dataset of English-translated Akkadian by experts - Enforced a minimum line length - Removed duplicate lines - Removed textual notes and other generic notes within parentheses - Inserted translation notes and literal notes in place (preserving grammar and adding clarity to the corpus) ## Credit Credit for the aggregation of the raw data belongs to the [Akkademia](https://github.com/gaigutherz/Akkademia/tree/master) project. Specifically, the exact data file used as the starting dataset is linked [here](https://github.com/gaigutherz/Akkademia/blob/master/NMT_input/train.en) and was also used to train their SOTA neural machine translation Akkadian->English model, as described in their recent [paper](https://academic.oup.com/pnasnexus/article/2/5/pgad096/7147349), Gutherz et al. 2023 [1]. Credit for the original source of the raw data belongs to the incredible Open Richly Annotated Cuneiform Corpus ([ORACC](http://oracc.org)) project [2]. Specifically, as noted by the Akkademia project above, the RINAP 1, 3, 4, and 5 datasets are the source of the original raw data.
## Citations [1] Gai Gutherz, Shai Gordin, Luis Sáenz, Omer Levy, Jonathan Berant, Translating Akkadian to English with neural machine translation, PNAS Nexus, Volume 2, Issue 5, May 2023, pgad096, https://doi.org/10.1093/pnasnexus/pgad096 [2] Jamie Novotny, Eleanor Robson, Steve Tinney, Niek Veldhuis, et al. Open Richly Annotated Cuneiform Corpus, http://oracc.org
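The high-level cleaning steps listed above can be sketched in plain Python; the minimum length, the parenthetical-note regex, and the sample lines here are illustrative assumptions (the linked GitHub repo holds the actual preparation code):

```python
import re

def clean_corpus(lines, min_len=20):
    """Illustrative version of the card's cleaning steps: strip generic
    parenthetical notes, enforce a minimum line length, and drop duplicates.
    The real pipeline also re-inserts translation/literal notes in place."""
    seen, out = set(), []
    for line in lines:
        line = re.sub(r"\([^)]*\)", "", line)   # remove parenthetical notes
        line = re.sub(r"\s+", " ", line).strip()  # normalize whitespace
        if len(line) >= min_len and line not in seen:  # length + dedup filters
            seen.add(line)
            out.append(line)
    return out

sample = [
    "The king (of Assyria) marched to the city.",
    "The king marched to the city.",  # duplicate after note removal
    "short",                          # fails the minimum-length filter
]
print(clean_corpus(sample))  # prints ['The king marched to the city.']
```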
2,067
[ [ -0.01332855224609375, -0.044464111328125, 0.0231170654296875, -0.0054168701171875, -0.0261993408203125, -0.00830841064453125, -0.0288848876953125, -0.0234832763671875, 0.015106201171875, 0.0634765625, -0.037139892578125, -0.052398681640625, -0.03363037109375, ...
peterbeamish/hack-cnn
2023-10-13T01:10:44.000Z
[ "source_datasets:github", "language:en", "license:other", "region:us" ]
peterbeamish
null
null
0
131
2023-10-12T22:15:54
--- language: - en license: other license_name: notouch license_details: notouch source_datasets: - github configs: - config_name: default splits: - name: train num_bytes: 725 num_examples: 2 - name: test num_bytes: 725 num_examples: 2 dataset_info: - config_name: default features: - name: highlights dtype: string - name: article dtype: string splits: - name: train num_bytes: 725 num_examples: 2 - name: test num_bytes: 725 num_examples: 2 download_size: 6468 dataset_size: 1450 --- # Readme hello!
566
[ [ -0.02197265625, -0.036376953125, 0.039703369140625, 0.01508331298828125, -0.061859130859375, 0.023345947265625, 0.00975799560546875, -0.007778167724609375, 0.06268310546875, 0.06597900390625, -0.0482177734375, -0.0290069580078125, -0.06683349609375, 0.014419...
ncslgr
2022-11-03T16:16:28.000Z
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:n<1K", "source_datasets:original", "language:ase", "language:en", "license:mit", "region:us" ]
null
A small corpus of American Sign Language (ASL) video data from native signers, annotated with non-manual features.
@misc{dataset:databases2007volumes, title={Volumes 2--7}, author={Databases, NCSLGR}, year={2007}, publisher={American Sign Language Linguistic Research Project (Distributed on CD-ROM~…} }
4
130
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ase - en license: - mit multilinguality: - translation size_categories: - n<1K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: NCSLGR dataset_info: - config_name: entire_dataset features: - name: eaf dtype: string - name: sentences sequence: - name: gloss dtype: string - name: text dtype: string - name: videos sequence: string splits: - name: train num_bytes: 783504 num_examples: 870 download_size: 4113829143 dataset_size: 783504 - config_name: annotations features: - name: eaf dtype: string - name: sentences sequence: - name: gloss dtype: string - name: text dtype: string - name: videos sequence: string splits: - name: train num_bytes: 371725 num_examples: 870 download_size: 5335358 dataset_size: 371725 --- # Dataset Card for NCSLGR ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.bu.edu/asllrp/ncslgr.html - 
**Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A small corpus of American Sign Language (ASL) video data from native signers, annotated with non-manual features. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - American Sign Language - English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - eaf: path to an ELAN annotation file - videos: sequence of strings to video paths - sentences: sequence of parallel sentences - gloss: American Sign Language gloss annotations - text: English text ### Data Splits None ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @misc{dataset:databases2007volumes, title={Volumes 2--7}, author={Databases, NCSLGR}, year={2007}, publisher={American Sign Language Linguistic Research Project (Distributed on CD-ROM~…} } ``` ### Contributions Thanks to [@AmitMY](https://github.com/AmitMY) for adding this dataset.
3,891
[ [ -0.016204833984375, -0.01371002197265625, -0.01088714599609375, 0.0136260986328125, -0.03326416015625, 0.022003173828125, -0.0162506103515625, -0.035308837890625, 0.04498291015625, 0.039154052734375, -0.048583984375, -0.085205078125, -0.05291748046875, 0.017...
xor_tydi_qa
2023-01-25T15:03:13.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "source_datasets:extended|tydiqa", "langu...
null
XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections. There are three sub-tasks: XOR-Retrieve, XOR-EnglishSpan, and XOR-Full.
@misc{asai2020xor, title={XOR QA: Cross-lingual Open-Retrieval Question Answering}, author={Akari Asai and Jungo Kasai and Jonathan H. Clark and Kenton Lee and Eunsol Choi and Hannaneh Hajishirzi}, year={2020}, eprint={2010.11856}, archivePrefix={arXiv}, primaryClass={cs.CL} }
1
130
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - expert-generated - found language: - ar - bn - fi - ja - ko - ru - te license: - mit multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original - extended|tydiqa task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: xor-tydi-qa pretty_name: XOR QA dataset_info: - config_name: xor-retrieve features: - name: question dtype: string - name: lang dtype: class_label: names: '0': ar '1': bn '2': fi '3': ja '4': ko '5': ru '6': te - name: answers dtype: string splits: - name: train num_bytes: 1698662 num_examples: 15250 - name: validation num_bytes: 259533 num_examples: 2110 - name: test num_bytes: 219046 num_examples: 2499 download_size: 3702288 dataset_size: 2177241 - config_name: xor-full features: - name: question dtype: string - name: lang dtype: class_label: names: '0': ar '1': bn '2': fi '3': ja '4': ko '5': ru '6': te - name: answers dtype: string splits: - name: train num_bytes: 7250913 num_examples: 61360 - name: validation num_bytes: 444672 num_examples: 3473 - name: test num_bytes: 706664 num_examples: 8176 download_size: 14018298 dataset_size: 8402249 --- # Dataset Card for XOR QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [XOR QA Homepage](https://nlp.cs.washington.edu/xorqa/) - **Repository:** [XOR QA Repository](https://github.com/AkariAsai/XORQA) - **Paper:** [XOR QA Paper](https://arxiv.org/abs/2010.11856) - **Leaderboard:** [XOR QA Leaderboard](https://nlp.cs.washington.edu/xorqa/) - **Point of Contact:** [Akari Asai](akari@cs.washington.edu) ### Dataset Summary XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections. ### Supported Tasks and Leaderboards There are three sub-tasks: XOR-Retrieve, XOR-EnglishSpan, and XOR-Full. - `XOR-Retrieve`: XOR-Retrieve is a cross-lingual retrieval task where a question is written in a target language (e.g., Japanese) and a system is required to retrieve English paragraphs that answer the question. The dataset can be used to train a model for cross-lingual retrieval. Success on this task is typically measured by R@5kt and R@2kt (recall, computed as the fraction of questions for which the minimal answer is contained in the top 5,000 / 2,000 retrieved tokens). This task has an active leaderboard which can be found at [leaderboard url](https://nlp.cs.washington.edu/xorqa/) - `XOR-English Span`: XOR-English Span is a cross-lingual task where a question is written in a target language (e.g., Japanese) and a system is required to output a short answer in English.
The dataset can be used to train a model for cross-lingual retrieval. Success on this task is typically measured by F1 and EM. This task has an active leaderboard which can be found at [leaderboard url](https://nlp.cs.washington.edu/xorqa/) - `XOR-Full`: XOR-Full is a cross-lingual retrieval task where a question is written in the target language (e.g., Japanese) and a system is required to output a short answer in the target language. Success on this task is typically measured by F1, EM, and BLEU. This task has an active leaderboard which can be found at [leaderboard url](https://nlp.cs.washington.edu/xorqa/) ### Languages The text in the dataset is available in 7 languages: Arabic `ar`, Bengali `bn`, Finnish `fi`, Japanese `ja`, Korean `ko`, Russian `ru`, Telugu `te` ## Dataset Structure ### Data Instances A typical data point comprises a `question`, its `answers`, the `lang` of the question text, and the split to which it belongs. ``` { "id": "-3979399588609321314", "question": "Сколько детей было у Наполео́на I Бонапа́рта?", "answers": ["сын"], "lang": "ru", "split": "train" } ``` ### Data Fields - `id`: An identifier for each example in the dataset - `question`: Open-domain question - `answers`: The corresponding answer to the question posed - `lang`: BCP-47 language tag - `split`: identifier to differentiate train, validation and test splits ### Data Splits The data is split into a training, validation and test set for each of the two configurations. | | train | validation | test | |--------------|------:|-----------:|-----:| | XOR Retrieve | 15250 | 2113 | 2501 | | XOR Full | 61360 | 3179 | 8177 | ## Dataset Creation ### Curation Rationale This task framework reflects real-world scenarios well: a QA system uses multilingual document collections and answers questions asked by users with diverse linguistic and cultural backgrounds.
Despite the common assumption that we can find answers in the target language, web resources in non-English languages are largely limited compared to English (information scarcity), or the contents are biased towards their own cultures (information asymmetry). To solve these issues, XOR-TYDI QA (Asai et al., 2020) provides a benchmark for developing a multilingual QA system that finds answers in multiple languages. ### Source Data The annotation pipeline consists of four steps: 1) collection of realistic questions that require cross-lingual references, by annotating questions from TYDI QA without a same-language answer; 2) question translation from a target language to the pivot language of English, where the missing information may exist; 3) answer span selection in the pivot language given a set of candidate documents; 4) answer verification and translation from the pivot language back to the original language. #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The dataset is created by extending the TyDiQA dataset and translating the questions into other languages. The answers are obtained by crowdsourcing the questions to Mechanical Turk workers. ### Annotations #### Annotation process The English questions from TyDiQA are translated into other languages. The languages are chosen based on the availability of Wikipedia data and the availability of translators. #### Who are the annotators? The translations are carried out using the professional translation service [Gengo](https://gengo.com), and the answers are annotated by Mechanical Turk workers. ### Personal and Sensitive Information The dataset is created from Wikipedia content, and since the QA task requires preserving named entities, all Wikipedia named entities are preserved in the data. Not much information has been provided about masking sensitive information.
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The people associated with the creation of the dataset are Akari Asai, Jungo Kasai, Jonathan H. Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. ### Licensing Information XOR-TyDi QA is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license. ### Citation Information ``` @article{xorqa, title = {XOR QA: Cross-lingual Open-Retrieval Question Answering}, author = {Akari Asai and Jungo Kasai and Jonathan H. Clark and Kenton Lee and Eunsol Choi and Hannaneh Hajishirzi}, year = {2020} } ``` ### Contributions Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
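The R@5kt / R@2kt metric described under XOR-Retrieve above can be sketched as follows; whitespace tokenization and substring answer matching are simplifying assumptions here (the official scorer in the XORQA repository is authoritative):

```python
# Sketch of recall@k-tokens: the fraction of questions whose minimal answer
# appears within the first k tokens of the retrieved passages.
# Tokenization and matching are simplified; data below is invented.

def recall_at_k_tokens(questions, k=5000):
    """questions: list of {"answers": [str, ...], "retrieved": [str, ...]}."""
    hits = 0
    for q in questions:
        tokens = " ".join(q["retrieved"]).split()[:k]  # first k whitespace tokens
        window = " ".join(tokens)
        if any(ans in window for ans in q["answers"]):  # substring match
            hits += 1
    return hits / len(questions)

data = [
    {"answers": ["Paris"], "retrieved": ["Paris is the capital of France."]},
    {"answers": ["1867"], "retrieved": ["No relevant text here."]},
]
print(recall_at_k_tokens(data, k=10))  # prints 0.5
```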
9,054
[ [ -0.03997802734375, -0.036773681640625, 0.003612518310546875, 0.0014257431030273438, -0.009674072265625, 0.01432037353515625, -0.0097808837890625, -0.0352783203125, 0.04248046875, 0.0245361328125, -0.04302978515625, -0.054534912109375, -0.0291748046875, 0.023...
Kira-Asimov/gender_clinical_trial
2022-02-10T10:16:03.000Z
[ "region:us" ]
Kira-Asimov
null
null
2
130
2022-03-02T23:29:22
# Gender classification from Clinical Trial Public Data
58
[ [ 0.00049591064453125, 0.00728607177734375, 0.025787353515625, 0.0450439453125, 0.029449462890625, -0.00748443603515625, 0.0018291473388671875, -0.002475738525390625, -0.007183074951171875, 0.0457763671875, 0.0005426406860351562, -0.08135986328125, -0.049926757812...
SocialGrep/the-reddit-covid-dataset
2022-07-01T18:40:57.000Z
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
SocialGrep
This dataset attempts to capture the full extent of COVID-19 discussion across the entire site of Reddit. All posts and comments found to mention the term 'COVID' as of 2021-10-25 have been gathered from the site.
null
1
130
2022-03-02T23:29:22
--- annotations_creators: - lexyr language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original paperswithcode_id: null --- # Dataset Card for the-reddit-covid-dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Disclaimer Due to file size limitations, we are not able to provide the comments for this dataset. Please feel free to download them from the [website](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset#the-reddit-covid-dataset) - no registration required. 
## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset) ### Dataset Summary This corpus contains all the mentions of the term `covid` in post titles on the social media platform Reddit, up until the 25th of October, 2021. The data was procured from Reddit using [SocialGrep](https://socialgrep.com/?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset). ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. 
## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information CC-BY v4.0 ### Contributions [Needs More Information]
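Once the files are downloaded from the linked site, the fields listed above can be consumed directly. A minimal sketch (field names as documented in the card; the sentiment cutoff and sample rows are illustrative assumptions) of separating posts from negative-sentiment comments:

```python
# Sketch: split rows of the dataset into posts and negative-sentiment comments.
# The 'type' and 'sentiment' fields follow the card above; the cutoff value
# and the sample rows are invented for illustration.

def split_rows(rows, sentiment_cutoff=0.0):
    """Return (posts, negative_comments) from a list of row dicts."""
    posts = [r for r in rows if r["type"] == "post"]
    comments = [
        r for r in rows
        if r["type"] == "comment" and r.get("sentiment", 0.0) < sentiment_cutoff
    ]
    return posts, comments

rows = [
    {"type": "post", "title": "covid update"},
    {"type": "comment", "body": "bad news", "sentiment": -0.8},
    {"type": "comment", "body": "great news", "sentiment": 0.9},
]
posts, negative = split_rows(rows)
print(len(posts), len(negative))  # prints 1 1
```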
4,297
[ [ -0.04052734375, -0.05950927734375, 0.0101318359375, 0.035064697265625, -0.032867431640625, 0.00128936767578125, -0.02099609375, -0.031524658203125, 0.060333251953125, 0.013519287109375, -0.06646728515625, -0.07269287109375, -0.05084228515625, 0.0178680419921...
classla/FRENK-hate-en
2022-10-21T07:52:06.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:other", "hate-speech-detection", "offensive-language", "arxiv:1906.02045", "region:us" ]
classla
The FRENK Datasets of Socially Unacceptable Discourse in English.
@misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} }
1
130
2022-03-02T23:29:22
--- language: - en license: - other size_categories: - 1K<n<10K task_categories: - text-classification task_ids: [] tags: - hate-speech-detection - offensive-language --- # Offensive language dataset FRENK 1.0 (English subset) English subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [Croatian subset](https://huggingface.co/datasets/5roop/FRENK-hate-hr), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page:** https://nl.ijs.si/frenk/ ## Description of the original dataset The original FRENK dataset consists of comments on Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. The data in each language (Croatian (hr), English (en), Slovenian (sl)) and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data.
The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian, LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the English data was used. The training segment has been split into the first 90% (published here as the train split) and the final 10% (published here as the dev split). ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-en","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-en","multiclass") ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to the lgbt or migrants hate-speech domain * `label`: label of the text instance, see above.
## Data instance ``` {'text': "Not everyone has the option of a rainbow reaction; I don't but wish I did.", 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
4,822
[ [ -0.036712646484375, -0.048248291015625, -0.00907135009765625, 0.031707763671875, -0.01070404052734375, -0.020660400390625, -0.038970947265625, -0.0307464599609375, 0.01079559326171875, 0.0210723876953125, -0.04254150390625, -0.05694580078125, -0.052337646484375,...
changpt/ko-lima-vicuna
2023-06-14T07:47:51.000Z
[ "task_categories:text-generation", "size_categories:n<1K", "language:ko", "license:cc-by-2.0", "KoLima", "region:us" ]
changpt
null
null
16
130
2023-06-14T03:58:58
--- license: cc-by-2.0 task_categories: - text-generation language: - ko size_categories: - n<1K pretty_name: KoLima(vicuna) tags: - KoLima --- # Ko Lima Vicuna Dataset GPT4 API를 사용하여 [lima_vicuna_format 데이터](https://huggingface.co/datasets/64bits/lima_vicuna_format)를 한국어로 재생성한 데이터셋입니다. GPT4 사용시 프롬프트는 "단순 번역이 아닌, 원문 내용을 참조하여 데이터를 재생성하라"는 내용으로 작성하였습니다. 원본 데이터셋 1,030건 중 866건의 데이터가 재생성되었습니다. *데이터셋 제작은 웹 크롤링 전문 기업, **[해시스크래퍼(HASHSCRAPER)](https://www.hashscraper.com/)** 에서 진행되었습니다.* *This is the lima_vicuna_format data regenerated into Korean using the GPT4 API.* *When using GPT4, the prompt was written to read "Regenerate the data by referring to the original content, not just a simple translation".* *Out of the 1,030 original datasets, 866 data points were regenerated.* *The dataset was created by [HASHSCRAPER](https://www.hashscraper.com/), a company specializing in web scraping.* --- # How to use ```python >>> from datasets import load_dataset >>> data = load_dataset("changpt/ko-lima-vicuna", data_files="ko_lima_vicuna.json", split="train") >>> data Dataset({ features: ['id', 'conversations'], num_rows: 866 }) ``` ```python >>> data[0] {'id': '7-stackexchange', 'conversations': [{'from': 'human', 'value': '나는 ```ffmpeg```로 비디오를 GIF 파일로 변환하고 있어:\n```ffmpeg \\\n -i input.flv \\\n -ss 00:00:00.000 \\\n -pix_fmt rgb24 \\\n -r 10 \\\n -s 320x240 \\\n -t 00:00:10.000 \\\n output.gif\n```\n잘 작동하는데, 출력된 gif 파일의 품질이 너무 낮아. 변환된 gif의 품질을 어떻게 개선할 수 있을까?'}, {'from': 'gpt', 'value': '중간 이미지 파일을 피하고 싶다면, LordNeckBeard가 제공한 명령어를 ```ffmpeg```와 ImageMagick의 ```convert``` 사이에 파이프로 연결할 수 있습니다. 이렇게 하면 중간 파일이 필요하지 않습니다:\n```ffmpeg -i input.flv -vf scale=320:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -loop 0 - output.gif\n```\n```-f image2pipe```는 ffmpeg에게 비디오를 이미지로 분할하고 파이프 출력에 적합하게 만들라고 지시하며, ```-vcodec ppm```은 출력 형식을 ppm으로 지정합니다(png 형식인 경우, ```convert```가 파이프에서 모든 이미지를 읽지 않거나 ffmpeg가 모두 출력하지 않는 것 같습니다). 
두 명령어의 ```-```는 각각 출력과 입력에 파이프를 사용하도록 지정합니다.\n파일을 저장하지 않고 결과를 최적화하려면, ```convert```의 출력을 두 번째 ```convert``` 명령어로 파이프할 수 있습니다:\n```ffmpeg -i input.flv -vf scale=320:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -loop 0 - gif:- | convert -layers Optimize - output.gif\n```\n```gif:-```는 ```convert```에게 출력을 gif 형식의 데이터로 파이프하도록 지시하고, ```-layers Optimize```는 두 번째 ```convert```에게 ```optimize-frame``` 및 ```optimize-transparancy``` 방법을 수행하도록 지시합니다(ImageMagick 애니메이션 최적화 소개 참조). ```-layers Optimize```의 출력이 항상 더 작은 파일 크기를 제공하지는 않으므로, 먼저 최적화 없이 gif로 변환해 보는 것이 좋습니다.\n이 과정에서 모든 것이 메모리에 있으므로 이미지가 매우 큰 경우 충분한 메모리가 필요할 수 있습니다.'}]} ``` --- # License [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/) [Open AI](https://openai.com/policies/terms-of-use)
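The `conversations` records shown above can be flattened into a single training prompt. A minimal sketch — the `Human:`/`Assistant:` template and the placeholder turn texts are illustrative choices, not something the dataset prescribes:

```python
# Flatten a ko-lima-vicuna `conversations` list into one prompt string.
def to_prompt(conversations):
    roles = {"human": "Human", "gpt": "Assistant"}
    turns = [f'{roles[turn["from"]]}: {turn["value"]}' for turn in conversations]
    return "\n\n".join(turns)

# Hypothetical record shaped like the data instance above.
example = {
    "id": "7-stackexchange",
    "conversations": [
        {"from": "human", "value": "질문 내용..."},
        {"from": "gpt", "value": "답변 내용..."},
    ],
}

prompt = to_prompt(example["conversations"])
print(prompt)
```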
2,749
[ [ -0.050384521484375, -0.0670166015625, 0.019012451171875, 0.0265350341796875, -0.045135498046875, -0.0186309814453125, -0.006683349609375, -0.0111083984375, 0.019622802734375, 0.0148468017578125, -0.040557861328125, -0.03851318359375, -0.042816162109375, 0.00...
bloyal/oas-paired-sequence-data
2023-10-26T17:14:13.000Z
[ "task_categories:fill-mask", "language:en", "license:cc-by-4.0", "region:us" ]
bloyal
null
null
0
130
2023-09-09T16:24:46
--- pretty_name: OAS paired sequences language: en task_categories: - fill-mask license: cc-by-4.0 configs: - config_name: human data_files: "human/*.parquet" - config_name: rat_SD data_files: "rat_SD/*.parquet" - config_name: mouse_BALB_c data_files: "mouse_BALB_c/*.parquet" - config_name: mouse_C57BL_6 data_files: "mouse_C57BL_6/*.parquet" --- # Dataset Card for OAS Paired Sequence Data ## Dataset Description - **Homepage:** - https://opig.stats.ox.ac.uk/webapps/oas/oas_paired/ ## Dataset Summary Paired heavy- and light-chain sequence information from the Observed Antibody Space (OAS) database, downloaded on September 9, 2023.
651
[ [ -0.016204833984375, -0.0233612060546875, 0.006298065185546875, -0.0296173095703125, -0.0223541259765625, -0.01526641845703125, 0.0228424072265625, -0.033782958984375, 0.056793212890625, 0.047698974609375, -0.034149169921875, -0.0328369140625, -0.0095596313476562...
ai4bharat/IN22-Gen
2023-09-12T11:13:23.000Z
[ "task_categories:translation", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:1K<n<10K", "language:as", "language:bn", "language:brx", "language:doi", "language:en", "language:gom", "language:gu", "language:hi", "langua...
ai4bharat
IN22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages. IN22-Gen is a general-purpose multi-domain evaluation subset of IN22. It has been created from two sources, Wikipedia and Web Sources, offering diverse content spanning news, entertainment, culture, legal, and India-centric topics.
@article{ai4bharat2023indictrans2, title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages}, author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan}, year = {2023}, journal = {arXiv preprint arXiv: 2305.16307} }
1
130
2023-09-09T17:16:09
--- language: - as - bn - brx - doi - en - gom - gu - hi - kn - ks - mai - ml - mr - mni - ne - or - pa - sa - sat - sd - ta - te - ur language_details: >- asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr, hin_Deva, kan_Knda, kas_Arab, mai_Deva, mal_Mlym, mar_Deva, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck, snd_Deva, tam_Taml, tel_Telu, urd_Arab license: cc-by-4.0 language_creators: - expert-generated multilinguality: - multilingual - translation pretty_name: in22-gen size_categories: - 1K<n<10K task_categories: - translation --- # IN22-Gen IN22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages. IN22-Gen is a general-purpose multi-domain evaluation subset of IN22. It has been created from two sources: Wikipedia and Web Sources offering diverse content spanning news, entertainment, culture, legal, and India-centric topics. The evaluation subset consists of 1024 sentences translated across 22 Indic languages enabling evaluation of MT systems across 506 directions. Here is the domain and source distribution of our IN22-Gen evaluation subset. 
<table style="width: 40%"> <tr> <td>domain</td> <td>web sources</td> <td>wikipedia</td> </tr> <tr> <td>culture</td> <td>40</td> <td>40</td> </tr> <tr> <td>economy</td> <td>40</td> <td>40</td> </tr> <tr> <td>education</td> <td>40</td> <td>40</td> </tr> <tr> <td>entertainment</td> <td>40</td> <td>40</td> </tr> <tr> <td>geography</td> <td>40</td> <td>40</td> </tr> <tr> <td>governments</td> <td>40</td> <td>40</td> </tr> <tr> <td>health</td> <td>40</td> <td>40</td> </tr> <tr> <td>industry</td> <td>40</td> <td>40</td> </tr> <tr> <td>legal</td> <td>40</td> <td>40</td> </tr> <tr> <td>news</td> <td>32</td> <td>32</td> </tr> <tr> <td>religion</td> <td>40</td> <td>40</td> </tr> <tr> <td>sports</td> <td>40</td> <td>40</td> </tr> <tr> <td>tourism</td> <td>40</td> <td>40</td> </tr> <tr> <td>total</td> <td>512</td> <td>512</td> </tr> </table> Please refer to `Appendix E: Dataset Card` of the [preprint](https://arxiv.org/abs/2305.16307) for a detailed description of the dataset curation, annotation and quality control process. ### Dataset Structure #### Dataset Fields - `id`: Row number for the data entry, starting at 1. - `context`: Context window of 3 sentences, typically including one sentence before and one after the candidate sentence. - `source`: The source from which the candidate sentence was drawn. - `url`: The URL of the English article from which the sentence was extracted. Only available for candidate sentences sourced from Wikipedia. - `domain`: The domain of the sentence. - `num_words`: The number of words in the candidate sentence. - `bucket`: Classification of the candidate sentence as per predefined bucket categories. - `sentence`: The full sentence in the given language (in paired configurations the field name carries a `_lang` suffix, e.g. `sentence_eng_Latn`). #### Data Instances A sample from the `gen` split for the English language (`eng_Latn` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits. 
```python { "id": 1, "context": "A uniform is often viewed as projecting a positive image of an organisation. Maintaining personal hygiene is also an important aspect of personal appearance and dressing. An appearance is a bunch of attributes related with the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc.", "source": "web", "url": "", "domain": "culture", "num_words": 24, "bucket": "18 - 25", "sentence": "An appearance is a bunch of attributes related to the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc." } ``` When using a hyphenated pairing or using the `all` function, data will be presented as follows: ```python { "id": 1, "context": "A uniform is often viewed as projecting a positive image of an organisation. Maintaining personal hygiene is also an important aspect of personal appearance and dressing. An appearance is a bunch of attributes related with the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc.", "source": "web", "url": "", "domain": "culture", "num_words": 24, "bucket": "18 - 25", "sentence_eng_Latn": "An appearance is a bunch of attributes related to the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc.", "sentence_hin_Deva": "सेवा संबंधी लोगों के लिए भेष कई गुणों का संयोजन है, जैसे कि उनके जूते, कपड़े, टाई, आभूषण, केश शैली, मेक-अप, घड़ी, कॉस्मेटिक, इत्र, आदि।" } ``` ### Usage Instructions ```python from datasets import load_dataset # download and load all the pairs dataset = load_dataset("ai4bharat/IN22-Gen", "all") # download and load specific pairs dataset = load_dataset("ai4bharat/IN22-Gen", "eng_Latn-hin_Deva") ``` ### Languages Covered <table style="width: 40%"> <tr> <td>Assamese (asm_Beng)</td> <td>Kashmiri (Arabic) (kas_Arab)</td> <td>Punjabi (pan_Guru)</td> </tr> <tr> <td>Bengali 
(ben_Beng)</td> <td>Kashmiri (Devanagari) (kas_Deva)</td> <td>Sanskrit (san_Deva)</td> </tr> <tr> <td>Bodo (brx_Deva)</td> <td>Maithili (mai_Deva)</td> <td>Santali (sat_Olck)</td> </tr> <tr> <td>Dogri (doi_Deva)</td> <td>Malayalam (mal_Mlym)</td> <td>Sindhi (Arabic) (snd_Arab)</td> </tr> <tr> <td>English (eng_Latn)</td> <td>Marathi (mar_Deva)</td> <td>Sindhi (Devanagari) (snd_Deva)</td> </tr> <tr> <td>Konkani (gom_Deva)</td> <td>Manipuri (Bengali) (mni_Beng)</td> <td>Tamil (tam_Taml)</td> </tr> <tr> <td>Gujarati (guj_Gujr)</td> <td>Manipuri (Meitei) (mni_Mtei)</td> <td>Telugu (tel_Telu)</td> </tr> <tr> <td>Hindi (hin_Deva)</td> <td>Nepali (npi_Deva)</td> <td>Urdu (urd_Arab)</td> </tr> <tr> <td>Kannada (kan_Knda)</td> <td>Odia (ory_Orya)</td> </tr> </table> ### Citation If you consider using our work then please cite using: ``` @article{ai4bharat2023indictrans2, title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages}, author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan}, year = {2023}, journal = {arXiv preprint arXiv: 2305.16307} } ```
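The 506 evaluation directions quoted earlier in this card follow directly from the number of language configurations: the YAML header lists 23 language codes (22 Indic languages plus English), and an n-way parallel corpus supports n × (n − 1) ordered translation directions. A quick arithmetic check (illustration only):

```python
# n-way parallel data yields n * (n - 1) ordered translation directions.
# The card's YAML header lists 23 language codes: 22 Indic + English.
n_languages = 23
directions = n_languages * (n_languages - 1)
print(directions)  # 506
```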
7,349
[ [ -0.0341796875, -0.03515625, 0.01198577880859375, 0.034332275390625, -0.0221710205078125, 0.01215362548828125, -0.01025390625, -0.0277252197265625, 0.0166168212890625, 0.0150146484375, -0.035247802734375, -0.039031982421875, -0.038909912109375, 0.04150390625,...
igbo_ner
2022-11-03T16:16:30.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ig", "license:unknown", "arxiv:2004.00648", "region:us" ]
null
Igbo Named Entity Recognition Dataset
@misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL} }
0
129
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - ig license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: null pretty_name: Igbo NER dataset dataset_info: - config_name: ner_data features: - name: content_n dtype: string - name: named_entity dtype: string - name: sentences sequence: string splits: - name: train num_bytes: 60315228 num_examples: 30715 download_size: 3311204 dataset_size: 60315228 - config_name: free_text features: - name: sentences dtype: string splits: - name: train num_bytes: 1172152 num_examples: 10000 download_size: 1132151 dataset_size: 1172152 --- # Dataset Card for Igbo NER dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner - **Repository:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner - **Paper:** 
https://arxiv.org/abs/2004.00648 ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here is an example from the dataset: ``` {'content_n': 'content_0', 'named_entity': 'Ike Ekweremmadụ', 'sentences': ['Ike Ekweremmadụ', "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na-eme n'ala Naijiria agwụla Ekweremmadụ"]} ``` ### Data Fields - content_n : ID - named_entity : Name of the entity - sentences : List of sentences for the entity ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
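Records shaped like the data instance above (one named entity with its list of example sentences) can be summarised with plain Python; a minimal sketch using the single record shown in this card:

```python
# Tally how many example sentences each named entity has,
# using a record shaped like the data instance shown above.
records = [
    {
        "content_n": "content_0",
        "named_entity": "Ike Ekweremmadụ",
        "sentences": [
            "Ike Ekweremmadụ",
            "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na-eme n'ala Naijiria agwụla Ekweremmadụ",
        ],
    },
]

counts = {r["named_entity"]: len(r["sentences"]) for r in records}
print(counts)  # {'Ike Ekweremmadụ': 2}
```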
3,923
[ [ -0.039398193359375, -0.04278564453125, -0.006023406982421875, 0.030975341796875, -0.0163726806640625, -0.0024318695068359375, -0.0270538330078125, -0.0274658203125, 0.044647216796875, 0.038818359375, -0.06396484375, -0.06475830078125, -0.05377197265625, 0.02...
multi_booked
2023-06-01T14:59:47.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ca", "language:eu", "license:cc-by-3.0", "arxiv:1803.08614"...
null
MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification. The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in Kaf/Naf format, which is an xml-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the guidelines set out in the OpeNER project.
@inproceedings{Barnes2018multibooked, author={Barnes, Jeremy and Lambert, Patrik and Badia, Toni}, title={MultiBooked: A corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)}, year = {2018}, month = {May}, date = {7-12}, address = {Miyazaki, Japan}, publisher = {European Language Resources Association (ELRA)}, language = {english} }
0
129
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - ca - eu license: - cc-by-3.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: multibooked pretty_name: MultiBooked dataset_info: - config_name: ca features: - name: text sequence: - name: wid dtype: string - name: sent dtype: string - name: para dtype: string - name: word dtype: string - name: terms sequence: - name: tid dtype: string - name: lemma dtype: string - name: morphofeat dtype: string - name: pos dtype: string - name: target sequence: string - name: opinions sequence: - name: oid dtype: string - name: opinion_holder_target sequence: string - name: opinion_target_target sequence: string - name: opinion_expression_polarity dtype: class_label: names: '0': StrongNegative '1': Negative '2': Positive '3': StrongPositive - name: opinion_expression_target sequence: string splits: - name: train num_bytes: 1952731 num_examples: 567 download_size: 4429415 dataset_size: 1952731 - config_name: eu features: - name: text sequence: - name: wid dtype: string - name: sent dtype: string - name: para dtype: string - name: word dtype: string - name: terms sequence: - name: tid dtype: string - name: lemma dtype: string - name: morphofeat dtype: string - name: pos dtype: string - name: target sequence: string - name: opinions sequence: - name: oid dtype: string - name: opinion_holder_target sequence: string - name: opinion_target_target sequence: string - name: opinion_expression_polarity dtype: class_label: names: '0': StrongNegative '1': Negative '2': Positive '3': StrongPositive - name: opinion_expression_target sequence: string splits: - name: train num_bytes: 1175816 num_examples: 343 download_size: 4429415 dataset_size: 1175816 config_names: - ca - eu --- # Dataset Card for MultiBooked ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset 
Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://hdl.handle.net/10230/33928 - **Repository:** https://github.com/jerbarnes/multibooked - **Paper:** https://arxiv.org/abs/1803.08614 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification. The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in Kaf/Naf format, which is an xml-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the guidelines set out in the OpeNER project. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Each sub-dataset is monolingual in the languages: - ca: Catalan - eu: Basque ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `text`: layer of the original text. - `wid`: list of word IDs for each word within the example. - `sent`: list of sentence IDs for each sentence within the example. - `para`: list of paragraph IDs for each paragraph within the example. - `word`: list of words. - `terms`: layer of the terms resulting from the analysis of the original text (lemmatization, morphological, PoS tagging) - `tid`: list of term IDs for each term within the example. - `lemma`: list of lemmas. - `morphofeat`: list of morphological features. - `pos`: list of PoS tags. - `target`: list of sublists of the corresponding word IDs (normally, the sublists contain only one element, in a one-to-one correspondence between words and terms). - `opinions`: layer of the opinions in the text. - `oid`: list of opinion IDs - `opinion_holder_target`: list of sublists of the corresponding term IDs that span the opinion holder. - `opinion_target_target`: list of sublists of the corresponding term IDs that span the opinion target. - `opinion_expression_polarity`: list of the opinion expression polarities. The polarity can take one of the values: `StrongNegative`, `Negative`, `Positive`, or `StrongPositive`. - `opinion_expression_target`: list of sublists of the corresponding term IDs that span the opinion expression. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Dataset is under the [CC-BY 3.0](https://creativecommons.org/licenses/by/3.0/) license. ### Citation Information ``` @inproceedings{Barnes2018multibooked, author={Barnes, Jeremy and Lambert, Patrik and Badia, Toni}, title={MultiBooked: A corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)}, year = {2018}, month = {May}, date = {7-12}, address = {Miyazaki, Japan}, publisher = {European Language Resources Association (ELRA)}, language = {english} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
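The layered fields described above — terms pointing at word IDs, opinions pointing at term IDs — can be resolved back to surface text. A minimal sketch on a hand-made toy record (all IDs and words below are invented for illustration; real examples follow the same nested shape):

```python
# Toy record mimicking the nested KAF/NAF-style layout described above.
example = {
    "text": {"wid": ["w1", "w2", "w3"], "word": ["hotel", "molt", "bo"]},
    "terms": {"tid": ["t1", "t2", "t3"], "target": [["w1"], ["w2"], ["w3"]]},
    "opinions": {
        "oid": ["o1"],
        "opinion_expression_target": [["t2", "t3"]],
        "opinion_expression_polarity": [2],  # 2 -> 'Positive' in the label map
    },
}

# Map word IDs to words, then term IDs to the words they span.
wid2word = dict(zip(example["text"]["wid"], example["text"]["word"]))
tid2words = {
    tid: [wid2word[w] for w in span]
    for tid, span in zip(example["terms"]["tid"], example["terms"]["target"])
}

# Recover the surface form of the first opinion expression.
expression = " ".join(
    word
    for tid in example["opinions"]["opinion_expression_target"][0]
    for word in tid2words[tid]
)
print(expression)  # molt bo
```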
7,425
[ [ -0.039306640625, -0.0445556640625, 0.00496673583984375, 0.021148681640625, -0.0188140869140625, -0.00046706199645996094, -0.0307159423828125, -0.0237579345703125, 0.035400390625, 0.04888916015625, -0.047119140625, -0.07757568359375, -0.037078857421875, 0.017...
polemo2
2023-01-25T14:42:43.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:bsd-3-clause", "region:us" ]
null
The PolEmo2.0 is a set of online reviews from medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
@inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Milkowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", }
0
129
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - bsd-3-clause multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: polemo2 dataset_info: - config_name: in features: - name: sentence dtype: string - name: target dtype: class_label: names: '0': __label__meta_amb '1': __label__meta_minus_m '2': __label__meta_plus_m '3': __label__meta_zero splits: - name: train num_bytes: 4810215 num_examples: 5783 - name: test num_bytes: 582052 num_examples: 722 - name: validation num_bytes: 593530 num_examples: 723 download_size: 2350339 dataset_size: 5985797 - config_name: out features: - name: sentence dtype: string - name: target dtype: class_label: names: '0': __label__meta_amb '1': __label__meta_minus_m '2': __label__meta_plus_m '3': __label__meta_zero splits: - name: train num_bytes: 4810215 num_examples: 5783 - name: test num_bytes: 309790 num_examples: 494 - name: validation num_bytes: 310977 num_examples: 494 download_size: 2139891 dataset_size: 5430982 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://clarin-pl.eu/dspace/handle/11321/710 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The PolEmo2.0 is a set of online reviews from medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: string, the review - target: the sentiment class of the sentence. The same tag system as in plWordNet Emo is used for lexical units: [+m] (strong positive), [+s] (weak positive), [-m] (strong negative), [-s] (weak negative), [amb] (ambiguous) and [0] (neutral). Note that the test set doesn't have targets, so -1 is used instead ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. 
## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
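The `target` class labels declared in the card's `dataset_info` header can be decoded back to their names with a plain mapping; a small sketch (the mapping is copied from the `class_label` names above):

```python
# Class-label encoding from the card's dataset_info header.
POLEMO2_LABELS = {
    0: "__label__meta_amb",
    1: "__label__meta_minus_m",
    2: "__label__meta_plus_m",
    3: "__label__meta_zero",
}

print(POLEMO2_LABELS[2])  # __label__meta_plus_m
```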
4,410
roman_urdu
2023-01-25T14:43:17.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ur", "license:unknown", "region:us" ]
null
This is an extensive compilation of Roman Urdu Dataset (Urdu written in Latin/Roman script) tagged for sentiment analysis.
@InProceedings{Sharf:2018, title = "Performing Natural Language Processing on Roman Urdu Datasets", authors = "Zareen Sharf and Saif Ur Rahman", booktitle = "International Journal of Computer Science and Network Security", volume = "18", number = "1", pages = "141-148", year = "2018" } @misc{Dua:2019, author = "Dua, Dheeru and Graff, Casey", year = "2017", title = "{UCI} Machine Learning Repository", url = "http://archive.ics.uci.edu/ml", institution = "University of California, Irvine, School of Information and Computer Sciences" }
1
129
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - found language: - ur license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: roman-urdu-data-set pretty_name: Roman Urdu Dataset dataset_info: features: - name: sentence dtype: string - name: sentiment dtype: class_label: names: '0': Positive '1': Negative '2': Neutral splits: - name: train num_bytes: 1633423 num_examples: 20229 download_size: 1628349 dataset_size: 1633423 --- # Dataset Card for Roman Urdu Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set) - **Point of Contact:** [Zareen Sharf](mailto:zareensharf76@gmail.com) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Urdu ## Dataset Structure 
[More Information Needed] ### Data Instances ``` Wah je wah,Positive, ``` ### Data Fields Each row consists of a short Urdu text, followed by a sentiment label. The labels are one of `Positive`, `Negative`, and `Neutral`. Note that the original source file is a comma-separated values file. * `sentence`: A short Urdu text * `label`: One of `Positive`, `Negative`, and `Neutral`, indicating the polarity of the sentiment expressed in the sentence ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{Sharf:2018, title = "Performing Natural Language Processing on Roman Urdu Datasets", authors = "Zareen Sharf and Saif Ur Rahman", booktitle = "International Journal of Computer Science and Network Security", volume = "18", number = "1", pages = "141-148", year = "2018" } @misc{Dua:2019, author = "Dua, Dheeru and Graff, Casey", year = "2017", title = "{UCI} Machine Learning Repository", url = "http://archive.ics.uci.edu/ml", institution = "University of California, Irvine, School of Information and Computer Sciences" } ``` ### Contributions Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
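Because the source file is plain comma-separated text with a trailing comma on each row (as in the instance shown above), a minimal parsing sketch could look like the following; the `parse_rows` helper is ours, not part of the dataset:

```python
import csv
import io

def parse_rows(raw: str):
    """Parse raw CSV text into (sentence, sentiment) pairs.

    Each source row ends with a trailing comma, which csv.reader surfaces
    as a final empty field; that field is dropped. Any remaining commas
    are re-joined into the sentence text.
    """
    pairs = []
    for row in csv.reader(io.StringIO(raw)):
        if row and row[-1] == "":
            row = row[:-1]  # drop the empty field from the trailing comma
        if len(row) < 2:
            continue  # skip malformed rows
        pairs.append((",".join(row[:-1]), row[-1]))
    return pairs
```

Applied to the instance above, `parse_rows("Wah je wah,Positive,")` yields `[("Wah je wah", "Positive")]`.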
4,101
CLUTRR/v1
2022-10-25T10:03:19.000Z
[ "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:unknown", "arxiv:1908.06177", "region:us" ]
CLUTRR
CLUTRR (Compositional Language Understanding and Text-based Relational Reasoning), a diagnostic benchmark suite, is first introduced in (https://arxiv.org/abs/1908.06177) to test the systematic generalization and inductive reasoning capabilities of NLU systems.
@article{sinha2019clutrr, Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton}, Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text}, Year = {2019}, journal = {Empirical Methods of Natural Language Processing (EMNLP)}, arxiv = {1908.06177} }
2
129
2022-03-09T19:33:00
--- language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K --- # Dataset Card for CLUTRR ## Table of Contents ## Dataset Description ### Dataset Summary **CLUTRR** (**C**ompositional **L**anguage **U**nderstanding and **T**ext-based **R**elational **R**easoning), a diagnostic benchmark suite, was first introduced in https://arxiv.org/abs/1908.06177 to test the systematic generalization and inductive reasoning capabilities of NLU systems. The CLUTRR benchmark allows us to test a model’s ability for **systematic generalization** by testing on stories that contain unseen combinations of logical rules, and to test for various forms of **model robustness** by adding different kinds of superfluous noise facts to the stories. ### Dataset Task CLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members whose relationship is not explicitly mentioned in the given story. Join the CLUTRR community at https://www.cs.mcgill.ca/~ksinha4/clutrr/ ## Dataset Structure We show detailed information for all 14 configurations of the dataset.
### configurations: **id**: a unique series of characters and numbers that identifies each instance <br> **story**: one semi-synthetic story involving hypothetical families<br> **query**: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities<br> **target**: indicator for the correct relation for the query <br> **target_text**: text for the correct relation for the query <br> the indicator uses the following mapping: <br> "aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3, "sister": 4, "father": 5, "mother": 6, "grandmother": 7, "uncle": 8, "daughter-in-law": 9, "grandson": 10, "granddaughter": 11, "father-in-law": 12, "mother-in-law": 13, "nephew": 14, "son": 15, "daughter": 16, "niece": 17, "husband": 18, "wife": 19, "sister-in-law": 20 <br> **clean\_story**: the story without noise facts<br> **proof\_state**: the logical rule of the kinship generation <br> **f\_comb**: the kinships of the query followed by the logical rule<br> **task\_name**: the task of the sub-dataset in the form "task_[num1].[num2]"<br> The first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts*; 3- Supporting facts*; 4- Disconnected facts*.<br> The second number [num2] directly indicates the length of clauses for the task target.<br> *for example:*<br> *task_1.2 -- task requiring clauses of length 2 without adding noise facts*<br> *task_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story*<br> **story\_edges**: all the edges in the kinship graph<br> **edge\_types**: similar to f\_comb, another form of the query's kinships followed by the logical rule <br> **query\_edge**: the corresponding edge of the target query in the kinship graph<br> **genders**: genders of the names appearing in the story<br> **task\_split**: train, test <br> *Further explanation of Irrelevant facts, Supporting facts and Disconnected facts
can be found in the 3.5 Robust Reasoning section in https://arxiv.org/abs/1908.06177 ### Data Instances An example of 'train' in Task 1.2 looks as follows. ``` { "id": "b2b9752f-d7fa-46a9-83ae-d474184c35b6", "story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.", "query": ('April', 'Ashley'), "target": 7, "target_text": "grandmother", "clean_story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.", "proof_state": [{('April', 'grandmother', 'Ashley'): [('April', 'mother', 'Lillian'), ('Lillian', 'mother', 'Ashley')]}], "f_comb": "mother-mother", "task_name": "task_1.2", "story_edges": [(0, 1), (1, 2)], "edge_types": ['mother', 'mother'], "query_edge": (0, 2), "genders": "April:female,Lillian:female,Ashley:female", "task_split": "train" } ``` ### Data Splits #### Data Split Name (corresponding to the name used in the paper) | task_split | split name in paper | train & validation task | test task | | :---: | :---: | :-: | :-: | | gen_train23_test2to10 | data_089907f8 | 1.2, 1.3 | 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 | | gen_train234_test2to10 | data_db9b8f04 | 1.2, 1.3, 1.4 | 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 | | rob_train_clean_23_test_all_23 | data_7c5b0e70 | 1.2, 1.3 | 1.2, 1.3, 2.3, 3.3, 4.3 | | rob_train_sup_23_test_all_23 | data_06b8f2a1 | 2.2, 2.3 | 2.2, 2.3, 1.3, 3.3, 4.3 | | rob_train_irr_23_test_all_23 | data_523348e6 | 3.2, 3.3 | 3.2, 3.3, 1.3, 2.3, 4.3 | | rob_train_disc_23_test_all_23 | data_d83ecc3e | 4.2, 4.3 | 4.2, 4.3, 1.3, 2.3, 3.3 | #### Data Split Summary Number of Instances in each split | task_split | train | validation | test | | :-: | :---: | :---: | :---: | | gen_train23_test2to10 | 9074 | 2020 | 1146 | | gen_train234_test2to10 | 12064 | 3019 | 1048 | | rob_train_clean_23_test_all_23 | 8098 | 2026 | 447 | | rob_train_disc_23_test_all_23 | 8080 | 2020 | 445 | | rob_train_irr_23_test_all_23 | 8079 | 2020 | 444 | | rob_train_sup_23_test_all_23 |
8123 | 2031 | 447 | ## Citation Information ``` @article{sinha2019clutrr, Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton}, Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text}, Year = {2019}, journal = {Empirical Methods of Natural Language Processing (EMNLP)}, arxiv = {1908.06177} } ```
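To connect `target` and `target_text`, the 21-class indicator listed above can be inverted into a lookup table. This is an illustrative sketch, not code shipped with CLUTRR:

```python
# The relation-to-indicator mapping given in the card, plus its inverse
# for decoding an integer `target` into its `target_text`.
RELATION_TO_ID = {
    "aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3, "sister": 4,
    "father": 5, "mother": 6, "grandmother": 7, "uncle": 8,
    "daughter-in-law": 9, "grandson": 10, "granddaughter": 11,
    "father-in-law": 12, "mother-in-law": 13, "nephew": 14, "son": 15,
    "daughter": 16, "niece": 17, "husband": 18, "wife": 19,
    "sister-in-law": 20,
}
ID_TO_RELATION = {i: rel for rel, i in RELATION_TO_ID.items()}

# The Task 1.2 instance above has target 7, i.e. "grandmother".
assert ID_TO_RELATION[7] == "grandmother"
```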
5,676
ywchoi/pubmed_abstract_6
2022-09-13T01:09:44.000Z
[ "region:us" ]
ywchoi
null
null
0
129
2022-09-13T01:08:00
Entry not found
15
edarchimbaud/timeseries-1m-stocks
2023-11-01T10:03:07.000Z
[ "task_categories:tabular-regression", "language:en", "license:mit", "region:us" ]
edarchimbaud
null
null
1
129
2023-05-29T13:50:59
--- language: - en license: mit task_categories: - tabular-regression dataset_info: features: - name: symbol dtype: string - name: datetime dtype: timestamp[ns] - name: open dtype: float64 - name: high dtype: float64 - name: low dtype: float64 - name: close dtype: float64 - name: volume dtype: float64 splits: - name: train num_bytes: 183342773 num_examples: 3280232 download_size: 83830959 dataset_size: 183342773 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "timeseries-1mn-sp500" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://edarchimbaud.substack.com - **Repository:** https://github.com/edarchimbaud - **Point of Contact:** contact@edarchimbaud.com ### Dataset Summary The "timeseries-1mn-sp500" dataset provides one-minute time-series data for the S&P 500 index constituents. 
### Supported Tasks and Leaderboards This dataset is suitable for tasks such as time-series forecasting, volatility prediction, and high-frequency trading strategy development. ### Languages [N/A] ## Dataset Structure ### Data Instances [N/A] ### Data Fields - symbol (string): The ticker symbol or abbreviation used to identify the company. - datetime (timestamp): The date and time of the stock quote, as a nanosecond-precision timestamp. - open (float64): The opening price of the stock at the given datetime. - high (float64): The highest price of the stock during the given minute. - low (float64): The lowest price of the stock during the given minute. - close (float64): The closing price of the stock at the given datetime. - volume (float64): The volume of the stock traded during the given minute. ### Data Splits [N/A] ## Dataset Creation ### Curation Rationale The "timeseries-1mn-sp500" dataset was created to support high-frequency trading algorithms and time-series forecasting models. ### Source Data #### Initial Data Collection and Normalization The data was sourced from the web and normalized. ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ### Social Impact of Dataset [N/A] ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators The timeseries-1mn-sp500 dataset was collected by https://edarchimbaud.substack.com. ### Licensing Information The timeseries-1mn-sp500 dataset is licensed under the MIT License. ### Citation Information > https://edarchimbaud.substack.com, timeseries-daily-sp500 dataset, GitHub repository, https://github.com/edarchimbaud ### Contributions Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset.
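As a usage sketch (the helper name is ours, not part of the dataset), per-minute simple returns can be derived from consecutive `close` prices of a single `symbol`, assuming the rows are sorted by `datetime`:

```python
def minute_returns(closes):
    """Simple per-minute returns r[i] = close[i+1] / close[i] - 1.

    Assumes `closes` holds the close prices of one symbol, already
    sorted by datetime with no missing minutes.
    """
    return [later / earlier - 1.0 for earlier, later in zip(closes, closes[1:])]
```

A forecasting or volatility model would typically consume these returns rather than the raw price levels.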
3,874
dmayhem93/agieval-lsat-lr
2023-06-18T17:26:20.000Z
[ "license:mit", "arxiv:2304.06364", "arxiv:2104.06598", "region:us" ]
dmayhem93
null
null
0
129
2023-06-18T12:50:37
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 923886 num_examples: 510 download_size: 469904 dataset_size: 923886 license: mit --- # Dataset Card for "agieval-lsat-lr" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. Raw dataset: https://github.com/zhongwanjun/AR-LSAT MIT License Copyright (c) 2022 Wanjun Zhong Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{zhong2021arlsat, title={AR-LSAT: Investigating Analytical Reasoning of Text}, author={Wanjun Zhong and Siyuan Wang and Duyu Tang and Zenan Xu and Daya Guo and Jiahai Wang and Jian Yin and Ming Zhou and Nan Duan}, year={2021}, eprint={2104.06598}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{wang2022lsat, title={From lsat: The progress and challenges of complex reasoning}, author={Wang, Siyuan and Liu, Zhongkun and Zhong, Wanjun and Zhou, Ming and Wei, Zhongyu and Chen, Zhumin and Duan, Nan}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, year={2022}, publisher={IEEE} }
2,548
alzoubi36/privacy_qa
2023-06-24T07:54:51.000Z
[ "region:us" ]
alzoubi36
null
null
0
129
2023-06-24T07:53:01
--- dataset_info: features: - name: question dtype: string - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 31955449 num_examples: 157420 - name: validation num_bytes: 5661628 num_examples: 27780 - name: test num_bytes: 13381983 num_examples: 62150 download_size: 17138117 dataset_size: 50999060 --- # Dataset for the PrivacyQA task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
500
Nexusflow/NexusRaven_API_evaluation
2023-09-29T05:19:42.000Z
[ "arxiv:2306.05301", "arxiv:2307.16789", "region:us" ]
Nexusflow
null
null
3
129
2023-09-28T07:58:02
--- dataset_info: - config_name: outputs_in_toolllm_format features: - name: response list: - name: function_call dtype: string - name: query dtype: string - name: task_id dtype: int64 - name: timestamp dtype: float64 splits: - name: train num_bytes: 303376 num_examples: 348 download_size: 83053 dataset_size: 303376 - config_name: raw_api_list features: - name: dataset dtype: string - name: name dtype: string - name: description dtype: string - name: args_dicts list: - name: default dtype: 'null' - name: description dtype: string - name: name dtype: string - name: required dtype: bool - name: type dtype: string splits: - name: train num_bytes: 22276 num_examples: 2 download_size: 10949 dataset_size: 22276 - config_name: raw_queries features: - name: dataset dtype: string - name: query_dict dtype: string splits: - name: train num_bytes: 466227 num_examples: 339 download_size: 98527 dataset_size: 466227 - config_name: standardized_api_list features: - name: dataset dtype: string - name: name dtype: string - name: description dtype: string - name: args_dicts list: - name: default dtype: string - name: description dtype: string - name: name dtype: string - name: required dtype: bool - name: type dtype: string splits: - name: train num_bytes: 47776 num_examples: 65 download_size: 27751 dataset_size: 47776 - config_name: standardized_queries features: - name: dataset dtype: string - name: prompt dtype: string - name: python_function_name dtype: string - name: python_args_dict dtype: string - name: context_functions sequence: string splits: - name: train num_bytes: 153860 num_examples: 318 download_size: 36721 dataset_size: 153860 configs: - config_name: outputs_in_toolllm_format data_files: - split: train path: outputs_in_toolllm_format/train-* - config_name: raw_queries data_files: - split: train path: raw_queries/train-* - config_name: standardized_api_list data_files: - split: train path: standardized_api_list/train-* - config_name: standardized_queries data_files: - 
split: train path: standardized_queries/train-* --- # NexusRaven API Evaluation dataset Please see [blog post](http://nexusflow.ai/blog) or [NexusRaven Github repo](https://github.com/nexusflowai/NexusRaven) for more information. ## License The evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially licensable, even if our own data is. As a result, the evaluation data used here is strictly non-commercial under [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/). Thank you for understanding! ## References We thank the following authors and entities for their evaluation data, which we leveraged to produce the results contained in this repository. Their citations can be found below 1. ToolAlpaca team 2. 
ToolLLM team ``` @misc{tang2023toolalpaca, title={ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases}, author={Qiaoyu Tang and Ziliang Deng and Hongyu Lin and Xianpei Han and Qiao Liang and Boxi Cao and Le Sun}, year={2023}, eprint={2306.05301}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{qin2023toolllm, title={ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs}, author={Yujia Qin and Shihao Liang and Yining Ye and Kunlun Zhu and Lan Yan and Yaxi Lu and Yankai Lin and Xin Cong and Xiangru Tang and Bill Qian and Sihan Zhao and Runchu Tian and Ruobing Xie and Jie Zhou and Mark Gerstein and Dahai Li and Zhiyuan Liu and Maosong Sun}, year={2023}, eprint={2307.16789}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` ## Citation ``` @misc{nexusraven, title={NexusRaven: Surpassing the state-of-the-art in open-source function calling LLMs}, author={Nexusflow.ai team}, year={2023}, url={http://nexusflow.ai/blog} } ``` ## Contact Please reach out to info@nexusflow.ai for any questions!
4,545
lchakkei/OpenOrca-Traditional-Chinese-Text
2023-10-15T02:10:20.000Z
[ "region:us" ]
lchakkei
null
null
0
129
2023-10-10T16:37:00
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 6870338733 num_examples: 4233915 download_size: 3986331717 dataset_size: 6870338733 --- # Dataset Card for "OpenOrca-Traditional-Chinese-Text" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
473
nlewins/onetalk_questions_full_audio
2023-10-13T09:58:36.000Z
[ "region:us" ]
nlewins
null
null
0
129
2023-10-13T09:55:31
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: en dtype: string - name: audio_transcription dtype: audio: sampling_rate: 16000 - name: transcription dtype: string splits: - name: train num_bytes: 3755259707.6 num_examples: 2700 - name: test num_bytes: 325319916.0 num_examples: 300 download_size: 3662094847 dataset_size: 4080579623.6 --- # Dataset Card for "onetalk_questions_fullA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
758
Narya-ai/relevancy-summary-synthetic-dataset
2023-10-14T13:27:43.000Z
[ "region:us" ]
Narya-ai
null
null
0
129
2023-10-14T13:27:36
--- dataset_info: features: - name: summary dtype: string - name: relevant sequence: string - name: irrelevant sequence: string splits: - name: train num_bytes: 6011298 num_examples: 5496 download_size: 2202251 dataset_size: 6011298 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "relevancy-summary-synthetic-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
548
gooaq
2023-01-25T14:31:10.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2104.08727", "region...
null
GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to our collected questions, specifically from the answer boxes in the search results. This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections.
@article{gooaq2021, title={GooAQ: Open Question Answering with Diverse Answer Types}, author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris}, journal={arXiv preprint}, year={2021} }
3
128
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - machine-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: gooaq pretty_name: 'GooAQ: Open Question Answering with Diverse Answer Types' dataset_info: features: - name: id dtype: int32 - name: question dtype: string - name: short_answer dtype: string - name: answer dtype: string - name: answer_type dtype: class_label: names: '0': feat_snip '1': collection '2': knowledge '3': unit_conv '4': time_conv '5': curr_conv splits: - name: train num_bytes: 974320061 num_examples: 3112679 - name: validation num_bytes: 444553 num_examples: 2500 - name: test num_bytes: 445810 num_examples: 2500 download_size: 2111358901 dataset_size: 975210424 --- # Dataset Card for GooAQ ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GooAQ 🥑: Google Answers to Google 
Questions!](https://github.com/allenai/gooaq) - **Repository:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq) - **Paper:** [GOOAQ: Open Question Answering with Diverse Answer Types](https://arxiv.org/abs/2104.08727) - **Point of Contact:** [Daniel Khashabi](mailto:danielk@allenai.org) ### Dataset Summary GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to our collected questions, specifically from the answer boxes in the search results. This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances Each row of the data file should look like this: ``` { "id": 3339543, "question": "what is the difference between collagen and whey protein?", "short_answer": None, "answer": "The main differences between the amino acid profiles of whey and collagen are that whey contains all 9 essential amino acids, while collagen only has 8. ... Collagen is a fibrous protein found in the skin, cartilage, and bones of animals whereas whey comes from milk.", "answer_type": "feat_snip" } ``` where the questions (`question`) are collected via Google auto-complete. The answer responses (`short_answer` and `answer`) were collected from Google's answer boxes. The answer types (`answer_type`) are inferred based on the HTML content of Google's response.
Here are the dominant types in the current dataset: - `feat_snip`: explanatory responses; the majority of the questions/responses are of this type. - `collection`: list responses (e.g., steps to accomplish something). - `knowledge`: typically short responses for knowledge-seeking questions. - `unit_conv`: questions about converting units. - `time_conv`: questions about converting times. - `curr_conv`: questions about converting currencies. Dataset instances which are not part of the dominant types are marked with a `-1` label. ### Data Fields - `id`: an `int` feature. - `question`: a `string` feature. - `short_answer`: a `string` feature (may be None in some cases). - `answer`: a `string` feature (may be None in some cases). - `answer_type`: a `string` feature. ### Data Splits The numbers of samples in the train/validation/test sets are given below: | Split | Number of samples | |------------|-------------------| | Train | 3112679 | | Validation | 2500 | | Test | 2500 | ## Dataset Creation ### Curation Rationale While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. Many of the everyday questions that humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.). Such answer type diversity is not represented in any existing dataset.
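The `answer_type` field is stored as a class-label integer, with `-1` marking instances outside the dominant types. A minimal decoding sketch — the label order below is copied from this card's YAML header, and assuming it matches the released files:

```python
# Label order copied from the card's YAML header (an assumption about the
# released files); -1 marks instances outside the dominant types.
ANSWER_TYPES = ["feat_snip", "collection", "knowledge", "unit_conv", "time_conv", "curr_conv"]

def decode_answer_type(label: int) -> str:
    """Return the answer-type name for an integer label; -1 -> 'other'."""
    if label == -1:
        return "other"
    return ANSWER_TYPES[label]

print(decode_answer_type(0))   # feat_snip
print(decode_answer_type(-1))  # other
```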
### Source Data #### Initial Data Collection and Normalization Constructing this dataset involved two main steps: extracting questions from search auto-complete, and extracting answers from answer boxes. 1) Query Extraction: To extract a rich yet natural set of questions, they used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.). They bootstrap based on this set, by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions. Such questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. They filter out any questions shorter than 5 tokens, as they are often incomplete questions. This process yields over ∼5M questions, which were collected over a span of 6 months. The average length of the questions is about 8 tokens. 2) Answer Extraction: They rely on the Google answer boxes shown on top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form the short answers in GOOAQ. They first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer. #### Who are the source language producers? Answered above. ### Annotations #### Annotation process Answered in the section above. #### Who are the annotators?
Since their task is focused on English, they required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and to have completed at least 5000 HITs with a ≥ 99% assignment approval rate. Additionally, they have a qualification test with half a dozen questions, all of which need to be answered correctly by the annotators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases To prevent biased judgements, they also ask the annotators to avoid using Google search (which is what they used when mining GOOAQ) when annotating the quality of shown instances. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. ### Citation Information ``` @article{gooaq2021, title={GooAQ: Open Question Answering with Diverse Answer Types}, author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris}, journal={arXiv preprint}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
9,407
[ [ -0.044677734375, -0.0777587890625, 0.009674072265625, 0.0033550262451171875, -0.0010099411010742188, 0.0136260986328125, -0.006710052490234375, -0.039276123046875, 0.044464111328125, 0.0214996337890625, -0.050994873046875, -0.02740478515625, -0.038238525390625, ...
xsum_factuality
2023-01-25T15:03:16.000Z
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-xsum", "language:en", "license:cc-by-4.0", "hallucinations", "region:us" ]
null
Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document. The popular metric such as ROUGE fails to show the severity of the problem. The dataset consists of faithfulness and factuality annotations of abstractive summaries for the XSum dataset. We have crowdsourced 3 judgements for each of 500 x 5 document-system pairs. This will be a valuable resource to the abstractive summarization community.
@InProceedings{maynez_acl20, author = "Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan Thomas Mcdonald", title = "On Faithfulness and Factuality in Abstractive Summarization", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", year = "2020", pages = "1906--1919", address = "Online", }
4
128
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-xsum task_categories: - summarization task_ids: [] pretty_name: XSum Hallucination Annotations tags: - hallucinations dataset_info: - config_name: xsum_factuality features: - name: bbcid dtype: int32 - name: system dtype: string - name: summary dtype: string - name: is_factual dtype: class_label: names: '0': 'no' '1': 'yes' - name: worker_id dtype: string splits: - name: train num_bytes: 800027 num_examples: 5597 download_size: 2864759 dataset_size: 800027 - config_name: xsum_faithfulness features: - name: bbcid dtype: int32 - name: system dtype: string - name: summary dtype: string - name: hallucination_type dtype: class_label: names: '0': intrinsic '1': extrinsic - name: hallucinated_span_start dtype: int32 - name: hallucinated_span_end dtype: int32 - name: worker_id dtype: string splits: - name: train num_bytes: 1750325 num_examples: 11185 download_size: 2864759 dataset_size: 1750325 --- # Dataset Card for XSum Hallucination Annotations ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [XSUM Hallucination Annotations Homepage](https://research.google/tools/datasets/xsum-hallucination-annotations/) - **Repository:** [XSUM Hallucination Annotations Homepage](https://github.com/google-research-datasets/xsum_hallucination_annotations) - **Paper:** [ACL Web](https://www.aclweb.org/anthology/2020.acl-main.173.pdf) - **Point of Contact:** [xsum-hallucinations-acl20@google.com](mailto:xsum-hallucinations-acl20@google.com) ### Dataset Summary Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document. Popular metrics such as ROUGE fail to show the severity of the problem. This dataset contains a large-scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce. The dataset consists of faithfulness and factuality annotations of abstractive summaries for the XSum dataset, with 3 crowdsourced judgements for each of 500 x 5 document-system pairs. This will be a valuable resource to the abstractive summarization community. ### Supported Tasks and Leaderboards * `summarization`: The dataset can be used to train a model for summarization, which consists of summarizing a given document. Success on this task is typically measured by achieving a *high* [ROUGE Score](https://huggingface.co/metrics/rouge). ### Languages The text in the dataset is in English; the samples are abstractive summaries for the [XSum dataset](https://www.aclweb.org/anthology/D18-1206.pdf). The associated BCP-47 code is `en`.
## Dataset Structure ### Data Instances ##### Faithfulness annotations dataset A typical data point consists of an ID referring to the news article (complete document), the summary, and the hallucination span information. An example from the XSum Faithfulness dataset looks as follows: ``` { 'bbcid': 34687720, 'hallucinated_span_end': 114, 'hallucinated_span_start': 1, 'hallucination_type': 1, 'summary': 'rory mcilroy will take a one-shot lead into the final round of the wgc-hsbc champions after carding a three-under', 'system': 'BERTS2S', 'worker_id': 'wid_0' } ``` ##### Factuality annotations dataset A typical data point consists of an ID referring to the news article (complete document), the summary, and whether the summary is factual or not. An example from the XSum Factuality dataset looks as follows: ``` { 'bbcid': 29911712, 'is_factual': 0, 'summary': 'more than 50 pupils at a bristol academy have been sent home from school because of a lack of uniform.', 'system': 'BERTS2S', 'worker_id': 'wid_0' } ``` ### Data Fields ##### Faithfulness annotations dataset Raters are shown the news article and the system summary, and are tasked with identifying and annotating the spans that aren't supported by the input article. The file contains the following columns: - `bbcid`: Document id in the XSum corpus. - `system`: Name of neural summarizer. - `summary`: Summary generated by ‘system’. - `hallucination_type`: Type of hallucination: intrinsic (0) or extrinsic (1) - `hallucinated_span`: Hallucinated span in the ‘summary’. - `hallucinated_span_start`: Index of the start of the hallucinated span. - `hallucinated_span_end`: Index of the end of the hallucinated span. - `worker_id`: Worker ID (one of 'wid_0', 'wid_1', 'wid_2') The `hallucination_type` column has NULL values for some entries, which have been replaced with `-1`.
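The span columns let you recover the annotated hallucinated text from a summary string. A small sketch, with the caveat that the card does not state whether the indices are 0- or 1-based, so the `one_based` flag below is an assumption to verify against the data:

```python
def extract_hallucinated_span(summary: str, start: int, end: int, one_based: bool = True) -> str:
    """Slice the annotated hallucinated span out of a system summary.

    Assumption: `start`/`end` are character offsets with `end` exclusive;
    set one_based=False if the released files turn out to be 0-based.
    """
    offset = 1 if one_based else 0
    return summary[start - offset:end - offset]

summary = "more than 50 pupils at a bristol academy have been sent home from school"
print(extract_hallucinated_span(summary, 1, 5))  # more
```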
##### Factuality annotations dataset Raters are shown the news article and the hallucinated system summary, and are tasked with assessing whether the summary is factual or not. The file contains the following columns: - `bbcid`: Document id in the XSum corpus. - `system`: Name of neural summarizer. - `summary`: Summary generated by ‘system’. - `is_factual`: Yes (1) or No (0) - `worker_id`: Worker ID (one of 'wid_0', 'wid_1', 'wid_2') The `is_factual` column has NULL values for some entries, which have been replaced with `-1`. ### Data Splits There is only a single split for both the Faithfulness annotations dataset and the Factuality annotations dataset. | | train | |--------------------------|------:| | Faithfulness annotations | 11185 | | Factuality annotations | 5597 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @InProceedings{maynez_acl20, author = "Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan Thomas Mcdonald", title = "On Faithfulness and Factuality in Abstractive Summarization", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", year = "2020", pages = "1906--1919", address = "Online", } ``` ### Contributions Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
8,049
[ [ -0.032562255859375, -0.029998779296875, 0.0225067138671875, 0.01105499267578125, -0.016693115234375, -0.0035190582275390625, -0.022003173828125, -0.033966064453125, 0.06353759765625, 0.037811279296875, -0.047271728515625, -0.06292724609375, -0.0516357421875, ...
FRTNX/cosuju
2021-03-29T09:01:41.000Z
[ "region:us" ]
FRTNX
Court Summaries and Judgements (CoSuJu) Dataset
@InProceedings{huggingface:dataset, title = {CoSuJu 500+ Court Judegements and Summaries for Machine Text Summarization}, authors = {Busani Ndlovu, Luke Jordan}, year = {2021} }
0
128
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
SocialGrep/reddit-nonewnormal-complete
2022-07-01T19:02:06.000Z
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
SocialGrep
This corpus contains the complete data for the activity on subreddit /r/NoNewNormal for the entire duration of its existence.
null
1
128
2022-03-02T23:29:22
--- annotations_creators: - lexyr language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original paperswithcode_id: null --- # Dataset Card for reddit-nonewnormal-complete ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=nonewnormal) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=nonewnormal) ### Dataset Summary This corpus contains the complete data for the activity on subreddit /r/NoNewNormal for the entire duration of its existence. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. 
### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information CC-BY v4.0 ### Contributions [Needs More Information]
3,773
[ [ -0.04351806640625, -0.058135986328125, 0.0221405029296875, 0.03125, -0.0310516357421875, 0.00567626953125, -0.0243988037109375, -0.024749755859375, 0.06298828125, 0.03350830078125, -0.07403564453125, -0.081787109375, -0.050994873046875, 0.0247344970703125, ...
albertvillanova/legal_contracts
2021-12-10T18:03:23.000Z
[ "region:us" ]
albertvillanova
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
17
128
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
ywchoi/pubmed_abstract_8
2022-09-13T01:14:30.000Z
[ "region:us" ]
ywchoi
null
null
0
128
2022-09-13T01:13:02
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
AmazonScience/mintaka
2022-10-28T10:55:50.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:ar", "multilinguality:de", "multilinguality:ja", "multilinguality:hi", "multilinguality:pt", "multilinguality:en", "multilinguality:es", "multil...
AmazonScience
Mintaka is a complex, natural, and multilingual dataset designed for experimenting with end-to-end question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English, annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish for a total of 180,000 samples. Mintaka includes 8 types of complex questions, including superlative, intersection, and multi-hop questions, which were naturally elicited from crowd workers.
@inproceedings{sen-etal-2022-mintaka, title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering", author = "Sen, Priyanka and Aji, Alham Fikri and Saffari, Amir", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.138", pages = "1604--1619" }
5
128
2022-10-27T18:38:30
--- annotations_creators: - expert-generated language_creators: - found license: - cc-by-4.0 multilinguality: - ar - de - ja - hi - pt - en - es - it - fr size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: mintaka pretty_name: Mintaka language_bcp47: - ar-SA - de-DE - ja-JP - hi-HI - pt-PT - en-EN - es-ES - it-IT - fr-FR --- # Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/amazon-science/mintaka - **Repository:** https://github.com/amazon-science/mintaka - **Paper:** https://aclanthology.org/2022.coling-1.138/ - **Point of Contact:** [GitHub](https://github.com/amazon-science/mintaka) ### Dataset Summary Mintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata 
question and answer entities. Full details on the Mintaka dataset can be found in our paper: https://aclanthology.org/2022.coling-1.138/ To build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions: - Count (e.g., Q: How many astronauts have been elected to Congress? A: 4) - Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes) - Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue) - Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra) - Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning) - Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune) - Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit) - Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.) - Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland) - We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History Mintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models. ### Supported Tasks and Leaderboards The dataset can be used to train a model for question answering. To ensure comparability, please refer to our evaluation script here: https://github.com/amazon-science/mintaka#evaluation ### Languages All questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. 
```json { "id": "a9011ddf", "lang": "en", "question": "What is the seventh tallest mountain in North America?", "answerText": "Mount Lucania", "category": "geography", "complexityType": "ordinal", "questionEntity": [ { "name": "Q49", "entityType": "entity", "label": "North America", "mention": "North America", "span": [40, 53] }, { "name": 7, "entityType": "ordinal", "mention": "seventh", "span": [12, 19] } ], "answerEntity": [ { "name": "Q1153188", "label": "Mount Lucania" } ] } ``` ### Data Fields The data fields are the same among all splits. `id`: a unique ID for the given sample. `lang`: the language of the question. `question`: the original question elicited in the corresponding language. `answerText`: the original answer text elicited in English. `category`: the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports. `complexityType`: the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno, comparative, multihop, difference, or generic. `questionEntity`: a list of annotated question entities identified by crowd workers. ``` { "name": The Wikidata Q-code or numerical value of the entity "entityType": The type of the entity. Options are: entity, cardinal, ordinal, date, time, percent, quantity, or money "label": The label of the Wikidata Q-code "mention": The entity as it appears in the English question text. Will be empty for non-English samples. "span": The start and end characters of the mention in the English question text. Will be empty for non-English samples. } ``` `answerEntity`: a list of annotated answer entities identified by crowd workers. ``` { "name": The Wikidata Q-code or numerical value of the entity "label": The label of the Wikidata Q-code } ``` ### Data Splits For each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.
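The `span` offsets in `questionEntity` index into the English question text as 0-based, end-exclusive character positions; they line up that way in the sample above, though treating this as the general convention is an assumption:

```python
# Values taken from the 'train' example shown above.
question = "What is the seventh tallest mountain in North America?"
entities = [
    {"mention": "North America", "span": [40, 53]},
    {"mention": "seventh", "span": [12, 19]},
]

for ent in entities:
    start, end = ent["span"]
    # Recover each mention directly from the question string.
    assert question[start:end] == ent["mention"]
    print(question[start:end])
```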
### Personal and Sensitive Information The corpus is free of personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators Amazon Alexa AI. ### Licensing Information This project is licensed under the CC-BY-4.0 License. ### Citation Information Please cite the following paper when using this dataset. ```latex @inproceedings{sen-etal-2022-mintaka, title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering", author = "Sen, Priyanka and Aji, Alham Fikri and Saffari, Amir", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.138", pages = "1604--1619" } ``` ### Contributions Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
7,470
[ [ -0.06158447265625, -0.09228515625, 0.023345947265625, 0.003265380859375, -0.01611328125, 0.0143890380859375, -0.01239776611328125, -0.021331787109375, 0.045745849609375, 0.0257110595703125, -0.057098388671875, -0.0229034423828125, -0.0233306884765625, 0.0327...
pszemraj/scientific_lay_summarisation-plos-norm
2023-06-20T01:06:39.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "source_datasets:tomasg25/scientific_lay_summarisation", "language:en", "license:mit", "arxiv:2210.09932", "region:us" ]
pszemraj
null
null
3
128
2023-03-29T16:24:26
--- license: mit task_categories: - summarization - text2text-generation language: - en size_categories: - 10K<n<100K source_datasets: tomasg25/scientific_lay_summarisation --- # scientific_lay_summarisation - PLOS - normalized This dataset is a modified version of [tomasg25/scientific_lay_summarization](https://huggingface.co/datasets/tomasg25/scientific_lay_summarisation) and contains scientific lay summaries that have been preprocessed [with this code](https://gist.github.com/pszemraj/bd344637af7c0c10ecf4ab62c4d0ce91). The preprocessing includes fixing punctuation and whitespace problems, and calculating the token length of each text sample using a tokenizer from the T5 model. Original dataset details: - **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation - **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932) ## Data Cleaning The text in both the "article" and "summary" columns was processed to ensure that punctuation and whitespace were consistent. The `fix_punct_whitespace` function was applied to each text sample to: - Remove spaces before punctuation marks (except for parentheses) - Add a space after punctuation marks (except for parentheses) if missing - Handle spaces around parentheses - Add a space after a closing parenthesis if followed by a word or opening parenthesis - Handle spaces around quotation marks - Handle spaces around single quotes - Handle comma in numbers ## Tokenization The length of each text sample was calculated in terms of tokens using the T5 tokenizer. The `calculate_token_length` function was used to encode each text sample using the tokenizer and return the number of resulting tokens. The resulting token lengths were added as new columns to the dataframes. 
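The full `fix_punct_whitespace` implementation lives in the gist linked above; a much-reduced sketch of the first two cleaning rules (spaces before punctuation removed, missing spaces after punctuation added, commas inside numbers left alone), purely for illustration:

```python
import re

def fix_punct_whitespace_sketch(text: str) -> str:
    # Rule 1: remove spaces *before* punctuation marks.
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    # Rule 2: add a space *after* punctuation if missing, but leave
    # commas inside numbers (e.g. "1,000") alone. The real function
    # also handles parentheses and quotes, which are omitted here.
    text = re.sub(r"([.,;:!?])(?=[^\s\d])", r"\1 ", text)
    return text

print(fix_punct_whitespace_sketch("Punctuation ,like this,is fixed ; 1,000 stays."))
```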
## Data Format The resulting processed data files are stored in Apache Parquet format and can be loaded using the `pandas` library or the Hugging Face `datasets` library. The relevant column names and data types for summarization are ```python DatasetDict({ train: Dataset({ features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'], num_rows: 24773 }) test: Dataset({ features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'], num_rows: 1376 }) validation: Dataset({ features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'], num_rows: 1376 }) }) ``` ## Usage Load the desired Parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`: ```python # download the dataset files by clicking on 'use in datasets' and cloning import pandas as pd # Load train set df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet") print(df.info()) ``` And here is an example using `datasets`: ```python from datasets import load_dataset dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm") train_set = dataset['train'] # Print the first few samples for i in range(5): print(train_set[i]) ``` ## Token Lengths For the train split: ![train-lengths](https://i.imgur.com/EXfC9kz.png) ---
3,349
[ [ -0.015411376953125, -0.034332275390625, 0.005992889404296875, 0.043975830078125, -0.03912353515625, -0.01218414306640625, -0.0231781005859375, 0.00270843505859375, 0.046112060546875, 0.039703369140625, -0.018157958984375, -0.0548095703125, -0.037017822265625, ...
diffusers/dog-example
2023-04-18T15:53:56.000Z
[ "region:us" ]
diffusers
null
null
2
128
2023-04-18T15:53:06
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
sezer12138/ADE20k_Segementation
2023-07-21T03:06:25.000Z
[ "region:us" ]
sezer12138
null
null
0
128
2023-07-19T13:18:55
--- dataset_info: features: - name: image dtype: image - name: annotated dtype: image - name: Scene_category dtype: class_label: names: '0': abbey '1': access_road '2': acropolis '3': air_base '4': aircraft_carrier_object '5': airfield '6': airlock '7': airplane '8': airplane_cabin '9': airport '10': airport_terminal '11': airport_ticket_counter '12': alcove '13': alley '14': amphitheater '15': amphitheater_indoor '16': amusement_arcade '17': amusement_park '18': anechoic_chamber '19': apartment_building_outdoor '20': apse_indoor '21': apse_outdoor '22': aquarium '23': aquatic_theater '24': aqueduct '25': arbor '26': arcade '27': arch '28': archaelogical_excavation '29': archipelago '30': archive '31': armory '32': army_base '33': arrival_gate_indoor '34': arrival_gate_outdoor '35': art_gallery '36': art_school '37': art_studio '38': artificial '39': artists_loft '40': assembly_hall '41': assembly_line '42': assembly_plant '43': athletic_field_indoor '44': athletic_field_outdoor '45': atrium_home '46': atrium_public '47': attic '48': auditorium '49': auto_factory '50': auto_mechanics_indoor '51': auto_mechanics_outdoor '52': auto_racing_paddock '53': auto_showroom '54': awning_deck '55': back_porch '56': backdrop '57': backroom '58': backseat '59': backstage '60': backstage_outdoor '61': backstairs '62': backstairs_indoor '63': backwoods '64': badlands '65': badminton_court_indoor '66': badminton_court_outdoor '67': baggage_claim '68': balcony_interior '69': ball_pit '70': ballet '71': ballroom '72': balustrade '73': bamboo_forest '74': bank_indoor '75': bank_outdoor '76': bank_vault '77': banquet_hall '78': baptistry_indoor '79': baptistry_outdoor '80': bar '81': barbeque '82': barbershop '83': barn '84': barndoor '85': barnyard '86': barrack '87': barrel_storage '88': baseball '89': baseball_field '90': basement '91': basilica '92': basin_outdoor '93': basketball '94': basketball_court_indoor '95': basketball_court_outdoor '96': bath_indoor '97': bath_outdoor 
'98': bathhouse '99': bathhouse_outdoor '100': bathroom '101': batters_box '102': batting_cage_indoor '103': batting_cage_outdoor '104': battlefield '105': battlement '106': bay '107': bayou '108': bazaar_indoor '109': bazaar_outdoor '110': beach '111': beach_house '112': beauty_salon '113': bedchamber '114': bedroom '115': beer_garden '116': beer_hall '117': belfry '118': bell_foundry '119': berth '120': berth_deck '121': betting_shop '122': bicycle_racks '123': bindery '124': biology_laboratory '125': bistro_indoor '126': bistro_outdoor '127': bleachers_indoor '128': bleachers_outdoor '129': block '130': boardwalk '131': boat '132': boat_deck '133': boathouse '134': bog '135': bomb_shelter_indoor '136': bookbindery '137': bookshelf '138': bookstore '139': booth '140': booth_indoor '141': booth_outdoor '142': botanical_garden '143': bottle_storage '144': bottomland '145': bow_window_indoor '146': bow_window_outdoor '147': bowling_alley '148': box_seat '149': boxing_ring '150': breakfast_table '151': breakroom '152': brewery_indoor '153': brewery_outdoor '154': bric-a-brac '155': brickyard_indoor '156': brickyard_outdoor '157': bridge '158': bridle_path '159': broadleaf '160': brooklet '161': bubble_chamber '162': buffet '163': building_complex '164': building_facade '165': bulkhead '166': bullpen '167': bullring '168': bunk_bed '169': burial_chamber '170': bus_depot_indoor '171': bus_depot_outdoor '172': bus_interior '173': bus_shelter '174': bus_station_indoor '175': bus_station_outdoor '176': butchers_shop '177': butte '178': bypass '179': byroad '180': cabana '181': cabin_cruiser '182': cabin_indoor '183': cabin_outdoor '184': cafeteria '185': call_center '186': campsite '187': campus '188': candy_store '189': canteen '190': canyon '191': car_dealership '192': caravansary '193': cardroom '194': cargo_container_interior '195': cargo_deck '196': cargo_helicopter '197': carport_indoor '198': carport_outdoor '199': carrousel '200': cascade '201': casino_indoor 
'202': casino_outdoor '203': castle '204': catacomb '205': cataract '206': cathedral_indoor '207': cathedral_outdoor '208': catwalk '209': cavern_indoor '210': cavern_outdoor '211': cellar '212': cemetery '213': chair_lift '214': chalet '215': chaparral '216': chapel '217': checkout_counter '218': cheese_factory '219': chemical_plant '220': chemistry_lab '221': chicken_coop_indoor '222': chicken_coop_outdoor '223': chicken_farm_indoor '224': chicken_farm_outdoor '225': childs_room '226': choir_loft_interior '227': chuck_wagon '228': church_indoor '229': church_outdoor '230': circus_tent_indoor '231': circus_tent_outdoor '232': city '233': classroom '234': clean_room '235': cliff '236': clock_tower_indoor '237': cloister_indoor '238': cloister_outdoor '239': closet '240': clothing_store '241': coast '242': coast_road '243': cockpit '244': cocktail_lounge '245': coffee_shop '246': computer_room '247': conference_center '248': conference_hall '249': conference_room '250': confessional '251': construction_site '252': control_room '253': control_tower_indoor '254': control_tower_outdoor '255': convenience_store_indoor '256': convenience_store_outdoor '257': coral_reef '258': corn_field '259': corner '260': corral '261': corridor '262': cottage '263': cottage_garden '264': country_house '265': country_road '266': courthouse '267': courtroom '268': courtyard '269': covered_bridge_interior '270': crawl_space '271': creek '272': crevasse '273': crosswalk '274': cultivated '275': customhouse '276': cybercafe '277': dacha '278': dairy_indoor '279': dairy_outdoor '280': dam '281': dance_floor '282': dance_school '283': darkroom '284': day_care_center '285': deck-house_boat_deck_house '286': deck-house_deck_house '287': delicatessen '288': dentists_office '289': department_store '290': departure_lounge '291': desert_road '292': diner_indoor '293': diner_outdoor '294': dinette_home '295': dining_area '296': dining_car '297': dining_hall '298': dining_room '299': dirt_track 
'300': discotheque '301': distillery '302': ditch '303': diving_board '304': dock '305': dolmen '306': donjon '307': door '308': doorway_indoor '309': doorway_outdoor '310': dorm_room '311': downtown '312': drainage_ditch '313': dress_shop '314': dressing_room '315': drill_rig '316': driveway '317': driving_range_indoor '318': driving_range_outdoor '319': drugstore '320': dry '321': dry_dock '322': dugout '323': earth_fissure '324': east_asia '325': editing_room '326': electrical_substation '327': elevated_catwalk '328': elevator_interior '329': elevator_lobby '330': elevator_shaft '331': embankment '332': embassy '333': embrasure '334': engine_room '335': entrance '336': entrance_hall '337': entranceway_indoor '338': entranceway_outdoor '339': entryway_outdoor '340': escalator_indoor '341': escalator_outdoor '342': escarpment '343': establishment '344': estaminet '345': estuary '346': excavation '347': exhibition_hall '348': exterior '349': fabric_store '350': factory_indoor '351': factory_outdoor '352': fairway '353': fan '354': farm '355': farm_building '356': farmhouse '357': fastfood_restaurant '358': feed_bunk '359': fence '360': ferryboat_indoor '361': field_house '362': field_road '363': field_tent_indoor '364': field_tent_outdoor '365': fire_escape '366': fire_station '367': fire_trench '368': fireplace '369': firing_range_indoor '370': firing_range_outdoor '371': fish_farm '372': fishmarket '373': fishpond '374': fitting_room_interior '375': fjord '376': flashflood '377': flatlet '378': flea_market_indoor '379': flea_market_outdoor '380': floating_dock '381': floating_dry_dock '382': flood '383': flood_plain '384': florist_shop_indoor '385': florist_shop_outdoor '386': flowerbed '387': flume_indoor '388': fly_bridge '389': flying_buttress '390': food_court '391': football '392': football_field '393': foothill '394': forecourt '395': foreshore '396': forest_fire '397': forest_path '398': forest_road '399': forklift '400': formal_garden '401': fort '402': 
fortress '403': foundry_indoor '404': foundry_outdoor '405': fountain '406': freestanding '407': freeway '408': freight_elevator '409': front_porch '410': frontseat '411': funeral_chapel '412': funeral_home '413': furnace_room '414': galley '415': game_room '416': gangplank '417': garage_indoor '418': garage_outdoor '419': garbage_dump '420': garden '421': gas_station '422': gas_well '423': gasworks '424': gate '425': gatehouse '426': gazebo_interior '427': general_store_indoor '428': general_store_outdoor '429': geodesic_dome_indoor '430': geodesic_dome_outdoor '431': ghost_town '432': gift_shop '433': glacier '434': glade '435': glen '436': golf_course '437': gorge '438': granary '439': grape_arbor '440': great_hall '441': greengrocery '442': greenhouse_indoor '443': greenhouse_outdoor '444': grotto '445': grove '446': guardhouse '447': guardroom '448': guesthouse '449': gulch '450': gun_deck_indoor '451': gun_deck_outdoor '452': gun_store '453': gymnasium_indoor '454': gymnasium_outdoor '455': hacienda '456': hallway '457': handball_court '458': hangar_indoor '459': hangar_outdoor '460': harbor '461': hardware_store '462': hat_shop '463': hatchery '464': hayfield '465': hayloft '466': head_shop '467': hearth '468': heath '469': hedge_maze '470': hedgerow '471': heliport '472': hen_yard '473': herb_garden '474': highway '475': hill '476': hillock '477': hockey '478': hollow '479': home_office '480': home_theater '481': hoodoo '482': hospital '483': hospital_room '484': hot_spring '485': hot_tub_indoor '486': hot_tub_outdoor '487': hotel_breakfast_area '488': hotel_outdoor '489': hotel_room '490': house '491': housing_estate '492': housing_project '493': howdah '494': hunting_lodge_indoor '495': hunting_lodge_outdoor '496': hut '497': hutment '498': ice_cream_parlor '499': ice_floe '500': ice_shelf '501': ice_skating_rink_indoor '502': ice_skating_rink_outdoor '503': iceberg '504': igloo '505': imaret '506': incinerator_indoor '507': incinerator_outdoor '508': 
indoor_procenium '509': indoor_round '510': indoor_seats '511': industrial_area '512': industrial_park '513': inlet '514': inn_indoor '515': inn_outdoor '516': insane_asylum '517': irrigation_ditch '518': islet '519': jacuzzi_indoor '520': jacuzzi_outdoor '521': jail_cell '522': jail_indoor '523': jail_outdoor '524': japanese_garden '525': jetty '526': jewelry_shop '527': joss_house '528': juke_joint '529': jungle '530': junk_pile '531': junkyard '532': jury_box '533': kasbah '534': kennel_indoor '535': kennel_outdoor '536': kindergarden_classroom '537': kiosk_indoor '538': kiosk_outdoor '539': kitchen '540': kitchenette '541': kraal '542': lab_classroom '543': laboratorywet '544': labyrinth_indoor '545': labyrinth_outdoor '546': lagoon '547': landfill '548': landing '549': landing_deck '550': landing_strip '551': laundromat '552': lava_flow '553': lavatory '554': lawn '555': layby '556': lean-to '557': lean-to_tent '558': lecture_room '559': legislative_chamber '560': levee '561': library '562': library_indoor '563': library_outdoor '564': lido_deck_indoor '565': lido_deck_outdoor '566': lift_bridge '567': lighthouse '568': limousine_interior '569': liquor_store_indoor '570': liquor_store_outdoor '571': living_room '572': loading_dock '573': lobby '574': lock_chamber '575': locker_room '576': loft '577': loge '578': loggia_outdoor '579': lookout_station_indoor '580': lookout_station_outdoor '581': lower_deck '582': luggage_van '583': lumberyard_indoor '584': lumberyard_outdoor '585': lyceum '586': machine_shop '587': manhole '588': mansard '589': mansion '590': manufactured_home '591': market_indoor '592': market_outdoor '593': marsh '594': martial_arts_gym '595': massage_room '596': mastaba '597': maternity_ward '598': mausoleum '599': meadow '600': meat_house '601': medina '602': megalith '603': menhir '604': mens_store_outdoor '605': mental_institution_indoor '606': mental_institution_outdoor '607': mesa '608': mesoamerican '609': mess_hall '610': mews '611': 
mezzanine '612': military_headquarters '613': military_hospital '614': military_hut '615': military_tent '616': millpond '617': millrace '618': mine '619': mineral_bath '620': mineshaft '621': mini_golf_course_indoor '622': mini_golf_course_outdoor '623': misc '624': mission '625': mobile_home '626': monastery_indoor '627': monastery_outdoor '628': moon_bounce '629': moor '630': morgue '631': mosque_indoor '632': mosque_outdoor '633': motel '634': mountain '635': mountain_path '636': mountain_road '637': mountain_snowy '638': movie_theater_indoor '639': movie_theater_outdoor '640': mudflat '641': museum_indoor '642': museum_outdoor '643': music_store '644': music_studio '645': natural '646': natural_history_museum '647': natural_spring '648': naval_base '649': needleleaf '650': newsroom '651': newsstand_indoor '652': newsstand_outdoor '653': nightclub '654': nook '655': nuclear_power_plant_indoor '656': nuclear_power_plant_outdoor '657': nunnery '658': nursery '659': nursing_home '660': nursing_home_outdoor '661': oasis '662': oast_house '663': observation_station '664': observatory_indoor '665': observatory_outdoor '666': observatory_post '667': ocean '668': ocean_deep '669': ocean_shallow '670': office '671': office_building '672': office_cubicles '673': oil_refinery_indoor '674': oil_refinery_outdoor '675': oilrig '676': one-way_street '677': open-hearth_furnace '678': operating_room '679': operating_table '680': optician '681': orchard '682': orchestra_pit '683': organ_loft_interior '684': orlop_deck '685': ossuary '686': outbuilding '687': outcropping '688': outhouse_indoor '689': outhouse_outdoor '690': outside '691': overpass '692': oyster_bar '693': oyster_farm '694': packaging_plant '695': pagoda '696': palace '697': palace_hall '698': palestra '699': pantry '700': paper_mill '701': parade_ground '702': park '703': parking_garage_indoor '704': parking_garage_outdoor '705': parking_lot '706': parkway '707': parlor '708': particle_accelerator '709': 
party_tent_indoor '710': party_tent_outdoor '711': passenger_deck '712': pasture '713': patio '714': patio_indoor '715': pavement '716': pavilion '717': pawnshop '718': pawnshop_outdoor '719': pedestrian_overpass_indoor '720': penalty_box '721': performance '722': perfume_shop '723': pet_shop '724': pharmacy '725': phone_booth '726': physics_laboratory '727': piano_store '728': picnic_area '729': pier '730': pig_farm '731': pilothouse_indoor '732': pilothouse_outdoor '733': pinetum '734': piste_road '735': pitchers_mound '736': pizzeria '737': pizzeria_outdoor '738': planetarium_indoor '739': planetarium_outdoor '740': plantation_house '741': platform '742': playground '743': playroom '744': plaza '745': plunge '746': podium_indoor '747': podium_outdoor '748': police_station '749': pond '750': pontoon_bridge '751': poolroom_home '752': poop_deck '753': porch '754': portico '755': portrait_studio '756': postern '757': powder_room '758': power_plant_outdoor '759': preserve '760': print_shop '761': priory '762': promenade '763': promenade_deck '764': pub_indoor '765': pub_outdoor '766': pueblo '767': pulpit '768': pump_room '769': pumping_station '770': putting_green '771': quadrangle '772': questionable '773': quicksand '774': quonset_hut_indoor '775': quonset_hut_outdoor '776': racecourse '777': raceway '778': raft '779': rail_indoor '780': rail_outdoor '781': railroad_track '782': railway_yard '783': rainforest '784': ramp '785': ranch '786': ranch_house '787': reading_room '788': reception '789': reception_room '790': recreation_room '791': rectory '792': recycling_plant_indoor '793': recycling_plant_outdoor '794': refectory '795': repair_shop '796': residential_neighborhood '797': resort '798': rest_area '799': rest_stop '800': restaurant '801': restaurant_kitchen '802': restaurant_patio '803': restroom_indoor '804': restroom_outdoor '805': retaining_wall '806': revolving_door '807': rice_paddy '808': riding_arena '809': rift_valley '810': river '811': road 
'812': road_cut '813': road_indoor '814': road_outdoor '815': rock_arch '816': rock_garden '817': rodeo '818': roller_skating_rink_indoor '819': roller_skating_rink_outdoor '820': rolling_mill '821': roof '822': roof_garden '823': room '824': root_cellar '825': rope_bridge '826': rotisserie '827': roundabout '828': roundhouse '829': rubble '830': ruin '831': runway '832': sacristy '833': safari_park '834': salon '835': saloon '836': salt_plain '837': sanatorium '838': sand '839': sand_trap '840': sandbar '841': sandbox '842': sauna '843': savanna '844': sawmill '845': schoolhouse '846': schoolyard '847': science_laboratory '848': science_museum '849': scriptorium '850': scrubland '851': scullery '852': sea_cliff '853': seaside '854': seawall '855': security_check_point '856': semidesert '857': server_room '858': sewer '859': sewing_room '860': shed '861': shelter '862': shelter_deck '863': shelter_tent '864': shipping_room '865': shipyard_outdoor '866': shoe_shop '867': shop '868': shopfront '869': shopping_mall_indoor '870': shopping_mall_outdoor '871': shore '872': shower '873': shower_room '874': shrine '875': shrubbery '876': sidewalk '877': signal_box '878': sinkhole '879': ski_jump '880': ski_lodge '881': ski_resort '882': ski_slope '883': sky '884': skyscraper '885': skywalk_indoor '886': skywalk_outdoor '887': slum '888': snack_bar '889': snowbank '890': snowfield '891': soccer '892': south_asia '893': spillway '894': sporting_goods_store '895': squash_court '896': stable '897': stadium_outdoor '898': stage_indoor '899': stage_outdoor '900': stage_set '901': staircase '902': stall '903': starting_gate '904': stateroom '905': station '906': steam_plant_outdoor '907': steel_mill_indoor '908': steel_mill_outdoor '909': stone_circle '910': storage_room '911': store '912': storm_cellar '913': street '914': streetcar_track '915': strip_mall '916': strip_mine '917': student_center '918': student_residence '919': study_hall '920': submarine_interior '921': 
subway_interior '922': sugar_refinery '923': sun_deck '924': sunroom '925': supermarket '926': supply_chamber '927': sushi_bar '928': swamp '929': swimming_hole '930': swimming_pool_indoor '931': swimming_pool_outdoor '932': synagogue_indoor '933': synagogue_outdoor '934': t-bar_lift '935': tannery '936': taxistand '937': taxiway '938': tea_garden '939': teahouse '940': tearoom '941': teashop '942': television_room '943': television_studio '944': tennis_court_indoor '945': tennis_court_outdoor '946': tent_outdoor '947': terrace_farm '948': theater_outdoor '949': threshing_floor '950': thriftshop '951': throne_room '952': ticket_booth '953': ticket_window_indoor '954': tidal_basin '955': tidal_river '956': tiltyard '957': tobacco_shop_indoor '958': toll_plaza '959': tollbooth '960': tollgate '961': tomb '962': topiary_garden '963': tower '964': town_house '965': toyshop '966': track_outdoor '967': tract_housing '968': trading_floor '969': traffic_island '970': trailer_park '971': train_interior '972': train_railway '973': train_station_outdoor '974': tree_farm '975': tree_house '976': trellis '977': trench '978': trestle_bridge '979': truck_stop '980': tundra '981': turkish_bath '982': upper_balcony '983': urban '984': utility_room '985': valley '986': van_interior '987': vat '988': vegetable_garden '989': vegetation '990': vehicle '991': velodrome_indoor '992': velodrome_outdoor '993': ventilation_shaft '994': veranda '995': vestibule '996': vestry '997': veterinarians_office '998': viaduct '999': videostore '1000': village '1001': vinery '1002': vineyard '1003': volcano '1004': volleyball_court_indoor '1005': volleyball_court_outdoor '1006': voting_booth '1007': waiting_room '1008': walk_in_freezer '1009': walkway '1010': war_room '1011': warehouse_indoor '1012': warehouse_outdoor '1013': washhouse_indoor '1014': washhouse_outdoor '1015': washroom '1016': watchtower '1017': water '1018': water_fountain '1019': water_gate '1020': water_mill '1021': water_park 
'1022': water_tower '1023': water_treatment_plant_indoor '1024': water_treatment_plant_outdoor '1025': watering_hole '1026': waterscape '1027': waterway '1028': wave '1029': weighbridge '1030': western '1031': wet_bar '1032': wetland '1033': wharf '1034': wheat_field '1035': whispering_gallery '1036': widows_walk_indoor '1037': widows_walk_interior '1038': wild '1039': wind_farm '1040': windmill '1041': window_seat '1042': windstorm '1043': winery '1044': witness_stand '1045': woodland '1046': workroom '1047': workshop '1048': wrestling_ring_indoor '1049': wrestling_ring_outdoor '1050': yard '1051': youth_hostel '1052': zen_garden '1053': ziggurat '1054': zoo splits: - name: train num_bytes: 1097055005.51 num_examples: 20210 - name: val num_bytes: 90418264.0 num_examples: 2000 download_size: 966605341 dataset_size: 1187473269.51 --- # Dataset Card for "ADE20k_Segementation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
31,576
[ [ -0.054443359375, -0.01739501953125, 0.01371002197265625, 0.025360107421875, -0.00460052490234375, -0.007495880126953125, 0.02593994140625, -0.0183258056640625, 0.057525634765625, 0.041961669921875, -0.0697021484375, -0.055145263671875, -0.032135009765625, -0...
OfekGlick/DiscoEval
2023-10-25T13:19:20.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:bsd", "Discourse", "Discourse Evaluation", "NLP", "arxiv:1909.00142", "region:us" ]
OfekGlick
This dataset contains all tasks of the DiscoEval benchmark for sentence representation learning.
@InProceedings{mchen-discoeval-19, title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations}, author = {Mingda Chen and Zewei Chu and Kevin Gimpel}, booktitle = {Proc. of {EMNLP}}, year={2019} }
0
128
2023-09-22T23:22:52
--- license: bsd task_categories: - text-classification language: - en tags: - Discourse - Discourse Evaluation - NLP pretty_name: DiscoEval size_categories: - 100K<n<1M --- # DiscoEval Benchmark Datasets ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Sources](#dataset-sources) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Benchmark Creators](#benchmark-creators) - [Citation Information](#citation-information) - [Loading Data Examples](#loading-data-examples) - [Loading Data for Sentence Positioning Task with the Arxiv data source](#loading-data-for-sentence-positioning-task-with-the-arxiv-data-source) ## Dataset Description - **Repository:** [DiscoEval repository](https://github.com/ZeweiChu/DiscoEval) - **Paper:** [Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations](https://arxiv.org/pdf/1909.00142) ### Dataset Summary DiscoEval is an English-language benchmark that contains a test suite of 7 tasks to evaluate whether sentence representations include semantic information relevant to discourse processing. The benchmark datasets offer a collection of tasks designed to evaluate natural language understanding models in the context of discourse analysis and coherence. ### Dataset Sources - **Arxiv**: A repository of scientific papers and research articles. - **Wikipedia**: An extensive online encyclopedia with articles on diverse topics. - **Rocstory**: A dataset consisting of fictional stories. - **Ubuntu IRC channel**: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel. - **PeerRead**: A dataset of scientific papers frequently used for discourse-related tasks.
- **RST Discourse Treebank**: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations. - **Penn Discourse Treebank**: Another dataset with annotated discourse relations, facilitating the study of discourse structure. ### Supported Tasks 1. **Sentence Positioning** - **Dataset Sources**: Arxiv, Wikipedia, Rocstory - **Description**: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers, encode the five sentences to vector representations \\(x_i\\). As input to the classifier, include \\(x_1\\) and the concatenation of \\(x_1 - x_i\\) for all \\(i\\): \\([x_1, x_1 - x_2, x_1 - x_3, x_1 - x_4, x_1 - x_5]\\) 2. **Binary Sentence Ordering** - **Dataset Sources**: Arxiv, Wikipedia, Rocstory - **Description**: Determine whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, concatenate the embeddings of both sentences with their element-wise difference: \\([x_1, x_2, x_1 - x_2]\\) 3. **Discourse Coherence** - **Dataset Sources**: Ubuntu IRC channel, Wikipedia - **Description**: Determine whether a sequence of six sentences forms a coherent paragraph. To form the input when training classifiers, encode all sentences to vector representations and concatenate all of them: \\([x_1, x_2, x_3, x_4, x_5, x_6]\\) 4. **Sentence Section Prediction** - **Dataset Sources**: Constructed from PeerRead - **Description**: Determine the section or category to which a sentence belongs within a scientific paper, based on the content and context. To form the input when training classifiers, simply input the sentence embedding. 5. **Discourse Relations** - **Dataset Sources**: RST Discourse Treebank, Penn Discourse Treebank - **Description**: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse.
To form the input when training classifiers, refer to the [original paper](https://arxiv.org/pdf/1909.00142) for instructions. ### Languages The text in all datasets is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances All tasks are classification tasks, and they differ by the number of sentences per example and the type of label. An example from the Sentence Positioning task would look as follows: ``` {'sentence_1': 'Dan was overweight as well.', 'sentence_2': 'Dan\'s parents were overweight.', 'sentence_3': 'The doctors told his parents it was unhealthy.', 'sentence_4': 'His parents understood and decided to make a change.', 'sentence_5': 'They got themselves and Dan on a diet.', 'label': '1' } ``` The label is '1' since the first sentence should go at position number 1 (counting from zero). An example from the Binary Sentence Ordering task would look as follows: ``` {'sentence_1': 'When she walked in, she felt awkward.', 'sentence_2': 'Janet decided to go to her high school\'s party.', 'label': '0' } ``` The label is '0' because this is not the correct order of the sentences. It should be sentence_2 and then sentence_1. For more examples, you can refer to the [original paper](https://arxiv.org/pdf/1909.00142). ### Data Fields In this benchmark, all data fields are strings, including the labels. ### Data Splits The data is split into training, validation, and test sets for each of the tasks in the benchmark.
| Task and Dataset | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Sentence Positioning: Arxiv| 10000 | 4000 | 4000| | Sentence Positioning: Rocstory| 10000 | 4000 | 4000| | Sentence Positioning: Wiki| 10000 | 4000 | 4000| | Binary Sentence Ordering: Arxiv| 20000 | 8000 | 8000| | Binary Sentence Ordering: Rocstory| 20000 | 8000 | 8000| | Binary Sentence Ordering: Wiki| 20000 | 8000 | 8000| | Discourse Coherence: Chat| 5816 | 1834 | 2418| | Discourse Coherence: Wiki| 10000 | 4000 | 4000| | Sentence Section Prediction | 10000 | 4000 | 4000 | | Discourse Relation: Penn Discourse Tree Bank: Implicit | 8693 | 2972 | 3024 | | Discourse Relation: Penn Discourse Tree Bank: Explicit | 9383 | 3613 | 3758 | | Discourse Relation: RST Discourse Tree Bank | 17051 | 2045 | 2308 | ## Additional Information ### Benchmark Creators This benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technological Institute at Chicago. ### Citation Information ``` @inproceedings{mchen-discoeval-19, title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations}, author = {Mingda Chen and Zewei Chu and Kevin Gimpel}, booktitle = {Proc. of {EMNLP}}, year={2019} } ``` ## Loading Data Examples ### Loading Data for Sentence Positioning Task with the Arxiv data source ```python from datasets import load_dataset # Load the Sentence Positioning dataset dataset = load_dataset(path="OfekGlick/DiscoEval", name="SParxiv") # Access the train, validation, and test splits train_data = dataset["train"] validation_data = dataset["validation"] test_data = dataset["test"] # Example usage: print the first few training examples for i in range(5): print(train_data[i]) ``` The other possible inputs for the `name` parameter are: `SParxiv`, `SProcstory`, `SPwiki`, `SSPabs`, `PDTB-I`, `PDTB-E`, `BSOarxiv`, `BSOrocstory`, `BSOwiki`, `DCchat`, `DCwiki`, `RST`
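The input constructions described under Supported Tasks can be written down directly. Below is a minimal sketch, with plain Python lists standing in for sentence embeddings; the function names are illustrative, not part of the benchmark code:

```python
def sentence_positioning_input(embs):
    """Build [x1, x1-x2, x1-x3, x1-x4, x1-x5] from five sentence embeddings."""
    x1 = embs[0]
    feats = list(x1)
    for x in embs[1:]:
        feats.extend(a - b for a, b in zip(x1, x))
    return feats

def binary_sentence_ordering_input(x1, x2):
    """Build [x1, x2, x1-x2] from a sentence pair."""
    return list(x1) + list(x2) + [a - b for a, b in zip(x1, x2)]

# Five toy 2-dimensional "embeddings"
embs = [[1.0, 2.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
print(sentence_positioning_input(embs))
# prints: [1.0, 2.0, 1.0, 1.0, 0.0, 1.0, -1.0, 0.0, -2.0, -1.0]
```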
7,574
[ [ -0.019622802734375, -0.06219482421875, 0.0279693603515625, 0.0205078125, -0.0151214599609375, -0.0033473968505859375, -0.004207611083984375, -0.0209503173828125, -0.007659912109375, 0.0198822021484375, -0.02764892578125, -0.05255126953125, -0.03985595703125, ...
numeric_fused_head
2023-06-01T14:59:47.000Z
[ "task_categories:token-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original...
null
Fused Head constructions are noun phrases in which the head noun is missing and is said to be "fused" with its dependent modifier. This missing information is implicit and is important for sentence understanding. The missing heads are easily filled in by humans, but pose a challenge for computational models. For example, in the sentence: "I bought 5 apples but got only 4.", 4 is a Fused-Head, and the missing head is apples, which appears earlier in the sentence. This is a crowd-sourced dataset of 10k numerical fused head examples (1M tokens).
@article{elazar_head, author = {Elazar, Yanai and Goldberg, Yoav}, title = {Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution}, journal = {Transactions of the Association for Computational Linguistics}, volume = {7}, number = {}, pages = {519-535}, year = {2019}, doi = {10.1162/tacl\\_a\\_00280}, URL = {https://doi.org/10.1162/tacl_a_00280}, }
1
127
2022-03-02T23:29:22
---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: numeric-fused-head
pretty_name: Numeric Fused Heads
tags:
- fused-head-identification
dataset_info:
- config_name: identification
  features:
  - name: tokens
    sequence: string
  - name: start_index
    dtype: int32
  - name: end_index
    dtype: int32
  - name: label
    dtype:
      class_label:
        names:
          '0': neg
          '1': pos
  splits:
  - name: train
    num_bytes: 22290345
    num_examples: 165606
  - name: test
    num_bytes: 68282
    num_examples: 500
  - name: validation
    num_bytes: 2474528
    num_examples: 18401
  download_size: 24407520
  dataset_size: 24833155
- config_name: resolution
  features:
  - name: tokens
    sequence: string
  - name: line_indices
    sequence: int32
  - name: head
    sequence: string
  - name: speakers
    sequence: string
  - name: anchors_indices
    sequence: int32
  splits:
  - name: train
    num_bytes: 19766437
    num_examples: 7412
  - name: test
    num_bytes: 2743071
    num_examples: 1000
  - name: validation
    num_bytes: 2633549
    num_examples: 1000
  download_size: 24923403
  dataset_size: 25143057
config_names:
- identification
- resolution
---

# Dataset Card for Numeric Fused Heads

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [The Numeric Fused-Head demo](https://nlp.biu.ac.il/~lazary/fh/)
- **Repository:** [Github Repo](https://github.com/yanaiela/num_fh)
- **Paper:** [Where’s My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution](https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00280)
- **Leaderboard:** [NLP Progress](http://nlpprogress.com/english/missing_elements.html)
- **Point of Contact:** [Yanai Elazar](https://yanaiela.github.io), [Yoav Goldberg](https://www.cs.bgu.ac.il/~yoavg/uni/)

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

- Numeric Fused Head Identification
- Numeric Fused Head Resolution

### Languages

English

## Dataset Structure

### Data Instances

#### Identification

```
{
  "tokens": ["It", "’s", "a", "curious", "thing", ",", "the", "death", "of", "a", "loved", "one", "."],
  "start_index": 11,
  "end_index": 12,
  "label": 1
}
```

#### Resolution

```
{
  "tokens": ["I", "'m", "eighty", "tomorrow", ".", "Are", "you", "sure", "?"],
  "line_indices": [0, 0, 0, 0, 0, 1, 1, 1, 1],
  "head": ["AGE"],
  "speakers": ["John Doe", "John Doe", "John Doe", "John Doe", "John Doe", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs"],
  "anchors_indices": [2]
}
```

### Data Fields

#### Identification

- `tokens` - List of token strings as tokenized with [spaCy](https://spacy.io).
- `start_index` - Start index of the anchor.
- `end_index` - End index of the anchor.
- `label` - `pos` or `neg`, depending on whether the example contains a numeric fused head.

#### Resolution

- `tokens` - List of token strings as tokenized with [spaCy](https://spacy.io).
- `line_indices` - List of indices indicating the line number, one for each token.
- `head` - Reference to the missing head. If the head exists elsewhere in the sentence, it is given as a token index.
- `speakers` - List of speaker names, one for each token.
- `anchors_indices` - Indices indicating which tokens are the anchor (the visible number).

### Data Splits

Train, Test, Dev

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

MIT License

### Citation Information

```
@article{doi:10.1162/tacl\_a\_00280,
  author = {Elazar, Yanai and Goldberg, Yoav},
  title = {Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution},
  journal = {Transactions of the Association for Computational Linguistics},
  volume = {7},
  number = {},
  pages = {519-535},
  year = {2019},
  doi = {10.1162/tacl\_a\_00280},
}
```

### Contributions

Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
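A minimal sketch of how the two record layouts can be interpreted, using the example instances from the card. The helper functions (`anchor_span`, `resolve_head`) are illustrative names, not part of any dataset API:

```python
def anchor_span(example):
    """Return the anchor tokens of an identification example."""
    return example["tokens"][example["start_index"]:example["end_index"]]

def resolve_head(example):
    """Return the anchor tokens and the missing head of a resolution example."""
    anchors = [example["tokens"][i] for i in example["anchors_indices"]]
    return anchors, example["head"]

# Example dicts copied from the "Data Instances" section above.
identification = {
    "tokens": ["It", "'s", "a", "curious", "thing", ",", "the", "death",
               "of", "a", "loved", "one", "."],
    "start_index": 11,
    "end_index": 12,
    "label": 1,  # 1 = "pos": the sentence contains a numeric fused head
}

resolution = {
    "tokens": ["I", "'m", "eighty", "tomorrow", ".", "Are", "you", "sure", "?"],
    "line_indices": [0, 0, 0, 0, 0, 1, 1, 1, 1],
    "head": ["AGE"],
    "anchors_indices": [2],
}

print(anchor_span(identification))  # ['one']
print(resolve_head(resolution))     # (['eighty'], ['AGE'])
```

In the resolution example, the visible number "eighty" is the anchor, and the head it implicitly refers to is the special category `AGE` rather than a token elsewhere in the sentence.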
5,932
Paul/hatecheck-french
2022-07-05T10:40:23.000Z
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:cc-by-4.0", "arxiv:2206.09917", "regi...
Paul
null
null
0
127
2022-07-05T10:39:16
---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- fr
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: French HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---

# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!

- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online

## Dataset Structure

The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.

**mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").

**functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

**test_case** The test case text.

**label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

**ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

**ref_templ_id** The equivalent to ref_case_id, but for template IDs.

**templ_id** The ID of the template from which the test case was generated.

**case_templ** The template from which the test case was generated (where applicable).

**gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

**label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

**label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

**disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry.

**disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
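As a rough illustration of how the disagreement columns can be used, the sketch below filters a suite down to cases where the annotator majority agrees with the gold label, as one might do before scoring a model. The rows are made-up stand-ins with the column names described above, not real MHC test cases:

```python
# Toy rows mimicking a few of the MHC csv columns (values are invented).
rows = [
    {"mhc_case_id": "french-1", "functionality": "derog_neg_emote_h",
     "label_gold": "hateful", "label_annotated_maj": "hateful",
     "disagreement_in_case": False},
    {"mhc_case_id": "french-2", "functionality": "target_obj_nh",
     "label_gold": "non-hateful", "label_annotated_maj": "hateful",
     "disagreement_in_case": True},
]

# disagreement_in_case should equal (label_annotated_maj != label_gold).
for r in rows:
    assert r["disagreement_in_case"] == (r["label_annotated_maj"] != r["label_gold"])

# Keep only cases where annotators agreed with the gold label.
clean = [r for r in rows if not r["disagreement_in_case"]]
print([r["mhc_case_id"] for r in clean])  # ['french-1']
```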
3,489
metaeval/implicit-hate-stg1
2023-05-31T08:52:07.000Z
[ "task_categories:text-classification", "language:en", "license:unknown", "region:us" ]
metaeval
null
null
0
127
2023-04-17T08:27:05
---
license: unknown
task_categories:
- text-classification
language:
- en
---

https://github.com/SALT-NLP/implicit-hate

```
@inproceedings{elsherief-etal-2021-latent,
    title = "Latent Hatred: A Benchmark for Understanding Implicit Hate Speech",
    author = "ElSherief, Mai and Ziems, Caleb and Muchlinski, David and Anupindi, Vaishnavi and Seybolt, Jordyn and De Choudhury, Munmun and Yang, Diyi",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.29",
    pages = "345--363"
}
```
792