| column | type | min | max |
|--------------|-----------------|-----|-------|
| id | string (length) | 2 | 115 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| author | string (length) | 2 | 42 |
| description | string (length) | 0 | 68.7k |
| citation | string (length) | 0 | 10.7k |
| cardData | null | | |
| likes | int64 | 0 | 3.55k |
| downloads | int64 | 0 | 10.1M |
| card | string (length) | 0 | 1.01M |
Blaise-g/PubMed_summ
2022-07-18T11:41:58.000Z
[ "region:us" ]
Blaise-g
null
null
null
0
3
Entry not found
relbert/analogy_questions
2023-05-16T20:24:12.000Z
[ "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:other", "region:us" ]
relbert
[Analogy Question](https://aclanthology.org/2021.acl-long.280/)
@inproceedings{ushio-etal-2021-bert, title = "{BERT} is to {NLP} what {A}lex{N}et is to {CV}: Can Pre-Trained Language Models Identify Analogies?", author = "Ushio, Asahi and Espinosa Anke, Luis and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.280", doi = "10.18653/v1/2021.acl-long.280", pages = "3609--3624", abstract = "Analogies play a central role in human commonsense reasoning. The ability to recognize analogies such as {``}eye is to seeing what ear is to hearing{''}, sometimes referred to as analogical proportions, shape how we structure knowledge and understand language. Surprisingly, however, the task of identifying such analogies has not yet received much attention in the language model era. In this paper, we analyze the capabilities of transformer-based language models on this unsupervised task, using benchmarks obtained from educational settings, as well as more commonly used datasets. We find that off-the-shelf language models can identify analogies to a certain extent, but struggle with abstract and complex relations, and results are highly sensitive to model architecture and hyperparameters. Overall the best results were obtained with GPT-2 and RoBERTa, while configurations using BERT were not able to outperform word embedding models. Our results raise important questions for future work about how, and to what extent, pre-trained language models capture knowledge about abstract semantic relations.", }
null
2
3
--- language: - en license: - other multilinguality: - monolingual size_categories: - n<1K pretty_name: Analogy Question --- # Dataset Card for "relbert/analogy_questions" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/) - **Dataset:** Analogy Questions ### Dataset Summary This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/). - original analogy questions | name | Size (valid/test) | Num of choice | Num of relation group | Original Reference | |-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:| | `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) | | `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) | | `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) | | `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) | - extra analogy questions | name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference | |:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------| | `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) | | `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) | | 
`conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) | | `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) | | `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) | ## Dataset Structure ### Data Instances An example of `test` looks as follows. ``` { "stem": ["raphael", "painter"], "answer": 2, "choice": [["andersen", "plato"], ["reading", "berkshire"], ["marx", "philosopher"], ["tolstoi", "edison"]] } ``` The `stem` is the query word pair, `choice` lists the candidate word pairs, and `answer` gives the index of the correct candidate, starting from `0`. All data is lowercased except the Google dataset. ### Citation Information ``` @inproceedings{ushio-etal-2021-bert-is, title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?}, author={Ushio, Asahi and Espinosa-Anke, Luis and Schockaert, Steven and Camacho-Collados, Jose}, booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference}, year={2021}, publisher={Association for Computational Linguistics} } ``` ### LICENSE The license of all the resources is [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic or individual research purposes, but restricted for commercial use.
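Since `answer` is a 0-based index into `choice`, scoring a model on this dataset reduces to comparing its predicted index against `answer`. A minimal sketch using only the example instance from the card (plain Python; `correct_pair` is an illustrative helper, not part of the dataset tooling):

```python
# Example instance copied from the card above.
instance = {
    "stem": ["raphael", "painter"],
    "answer": 2,
    "choice": [
        ["andersen", "plato"],
        ["reading", "berkshire"],
        ["marx", "philosopher"],
        ["tolstoi", "edison"],
    ],
}

def correct_pair(example):
    """Resolve the 0-based `answer` index to the correct candidate pair."""
    return example["choice"][example["answer"]]

print(correct_pair(instance))  # ['marx', 'philosopher']
```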
breakend/nllb-multi-domain
2022-08-09T20:44:23.000Z
[ "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|flores", "language:en", "language:ru", "language:ayr", "language:bho", "language:dyu", "language:fur", "lang...
breakend
NLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences.
@article{nllb2022, author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang}, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, year = {2022} }
null
1
3
--- language: - en - ru - ayr - bho - dyu - fur - wol annotations_creators: - found language_creators: - expert-generated license: - cc-by-sa-4.0 multilinguality: - multilingual - translation pretty_name: nllb-multi-domain size_categories: - unknown source_datasets: - extended|flores task_categories: - conditional-text-generation task_ids: - machine-translation paperswithcode_id: flores --- # Dataset Card for NLLB Multi-Domain ## Table of Contents - [Dataset Card for NLLB Multi-Domain](#dataset-card-for-nllb-multi-domain) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Home:** [Flores](https://github.com/facebookresearch/flores/tree/main/nllb_md) - **Repository:** [Github](https://github.com/facebookresearch/flores/tree/main/nllb_md) ### Dataset Summary NLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences. 
### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). FLORES-200 is an extension of this. ### Languages Language | FLORES-200 code ---|--- Central Aymara | ayr_Latn Bhojpuri | bho_Deva Dyula | dyu_Latn Friulian | fur_Latn Russian | rus_Cyrl Wolof | wol_Latn Use a hyphenated pairing to get two languages in one datapoint (e.g., "eng_Latn-rus_Cyrl" will provide sentences in the format below). ## Dataset Structure ### Data Instances See Dataset Viewer. The text is provided as in the original dataset, without further preprocessing or tokenization. ### Data Fields - `id`: Row number for the data entry, starting at 1. - `sentence`: The full sentence in the specific language (may have a _lang suffix for pairings) - `domain`: The domain of the sentence. ### Dataset Creation Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation. ## Additional Information ### Dataset Curators See paper for details. ### Licensing Information Licensed under Creative Commons Attribution-ShareAlike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @article{nllb2022, author = {NLLB Team, Marta R.
Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang}, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, year = {2022} } ``` Please also cite prior work that this dataset builds on: ```bibtex @inproceedings{, title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation}, author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela}, year={2021} } ``` ```bibtex @inproceedings{, title={Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English}, author={Guzm\'{a}n, Francisco and Chen, Peng-Jen and Ott, Myle and Pino, Juan and Lample, Guillaume and Koehn, Philipp and Chaudhary, Vishrav and Ranzato, Marc'Aurelio}, journal={arXiv preprint arXiv:1902.01382}, year={2019} } ```
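The hyphenated pairing convention described in the card is simply `"<src_code>-<tgt_code>"`. A small sketch that builds such config names from the FLORES-200 codes in the language table (the commented `load_dataset` call is an assumption about usage and requires network access):

```python
# FLORES-200 codes copied from the language table in the card.
FLORES_CODES = {
    "Central Aymara": "ayr_Latn",
    "Bhojpuri": "bho_Deva",
    "Dyula": "dyu_Latn",
    "Friulian": "fur_Latn",
    "Russian": "rus_Cyrl",
    "Wolof": "wol_Latn",
}

def pair_config(src_code: str, tgt_code: str) -> str:
    """Build a hyphenated pairing such as 'eng_Latn-rus_Cyrl'."""
    return f"{src_code}-{tgt_code}"

config = pair_config("eng_Latn", FLORES_CODES["Russian"])
print(config)  # eng_Latn-rus_Cyrl

# Hypothetical usage (needs the `datasets` library and network access):
# from datasets import load_dataset
# ds = load_dataset("breakend/nllb-multi-domain", config)
```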
ufukhaman/uspto_balanced_filtered_200k_ipc_patents
2022-07-19T18:50:11.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:USPTO", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:English", "license:mit", "patent", "refined_patents", "patent classification", "uspto", "ipc...
ufukhaman
null
null
null
0
3
--- annotations_creators: - USPTO language: - English license: - mit multilinguality: - monolingual pretty_name: uspto_balanced_filtered_200k_ipc_patents size_categories: - 100K<n<1M source_datasets: - original tags: - patent - refined_patents - patent classification - uspto - ipc task_categories: - text-classification task_ids: - topic-classification ---
arize-ai/fashion_mnist_quality_drift
2022-10-25T10:40:17.000Z
[ "task_categories:image-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
arize-ai
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of product reviews from an e-commerce store. The reviews are labeled on a scale from 1 to 5 (stars). The training & validation sets are fully composed of reviews written in English. However, the production set has some reviews written in Spanish. At Arize, we work to surface this issue and help you solve it.
# @InProceedings{huggingface:dataset, # title = {A great new dataset}, # author={huggingface, Inc. # }, # year={2020} # } #
null
2
3
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|imdb task_categories: - image-classification task_ids: - multi-class-classification pretty_name: sentiment-classification-reviews-with-drift --- # Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
kietzmannlab/ecoset
2022-10-21T15:11:44.000Z
[ "task_categories:image-classification", "task_ids:multi-class-classification", "task_ids:multi-class-image-classification", "source_datasets:original", "license:cc", "other-image-classification", "image-classification", "region:us" ]
kietzmannlab
Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images from 565 basic level categories, chosen to be both (i) frequent in linguistic usage, and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’ is not). Here we collect resources associated with ecoset. This includes the dataset, trained deep neural network models, code to interact with them, and published papers using it.
@article{mehrer2021ecologically, title={An ecologically motivated image dataset for deep learning yields better models of human vision}, author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C}, journal={Proceedings of the National Academy of Sciences}, volume={118}, number={8}, year={2021}, publisher={National Acad Sciences} }
null
7
3
--- license: cc source_datasets: - original task_categories: - image-classification task_ids: - multi-class-classification - multi-class-image-classification paperswithcode_id: ecoset pretty_name: Ecoset tags: - other-image-classification - image-classification --- ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Installation](#installation) - [Install requirements](#install-requirements) - [Download settings](#download-settings) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.kietzmannlab.org/ecoset](https://www.kietzmannlab.org/ecoset/) - **Repository:** [https://codeocean.com/capsule/9570390/tree/v1](https://codeocean.com/capsule/9570390/tree/v1) - **Paper:** [https://www.pnas.org/doi/full/10.1073/pnas.2011417118](https://doi.org/10.1073/pnas.2011417118) - **Point of Contact:** [tim.kietzmann@uni-osnabrueck.de](tim.kietzmann@uni-osnabrueck.de) ### Dataset Summary Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images from 565 basic level categories, chosen to be both (i) frequent in linguistic usage, and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’ is not). 
Ecoset is a typical image recognition dataset, combining images of objects with appropriate labels (one label per image). Importantly, ecoset is intended to provide higher ecological validity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content. For more information on the dataset, consider reading the [original publication](https://doi.org/10.1073/pnas.2011417118). Ecoset consists of a train, test, and validation subset, all of which are openly available to the user. ### Supported Tasks and Leaderboards Ecoset is a large multi-class single-label object recognition image dataset (similar to ImageNet). ## Installation ### Install Requirements In order to work with ecoset, please make sure to install the S3-compatible version of huggingface datasets, which should include the `s3fs`, `botocore` and `boto3` modules: ```bash pip install datasets[s3] ``` If you want to work with the dataset in `Huggingface.datasets`, you might also want to make sure to install PIL (`pip install Pillow`) in order to work with image input. However, downloading the dataset will work despite not having installed PIL. ### Download Settings Please set `ignore_verifications=True` when downloading this dataset; otherwise the download will result in an error: ```python from datasets import load_dataset dataset = load_dataset("kietzmannlab/ecoset", ignore_verifications=True) ``` | NOTE: If you get errors like: `FileNotFoundError: [Errno 2] No such file or directory:'<DATASET_PATH>'` this is likely due to having previously downloaded the dataset and then cancelling the download. If this is the case for you, you can fix this error by manually removing the dataset path and reinstalling the dataset. | | --- | ## Dataset Structure We show detailed information for all the configurations of the dataset. Currently, there is only one setting (`Full`) available, containing all data.
### Data Instances #### Full - **Size of downloaded dataset files:** 155 GB - **Total amount of disk used:** 311 GB ## Dataset Creation A total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%. ### Curation Rationale More information on the curation of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118). ### Source Data The source data is available under: [https://codeocean.com/capsule/9570390/tree/v1](https://codeocean.com/capsule/9570390/tree/v1) ### Annotations Each ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset. ### Personal and Sensitive Information The dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all images with an NSFW score above 0.8. For this dataset, only images with secured license information were used, which should prevent the inclusion of images without consent of the images' authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset Large-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content.
Ecoset was created with the aim of reducing these biases and consequently improving the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118). ### Discussion of Biases Despite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.). ### Other Known Limitations In addition to points mentioned in [Discussion of Biases](#discussion-of-biases), ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infants and adults. ## Additional Information ### Dataset Curators The corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann. ### Licensing Information Ecoset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0).
### Citation Information ``` @article{mehrer2021ecologically, title={An ecologically motivated image dataset for deep learning yields better models of human vision}, author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C}, journal={Proceedings of the National Academy of Sciences}, volume={118}, number={8}, pages={e2011417118}, year={2021}, publisher={National Acad Sciences} } ``` ### Contributions The ecoset dataloader and dataset card was created by [@DiGyt](https://github.com/DiGyt) on behalf of [@kietzmannlab](https://huggingface.co/kietzmannlab). For questions and suggestions feel free to reach out.
Vipitis/Shadertoys-fine
2023-05-04T22:37:17.000Z
[ "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "size_categories:100K<n<1M", "language:en", "language:code", "license:cc-by-nc-sa-3.0", "code", "region:us" ]
Vipitis
null
null
null
2
3
--- annotations_creators: - no-annotation language: - en - code language_creators: - machine-generated license: - cc-by-nc-sa-3.0 multilinguality: [] pretty_name: Shadertoys-fine size_categories: - 100K<n<1M source_datasets: [] tags: - code task_categories: - text-generation task_ids: [] dataset_info: - config_name: default features: - name: name dtype: string - name: code dtype: string - name: source dtype: string - name: author dtype: string splits: - name: train - name: test download_size: 154529204 dataset_size: 0 - config_name: fine features: - name: name dtype: string - name: code dtype: string - name: source dtype: string - name: author dtype: string splits: - name: train num_bytes: 119963236 num_examples: 226910 - name: test num_bytes: 20003783 num_examples: 38356 download_size: 154529204 dataset_size: 139967019 - config_name: return_completion features: - name: body dtype: string - name: return_statement dtype: string splits: - name: train num_bytes: 37597125 num_examples: 84843 - name: test num_bytes: 6360131 num_examples: 14248 download_size: 154529204 dataset_size: 43957256 --- # Dataset Card for Shadertoys-fine ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Licensing Information](#licensing-information) ## Dataset Description - **Repository:** https://github.com/Vipitis/project (private placeholder) ### Dataset Summary Fine-grained variant of the Shadertoys dataset (still WIP), where individual functions are available as datapoints.
### Supported Tasks and Leaderboards `language-modeling`: The dataset can be used to train language models for programming languages. ### Languages - English (names, comments) - Shadercode **programming** language ## Dataset Structure ### Data Instances A data point consists of the function string, its name, as well as a bit of metadata like the author and source URL. (in the future there might be a function string without comments) ``` { 'name': '<type> <name>', 'code': '<type> <name>(<inputs>) { <body> return <outputs>; }\n', 'source': 'https://shadertoy.com/view/<shaderID>', 'author': '<username>' } ``` A data point in the `return_completion` subset for the return-completion task in [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval) includes just two features: ``` { 'body': '<type> <name> <type> <name>(<inputs>) { <body> return', 'return_statement': ' <outputs>; }\n', } ``` ### Data Fields - 'name' function identifier composed of the type and the name of the function - 'code' the raw code (including comments) of the function. - 'source' URL to the shader. It might be on a different renderpass - 'author' username of the shader author - 'body' the body of the function without the return statement (no comments) - 'return_statement' the return statement of the function. Everything in front of the semicolon is kept and whitespace is stripped in the custom Evaluator. ### Data Splits Currently available (shuffled): - train (85.0%) - test (15.0%) These splits should be indexed the same across both subsets. So if you are fine-tuning on the `fine` subset you won't get exposed to the `return_completion` test split. However, there are many duplicates among both subsets and splits.
## Dataset Creation Data retrieved starting 2022-07-20 ### Source Data #### Initial Data Collection and Normalization All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2) and then processed by looking for keywords and counting curly brackets to figure out what is part of a function and what isn't. #### Who are the source language producers? Shadertoy.com contributors who publish shaders as 'public+API' ## Licensing Information The default [license for each shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some shaders might have a different license attached. The dataset currently does not filter by license.
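The return-statement normalization described in the data fields above (keep everything in front of the semicolon, strip whitespace) can be sketched as follows; `clean_return` is a hypothetical helper name, not the actual ShaderEval evaluator:

```python
def clean_return(generated: str) -> str:
    """Keep everything before the first semicolon and strip surrounding
    whitespace, mirroring the post-processing described for the
    `return_statement` field."""
    return generated.split(";")[0].strip()

# A model completion for a `body` ending in `return` might look like this:
completion = " a + b ; // sum of the inputs\n"
print(clean_return(completion))  # a + b
```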
bongsoo/kowiki20220620
2022-10-05T00:08:42.000Z
[ "language:ko", "license:apache-2.0", "region:us" ]
bongsoo
null
null
null
0
3
--- language: - ko license: apache-2.0 --- - kowiki202206 one-line-per-entry corpus
nateraw/fsd50k
2022-07-26T04:44:10.000Z
[ "region:us" ]
nateraw
null
null
null
0
3
Entry not found
biglam/berlin_state_library_ocr
2022-08-05T09:36:24.000Z
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:masked-language-modeling", "task_ids:language-modeling", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:1M<n<10M", "language:de", "language:nl...
biglam
null
null
null
6
3
--- annotations_creators: - machine-generated language: - de - nl - en - fr - es language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - multilingual pretty_name: Berlin State Library OCR size_categories: - 1M<n<10M source_datasets: [] tags: - ocr - library task_categories: - fill-mask - text-generation task_ids: - masked-language-modeling - language-modeling --- # Dataset Card for Berlin State Library OCR data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary > The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945. > At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages. For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012). 
### Supported Tasks and Leaderboards - `language-modeling`: this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data. ### Languages The collection includes material across a large number of languages. The languages of the OCR text have been detected using [langid.py: An Off-the-shelf Language Identification Tool](https://aclanthology.org/P12-3005) (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. **Note:** not all examples may have been successfully matched to the language prediction table from the original data. The frequency of the top ten languages in the dataset is shown below: | | frequency | |----|------------------| | de | 3,209,630 | | nl | 491,322 | | en | 473,496 | | fr | 216,210 | | es | 68,869 | | lb | 33,625 | | la | 27,397 | | pl | 17,458 | | it | 16,012 | | zh | 11,971 | ## Dataset Structure ### Data Instances Each example represents a single page of OCR'd text. A single example of the dataset is as follows: ```python {'aut': 'Doré, Henri', 'date': '1912', 'file name': '00000218.xml', 'language': 'fr', 'language_confidence': 1.0, 'place': 'Chang-hai', 'ppn': '646426230', 'publisher': 'Imprimerie de la Mission Catholique', 'text': "— 338 — Cela fait, on enterre la statuette qu’on vient d’outrager, atten dant la réalisation sur la personne elle-même. C’est l’outrage en effigie. Un deuxième moyen, c’est de représenter l’Esprit Vengeur sous la figure d’un fier-à-bras, armé d’un sabre, ou d’une pique, et de lui confier tout le soin de sa vengeance. 
On multiplie les incantations et les offrandes en son honneur, pour le porter au paroxysme de la fureur, et inspirer à l’Esprit malin l’idée de l’exécution de ses désirs : en un mot, on fait tout pour faire passer en son cœur la rage de vengeance qui consume le sien propre. C’est une invention diabolique imaginée pour assouvir sa haine sur l’ennemi qu’on a en horreur. Ailleurs, ce n’est qu’une figurine en bois ou en papier, qui est lancée contre l’ennemi; elle se dissimule, ou prend des formes fantastiques pour acomplir son œuvre de vengeance. Qu’on se rappelle la panique qui régna dans la ville de Nan- king ifâ ffl, et ailleurs, l’année où de méchantes gens répandirent le bruit que des hommes de papier volaient en l’air et coupaient les tresses de cheveux des Chinois. Ce fut une véritable terreur, tous étaient affolés, et il y eut à cette occasion de vrais actes de sauvagerie. Voir historiettes sur les envoûtements : Wieger Folk-Lore, N os 50, 128, 157, 158, 159. Corollaire. Les Tao-niu jift fx ou femmes “ Tao-clie'’. A cette super stition peut se rapporter la pratique des magiciennes du Kiang- sou ■n: m, dans les environs de Chang-hai ± m, par exemple. Ces femmes portent constamment avec- elles une statue réputée merveilleuse : elle n’a que quatre ou cinq pouces de hauteur ordinairement. 
A force de prières, d’incantations, elles finissent par la rendre illuminée, vivante et parlante, ou plutôt piaillarde, car elle ne répond que par des petits cris aigus et répétés aux demandes qu’on lui adressé; elle paraît comme animée, sautille,", 'title': 'Les pratiques superstitieuses', 'wc': [1.0, 0.7266666889, 1.0, 0.9950000048, 0.7059999704, 0.5799999833, 0.7142857313, 0.7250000238, 0.9855555296, 0.6880000234, 0.7099999785, 0.7054545283, 1.0, 0.8125, 0.7950000167, 0.5681818128, 0.5500000119, 0.7900000215, 0.7662500143, 0.8830000162, 0.9359999895, 0.7411110997, 0.7950000167, 0.7962499857, 0.6949999928, 0.8937500119, 0.6299999952, 0.8820000291, 1.0, 0.6781818271, 0.7649999857, 0.437142849, 1.0, 1.0, 0.7416666746, 0.6474999785, 0.8166666627, 0.6825000048, 0.75, 0.7033333182, 0.7599999905, 0.7639999986, 0.7516666651, 1.0, 1.0, 0.5466666818, 0.7571428418, 0.8450000286, 1.0, 0.9350000024, 1.0, 1.0, 0.7099999785, 0.7250000238, 0.8588888645, 0.8366666436, 0.7966666818, 1.0, 0.9066666961, 0.7288888693, 1.0, 0.8333333135, 0.8787500262, 0.6949999928, 0.8849999905, 0.5816666484, 0.5899999738, 0.7922222018, 1.0, 1.0, 0.6657142639, 0.8650000095, 0.7674999833, 0.6000000238, 0.9737499952, 0.8140000105, 0.978333354, 1.0, 0.7799999714, 0.6650000215, 1.0, 0.823333323, 1.0, 0.9599999785, 0.6349999905, 1.0, 0.9599999785, 0.6025000215, 0.8525000215, 0.4875000119, 0.675999999, 0.8833333254, 0.6650000215, 0.7566666603, 0.6200000048, 0.5049999952, 0.4524999857, 1.0, 0.7711111307, 0.6666666865, 0.7128571272, 1.0, 0.8700000048, 0.6728571653, 1.0, 0.6800000072, 0.6499999762, 0.8259999752, 0.7662500143, 0.6725000143, 0.8362500072, 1.0, 0.6600000262, 0.6299999952, 0.6825000048, 0.7220000029, 1.0, 1.0, 0.6587499976, 0.6822222471, 1.0, 0.8339999914, 0.6449999809, 0.7062500119, 0.9150000215, 0.8824999928, 0.6700000167, 0.7250000238, 0.8285714388, 0.5400000215, 1.0, 0.7966666818, 0.7350000143, 0.6188889146, 0.6499999762, 1.0, 0.7459999919, 0.5799999833, 0.7480000257, 1.0, 0.9333333373, 
0.790833354, 0.5550000072, 0.6700000167, 0.7766666412, 0.8280000091, 0.7250000238, 0.8669999838, 0.5899999738, 1.0, 0.7562500238, 1.0, 0.7799999714, 0.8500000238, 0.4819999933, 0.9350000024, 1.0, 0.8399999738, 0.7950000167, 1.0, 0.9474999905, 0.453333348, 0.6575000286, 0.9399999976, 0.6733333468, 0.8042857051, 0.7599999905, 1.0, 0.7355555296, 0.6499999762, 0.7118181586, 1.0, 0.621999979, 0.7200000286, 1.0, 0.853333354, 0.6650000215, 0.75, 0.7787500024, 1.0, 0.8840000033, 1.0, 0.851111114, 1.0, 0.9142857194, 1.0, 0.8899999857, 1.0, 0.9024999738, 1.0, 0.6166666746, 0.7533333302, 0.7766666412, 0.6637499928, 1.0, 0.8471428752, 0.7012500167, 0.6600000262, 0.8199999928, 1.0, 0.7766666412, 0.3899999857, 0.7960000038, 0.8050000072, 1.0, 0.8000000119, 0.7620000243, 1.0, 0.7163636088, 0.5699999928, 0.8849999905, 0.6166666746, 0.8799999952, 0.9058333039, 1.0, 0.6866666675, 0.7810000181, 0.3400000036, 0.2599999905, 0.6333333254, 0.6524999738, 0.4875000119, 0.7425000072, 0.75, 0.6863636374, 1.0, 0.8742856979, 0.137500003, 0.2099999934, 0.4199999869, 0.8216666579, 1.0, 0.7563636303, 0.3000000119, 0.8579999804, 0.6679999828, 0.7099999785, 0.7875000238, 0.9499999881, 0.5799999833, 0.9150000215, 0.6600000262, 0.8066666722, 0.729090929, 0.6999999881, 0.7400000095, 0.8066666722, 0.2866666615, 0.6700000167, 0.9225000143, 1.0, 0.7599999905, 0.75, 0.6899999976, 0.3600000143, 0.224999994, 0.5799999833, 0.8874999881, 1.0, 0.8066666722, 0.8985714316, 0.8827272654, 0.8460000157, 0.8880000114, 0.9533333182, 0.7966666818, 0.75, 0.8941666484, 1.0, 0.8450000286, 0.8666666746, 0.9533333182, 0.5883333087, 0.5799999833, 0.6549999714, 0.8600000143, 1.0, 0.7585714459, 0.7114285827, 1.0, 0.8519999981, 0.7250000238, 0.7437499762, 0.6639999747, 0.8939999938, 0.8877778053, 0.7300000191, 1.0, 0.8766666651, 0.8019999862, 0.8928571343, 1.0, 0.853333354, 0.5049999952, 0.5416666865, 0.7963636518, 0.5600000024, 0.8774999976, 0.6299999952, 0.5749999881, 0.8199999928, 0.7766666412, 1.0, 0.9850000143, 
0.5674999952, 0.6240000129, 1.0, 0.9485714436, 1.0, 0.8174999952, 0.7919999957, 0.6266666651, 0.7887499928, 0.7825000286, 0.5366666913, 0.65200001, 0.832857132, 0.7488889098]} ``` ### Data Fields - `file name`: filename of the original XML file - `text`: OCR'd text for that page of the item - `wc`: the word confidence for each token predicted by the OCR engine - `ppn`: the 'Pica production number', an internal ID used by the library (see [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2702544.svg)](https://doi.org/10.5281/zenodo.2702544) for more details) - `language`: language predicted by `langid.py` (see above for more details) - `language_confidence`: confidence score given by `langid.py` - `publisher`: publisher of the item in which the text appears - `place`: place of publication of the item in which the text appears - `date`: date of the item in which the text appears - `title`: title of the item in which the text appears - `aut`: author of the item in which the text appears ### Data Splits This dataset contains only a single split, `train`. ## Dataset Creation The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo. ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo. This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library. The [dataprep.ipynb](https://huggingface.co/datasets/biglam/berlin_state_library_ocr/blob/main/dataprep.ipynb) notebook was used to create this dataset. 
To make the dataset more useful for training language models, the following steps were carried out: - the CSV `xml2csv_alto.csv`, which contains the full-text corpus per document page (incl. OCR word confidences), was loaded using the `datasets` library - this CSV was augmented with language information from `corpus-language.pkl`. **Note:** some examples have no match here; often this is because a page is blank, but some pages with actual text may also lack a predicted language - the CSV was further augmented by trying to map the PPN to fields in a metadata download created using [https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py](https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py). **Note:** not all examples are successfully matched to this metadata download. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process This dataset contains machine-produced annotations for: - the confidence scores the OCR engines used to produce the full-text materials - the predicted languages and associated confidence scores produced by `langid.py` The dataset also contains metadata for the following fields: - author - publisher - the place of publication - title #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information This dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data. 
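One practical mitigation is to filter training data using the machine-produced annotations this card describes (the language prediction and the per-token OCR word confidences). A stdlib-only sketch over toy records — the field names match this dataset, but the thresholds are illustrative assumptions:

```python
from statistics import mean

def keep(example, lang="de", min_lang_conf=0.9, min_mean_wc=0.8):
    """Keep pages confidently identified as `lang` whose mean OCR word
    confidence (`wc`) suggests reasonably clean text."""
    if example["language"] != lang:
        return False
    if example["language_confidence"] < min_lang_conf:
        return False
    wc = example["wc"]
    return bool(wc) and mean(wc) >= min_mean_wc

pages = [
    {"language": "de", "language_confidence": 1.0, "wc": [0.95, 0.9, 0.85]},
    {"language": "de", "language_confidence": 1.0, "wc": [0.4, 0.5]},
    {"language": "fr", "language_confidence": 1.0, "wc": [0.99]},
]
print([keep(p) for p in pages])  # → [True, False, False]
```

The same predicate can be passed directly to `datasets.Dataset.filter` when working with the full dataset.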
[More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Initial data created by: Labusch, Kai; Zellhöfer, David ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{labusch_kai_2019_3257041, author = {Labusch, Kai and Zellhöfer, David}, title = {{OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)}}, month = jun, year = 2019, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3257041}, url = {https://doi.org/10.5281/zenodo.3257041} } ``` ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
chintagunta85/bionlp2
2022-07-28T09:04:24.000Z
[ "region:us" ]
chintagunta85
[BioNLP2004 NER dataset](https://aclanthology.org/W04-1213.pdf)
@inproceedings{collier-kim-2004-introduction, title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}", author = "Collier, Nigel and Kim, Jin-Dong", booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})", month = aug # " 28th and 29th", year = "2004", address = "Geneva, Switzerland", publisher = "COLING", url = "https://aclanthology.org/W04-1213", pages = "73--78", } https://huggingface.co/datasets/chintagunta85/bionlp/raw/main/test_bionlp.json
null
0
3
Entry not found
biglam/yalta_ai_tabular_dataset
2022-10-23T21:56:38.000Z
[ "task_categories:object-detection", "annotations_creators:expert-generated", "language_creators:expert-generated", "size_categories:n<1K", "license:cc-by-4.0", "manuscripts", "LAM", "arxiv:2207.11230", "region:us" ]
biglam
Yalt AI Tabular Dataset
@dataset{clerice_thibault_2022_6827706, author = {Clérice, Thibault}, title = {YALTAi: Tabular Dataset}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6827706}, url = {https://doi.org/10.5281/zenodo.6827706} }
null
1
3
--- annotations_creators: - expert-generated language: [] language_creators: - expert-generated license: - cc-by-4.0 multilinguality: [] pretty_name: YALTAi Tabular Dataset size_categories: - n<1K source_datasets: [] tags: - manuscripts - LAM task_categories: - object-detection task_ids: [] --- # YALTAi Tabular Dataset ## Table of Contents - [YALTAi Tabular Dataset](#YALTAi-Tabular-Dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://doi.org/10.5281/zenodo.6827706](https://doi.org/10.5281/zenodo.6827706) - **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230) ### Dataset Summary This dataset contains a subset of data used in the paper [You Actually Look Twice At it (YALTAi): 
using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information, annotated with the following objects: "Header", "Col", "Marginal" and "Text". ### Supported Tasks and Leaderboards - `object-detection`: This dataset can be used to train a model for object detection on historical document images. ## Dataset Structure This dataset has two configurations. Both cover the same data and annotations but provide the annotations in different forms, to make it easier to integrate the data with existing processing pipelines. - The first configuration, `YOLO`, uses the data's original format. - The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This makes it easier to work with the feature extractors of the object-detection models in the `transformers` library, which expect data in a COCO-style format. 
### Data Instances An example instance from the COCO config: ``` {'height': 2944, 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>, 'image_id': 0, 'objects': [{'area': 435956, 'bbox': [0.0, 244.0, 1493.0, 292.0], 'category_id': 0, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 88234, 'bbox': [305.0, 127.0, 562.0, 157.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5244, 'bbox': [1416.0, 196.0, 92.0, 57.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5720, 'bbox': [1681.0, 182.0, 88.0, 65.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 374085, 'bbox': [0.0, 540.0, 163.0, 2295.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 577599, 'bbox': [104.0, 537.0, 253.0, 2283.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 598670, 'bbox': [304.0, 533.0, 262.0, 2285.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 56, 'bbox': [284.0, 539.0, 8.0, 7.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 1868412, 'bbox': [498.0, 513.0, 812.0, 2301.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 307800, 'bbox': [1250.0, 512.0, 135.0, 2280.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 494109, 'bbox': [1330.0, 503.0, 217.0, 2277.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 52, 'bbox': [1734.0, 1013.0, 4.0, 13.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 90666, 'bbox': [0.0, 1151.0, 54.0, 1679.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}], 'width': 2064} ``` An 
example instance from the YOLO config: ``` python {'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>, 'objects': {'bbox': [[747, 390, 1493, 292], [586, 206, 562, 157], [1463, 225, 92, 57], [1725, 215, 88, 65], [80, 1688, 163, 2295], [231, 1678, 253, 2283], [435, 1675, 262, 2285], [288, 543, 8, 7], [905, 1663, 812, 2301], [1318, 1653, 135, 2280], [1439, 1642, 217, 2277], [1737, 1019, 4, 13], [26, 1991, 54, 1679]], 'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}} ``` ### Data Fields The fields for the YOLO config: - `image`: the image - `objects`: the annotations which consist of: - `bbox`: a list of bounding boxes for the image - `label`: a list of labels for this image The fields for the COCO config: - `height`: height of the image - `width`: width of the image - `image`: image - `image_id`: id for the image - `objects`: annotations in COCO format, consisting of a list containing dictionaries with the following keys: - `bbox`: bounding boxes for the images - `category_id`: a label for the image - `image_id`: id for the image - `iscrowd`: COCO `iscrowd` flag - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts) ### Data Splits The dataset contains a train, validation and test split with the following numbers per split: | | train | validation | test | |----------|-------|------------|------| | examples | 196 | 22 | 135 | ## Dataset Creation > [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8 . 
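Comparing the two instances above, the `YOLO` config stores each box as absolute `(center-x, center-y, width, height)`, while the `COCO` config stores `(x-min, y-min, width, height)`. A small sketch of the conversion (plain Python; the instances above show differences of up to a pixel from rounding):

```python
def yolo_center_to_coco(bbox):
    """Convert an absolute (cx, cy, w, h) box to COCO-style (x_min, y_min, w, h)."""
    cx, cy, w, h = bbox
    return [cx - w / 2, cy - h / 2, w, h]

# Second box of the YOLO instance above:
print(yolo_center_to_coco([586, 206, 562, 157]))  # → [305.0, 127.5, 562, 157]
# The corresponding COCO bbox is [305.0, 127.0, 562.0, 157.0] after rounding.
```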
### Curation Rationale This dataset was created to produce a simplified version of the [Lectaurep Repertoires dataset](https://github.com/HTR-United/lectaurep-repertoires), which was found to contain: > around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col (p. 8) ### Source Data #### Initial Data Collection and Normalization The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the [ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities)](https://www.inria.fr/en/almanach) team at Inria and the EPHE (Ecole Pratique des Hautes Etudes), in partnership with the Ministry of Culture. > The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745. #### Who are the source language producers? [More information needed] ### Annotations | | Train | Dev | Test | Total | Average area | Median area | |----------|-------|-----|------|-------|--------------|-------------| | Col | 724 | 105 | 829 | 1658 | 9.32 | 6.33 | | Header | 103 | 15 | 42 | 160 | 6.78 | 7.10 | | Marginal | 60 | 8 | 0 | 68 | 0.70 | 0.71 | | Text | 13 | 5 | 0 | 18 | 0.01 | 0.00 | #### Annotation process [More information needed] #### Who are the annotators? [More information needed] ### Personal and Sensitive Information This data does not contain information relating to living individuals. ## Considerations for Using the Data ### Social Impact of Dataset A growing number of datasets are related to page layout for historical documents. 
This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition. ### Discussion of Biases Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed. ### Other Known Limitations [More information needed] ## Additional Information ### Dataset Curators ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{clerice_thibault_2022_6827706, author = {Clérice, Thibault}, title = {YALTAi: Tabular Dataset}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6827706}, url = {https://doi.org/10.5281/zenodo.6827706} } ``` [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6827706.svg)](https://doi.org/10.5281/zenodo.6827706) ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
BirdL/DALL-E-Dogs
2022-09-28T21:09:11.000Z
[ "task_categories:image-classification", "task_categories:unconditional-image-generation", "size_categories:1K<n<10K", "license:other", "region:us" ]
BirdL
null
null
null
1
3
--- annotations_creators: [] language: [] language_creators: [] license: - other multilinguality: [] pretty_name: DALL-E Dogs Dataset size_categories: - 1K<n<10K source_datasets: [] tags: [] task_categories: - image-classification - unconditional-image-generation task_ids: [] --- DALL-E-Dogs is a dataset meant to produce a synthetic animal dataset. It is a precursor to DALL-E-Cats. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/).
alex-apostolo/filtered-cuad
2022-08-04T06:24:04.000Z
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:cuad", "language:en", "license:cc-by-4.0", "arxiv:2103.06...
alex-apostolo
null
null
null
1
3
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - cuad task_categories: - question-answering task_ids: - closed-domain-qa - extractive-qa paperswithcode_id: cuad pretty_name: CUAD train-eval-index: - config: default task: question-answering task_id: extractive_question_answering splits: train_split: train eval_split: test col_mapping: question: question context: context answers: text: text answer_start: answer_start metrics: - type: cuad name: CUAD --- # Dataset Card for filtered_cuad ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad) - **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/) - **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268) - **Point of 
Contact:** [Atticus Project Team](mailto:info@atticusprojectai.org) ### Dataset Summary Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement Date prior to 2002 and contracts which are not business-to-business. Of the original 41 categories, we kept the 12 we considered most crucial. The goal was a small dataset for quickly fine-tuning different models without sacrificing the most important categories. Most questions were removed because they had no answer; unanswered questions can skew metrics such as the F1 score and the AUPR curve. CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. 
``` This example was too long and was cropped: { "answers": { "answer_start": [44], "text": ['DISTRIBUTOR AGREEMENT'] }, "context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...', "id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0", "question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract", "title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT" } ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits This dataset is split into train/test set. Number of samples in each set is given below: | | Train | Test | | ----- | ------ | ---- | | CUAD | 5442 | 936 | ## Dataset Creation ### Curation Rationale A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring. Contract review costs also affect consumers. 
Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies. To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack. ### Source Data #### Initial Data Collection and Normalization The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet. 
| Type of contract | # of docs |
|------------------|-----------|
| Affiliate Agreement | 8 |
| Agency Agreement | 8 |
| Collaboration/Cooperation Agreement | 26 |
| Co-Branding Agreement | 6 |
| Consulting Agreement | 11 |
| Development Agreement | 28 |
| Distributor Agreement | 23 |
| Endorsement Agreement | 10 |
| Franchise Agreement | 14 |
| Hosting Agreement | 12 |
| IP Agreement | 16 |
| Joint Venture Agreement | 22 |
| License Agreement | 32 |
| Maintenance Agreement | 24 |
| Manufacturing Agreement | 6 |
| Marketing Agreement | 16 |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3 |
| Outsourcing Agreement | 12 |
| Promotion Agreement | 9 |
| Reseller Agreement | 12 |
| Service Agreement | 24 |
| Sponsorship Agreement | 17 |
| Supply Agreement | 13 |
| Strategic Alliance Agreement | 32 |
| Transportation Agreement | 1 |
| **TOTAL** | **385** |

Categories: Document Name; Parties; Agreement Date; Effective Date; Expiration Date; Renewal Term; Notice Period To Terminate Renewal; Governing Law; Non-Compete; Exclusivity; Change Of Control; Anti-Assignment. #### Who are the source language producers? The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD. ### Annotations #### Annotation process The labeling process included multiple steps to ensure accuracy: 1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours. 2. Law Student Label: law students conducted manual contract review and labeling in eBrevia. 3. 
Key Word Search: law students conducted keyword searches in eBrevia to capture additional categories that had been missed during the “Student Label” step. 4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category, and highlighted clauses that they believed were mislabeled. 5. Attorney Review: experienced attorneys reviewed the category-by-category reports with the students' comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly. 6. eBrevia Extras Review: attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels. 7. Final Report: the final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer. #### Who are the annotators? Answered in the section above. ### Personal and Sensitive Information Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*), underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”. For any categories that require a “Yes/No” answer, annotators included full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators selected text for the full sentence, under the instruction of “from period to period”. 
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not contiguous in a contract. The answer is presented in a unified format separated by semicolons, e.g. “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”. Some sentences in the files include confidential legends that are not part of the contracts. An example of such a confidential legend is as follows: THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION. Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category. To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>". Another example: for “Effective Date”, the contract includes the sentence “This Agreement is effective as of the date written above”, which appears after the date “January 1, 2010”. 
The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.” Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Attorney Advisors Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu Law Student Leaders John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran Law Student Contributors Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin Technical Advisors & Contributors Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen ### Licensing Information CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and is free to the public for commercial and non-commercial use. The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR. Privacy Policy & Disclaimers The categories and the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending comments and suggestions to info@atticusprojectai.org. 
Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved. The use of CUAD is subject to The Atticus Project's privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer. ### Citation Information ``` @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
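The "<omitted>" convention described in the annotation process above means that a single annotated answer may stitch together non-contiguous contract spans. A minimal sketch of post-processing such answers (the helper name is ours, not part of CUAD):

```python
def split_annotation(answer: str) -> list:
    """Split a CUAD-style annotation on the "<omitted>" marker into the
    contiguous text segments that actually appear in the contract."""
    return [segment.strip() for segment in answer.split("<omitted>") if segment.strip()]

segments = split_annotation(
    "January 1, 2010 <omitted> This Agreement is effective as of the date written above."
)
# segments == ["January 1, 2010",
#              "This Agreement is effective as of the date written above."]
```

Each recovered segment can then be located in the contract text independently, e.g. for span-highlighting evaluation.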
andreagasparini/librispeech_train_clean_only
2022-08-06T10:49:25.000Z
[ "region:us" ]
andreagasparini
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
@inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} }
null
0
3
Entry not found
biglam/yalta_ai_segmonto_manuscript_dataset
2022-08-12T08:33:43.000Z
[ "task_categories:object-detection", "annotations_creators:expert-generated", "language_creators:expert-generated", "size_categories:n<1K", "license:cc-by-4.0", "manuscripts", "LAM", "arxiv:2207.11230", "region:us" ]
biglam
YALTAi: Segmonto Manuscript and Early Printed Book Dataset
@dataset{clerice_thibault_2022_6814770, author = {Clérice, Thibault}, title = {{YALTAi: Segmonto Manuscript and Early Printed Book Dataset}}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6814770}, url = {https://doi.org/10.5281/zenodo.6814770} }
null
1
3
--- annotations_creators: - expert-generated language: [] language_creators: - expert-generated license: - cc-by-4.0 multilinguality: [] pretty_name: YALTAi Tabular Dataset size_categories: - n<1K source_datasets: [] tags: - manuscripts - LAM task_categories: - object-detection task_ids: [] --- # YALTAi Segmonto Manuscript and Early Printed Book Dataset ## Table of Contents - [YALTAi Segmonto Manuscript and Early Printed Book Dataset](#yaltai-segmonto-manuscript-and-early-printed-book-dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://doi.org/10.5281/zenodo.6814770](https://doi.org/10.5281/zenodo.6814770) - **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230) ### Dataset Summary 
This dataset contains a subset of data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset contains images from digitised manuscripts and early printed books with the following labels: - DamageZone - DigitizationArtefactZone - DropCapitalZone - GraphicZone - MainZone - MarginTextZone - MusicZone - NumberingZone - QuireMarksZone - RunningTitleZone - SealZone - StampZone - TableZone - TitlePageZone ### Supported Tasks and Leaderboards - `object-detection`: This dataset can be used to train a model for object-detection on historic document images. ## Dataset Structure This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines. - The first configuration, `YOLO`, uses the data's original format. - The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This is done to make it easier to work with the `feature_extractor` from the `Transformers` models for object detection, which expect data to be in a COCO style format. 
### Data Instances An example instance from the COCO config: ```python {'height': 5610, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3782x5610 at 0x7F3B785609D0>, 'image_id': 0, 'objects': [{'area': 203660, 'bbox': [1545.0, 207.0, 1198.0, 170.0], 'category_id': 9, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 137034, 'bbox': [912.0, 1296.0, 414.0, 331.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 110865, 'bbox': [2324.0, 908.0, 389.0, 285.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 281634, 'bbox': [2308.0, 3507.0, 438.0, 643.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5064268, 'bbox': [949.0, 471.0, 1286.0, 3938.0], 'category_id': 4, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5095104, 'bbox': [2303.0, 539.0, 1338.0, 3808.0], 'category_id': 4, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}], 'width': 3782} ``` An example instance from the YOLO config: ```python {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3782x5610 at 0x7F3B785EFA90>, 'objects': {'bbox': [[2144, 292, 1198, 170], [1120, 1462, 414, 331], [2519, 1050, 389, 285], [2527, 3828, 438, 643], [1593, 2441, 1286, 3938], [2972, 2444, 1338, 3808]], 'label': [9, 2, 2, 2, 4, 4]}} ``` ### Data Fields The fields for the YOLO config: - `image`: the image - `objects`: the annotations which consist of: - `bbox`: a list of bounding boxes for the image - `label`: a list of labels for this image The fields for the COCO config: - `height`: height of the image - `width`: width of the image - `image`: image - `image_id`: id for the image - `objects`: annotations in COCO format, consisting of a list containing dictionaries with the following keys: - `bbox`: bounding boxes for the images - `category_id`: a label for the image - `image_id`: id for the image - 
`iscrowd`: the COCO crowd flag - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts) ### Data Splits The dataset contains a train, validation and test split with the following numbers per split: | Dataset | Number of images | |---------|------------------| | Train | 854 | | Dev | 154 | | Test | 139 | A more detailed summary of the dataset (copied from the paper): | | Train | Dev | Test | Total | Average area | Median area | |--------------------------|------:|----:|-----:|------:|-------------:|------------:| | DropCapitalZone | 1537 | 180 | 222 | 1939 | 0.45 | 0.26 | | MainZone | 1408 | 253 | 258 | 1919 | 28.86 | 26.43 | | NumberingZone | 421 | 57 | 76 | 554 | 0.18 | 0.14 | | MarginTextZone | 396 | 59 | 49 | 504 | 1.19 | 0.52 | | GraphicZone | 289 | 54 | 50 | 393 | 8.56 | 4.31 | | MusicZone | 237 | 71 | 0 | 308 | 1.22 | 1.09 | | RunningTitleZone | 137 | 25 | 18 | 180 | 0.95 | 0.84 | | QuireMarksZone | 65 | 18 | 9 | 92 | 0.25 | 0.21 | | StampZone | 85 | 5 | 1 | 91 | 1.69 | 1.14 | | DigitizationArtefactZone | 1 | 0 | 32 | 33 | 2.89 | 2.79 | | DamageZone | 6 | 1 | 14 | 21 | 1.50 | 0.02 | | TitlePageZone | 4 | 0 | 1 | 5 | 48.27 | 63.39 | ## Dataset Creation This dataset is derived from: - CREMMA Medieval (Pinche, A. (2022). Cremma Medieval (Version Bicerin 1.1.0) [Data set](https://github.com/HTR-United/cremma-medieval)) - CREMMA Medieval Lat (Clérice, T. and Vlachou-Efstathiou, M. (2022). Cremma Medieval Latin [Data set](https://github.com/HTR-United/cremma-medieval-lat)) - Eutyches (Vlachou-Efstathiou, M. Voss.Lat.O.41 - Eutyches "de uerbo" glossed [Data set](https://github.com/malamatenia/Eutyches)) - Gallicorpora HTR-Incunable-15e-Siecle (Pinche, A., Gabay, S., Leroy, N., & Christensen, K. Données HTR incunable du 15e siècle [Computer software](https://github.com/Gallicorpora/HTR-incunable-15e-siecle)) - Gallicorpora HTR-MSS-15e-Siecle (Pinche, A., Gabay, S., Leroy, N., & Christensen, K. 
Données HTR manuscrits du 15e siècle [Computer software](https://github.com/Gallicorpora/HTR-MSS-15e-Siecle)) - Gallicorpora HTR-imprime-gothique-16e-siecle (Pinche, A., Gabay, S., Vlachou-Efstathiou, M., & Christensen, K. HTR-imprime-gothique-16e-siecle [Computer software](https://github.com/Gallicorpora/HTR-imprime-gothique-16e-siecle)) plus a few hundred newly annotated images; in particular, the test set is completely novel and based on early prints and manuscripts. These additional annotations were created by correcting an early version of the model developed in the paper using the [roboflow](https://roboflow.com/) platform. ### Curation Rationale [More information needed] ### Source Data The sources of the data are described above. #### Initial Data Collection and Normalization [More information needed] #### Who are the source language producers? [More information needed] ### Annotations #### Annotation process Additional annotations produced for this dataset were created by correcting an early version of the model developed in the paper using the [roboflow](https://roboflow.com/) platform. #### Who are the annotators? [More information needed] ### Personal and Sensitive Information This data does not contain information relating to living individuals. ## Considerations for Using the Data ### Social Impact of Dataset A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition. ### Discussion of Biases Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed. 
### Other Known Limitations [More information needed] ## Additional Information ### Dataset Curators ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{clerice_thibault_2022_6814770, author = {Clérice, Thibault}, title = {{YALTAi: Segmonto Manuscript and Early Printed Book Dataset}}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6814770}, url = {https://doi.org/10.5281/zenodo.6814770} } ``` [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6814770.svg)](https://doi.org/10.5281/zenodo.6814770) ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
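The example instances in the card above suggest that the YOLO config stores absolute-pixel `[x_center, y_center, width, height]` boxes while the COCO config stores `[x_min, y_min, width, height]`. Under that assumption (which the paired example boxes are consistent with), a conversion between the two configurations might look like:

```python
def yolo_center_to_coco(bbox):
    """Convert an absolute-pixel [x_center, y_center, w, h] box to COCO's
    [x_min, y_min, w, h] by shifting the center to the top-left corner."""
    x_center, y_center, width, height = bbox
    return [x_center - width / 2, y_center - height / 2, width, height]

# First box of the example instance shown in the card:
print(yolo_center_to_coco([2144, 292, 1198, 170]))  # [1545.0, 207.0, 1198, 170]
```

This matches the first annotation of the example instance, whose COCO bbox is `[1545.0, 207.0, 1198.0, 170.0]`. Note that canonical YOLO label files usually store coordinates normalized to image size; the absolute-pixel form here follows what the card's instances show.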
RUCAIBox/Story-Generation
2023-03-03T14:42:27.000Z
[ "task_categories:text-generation", "multilinguality:monolingual", "language:en", "story-generation", "region:us" ]
RUCAIBox
null
null
null
1
3
--- language: - en multilinguality: - monolingual task_categories: - text-generation task_ids: [] tags: - story-generation --- These are the story generation datasets collected by TextBox, including: - ROCStories (roc) - WritingPrompts (wp) - Hippocorpus (hc) - WikiPlots (wikip) - ChangeMyView (cmv). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
jakartaresearch/id-paraphrase-detection
2022-08-14T02:10:33.000Z
[ "task_categories:sentence-similarity", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|msrp", "language:id", "license:cc-by-4.0", "msrp", "id-msrp", "paraphrase-detection", "region:us" ]
jakartaresearch
This dataset is built as a playground for sequence to sequence classification
null
null
3
3
--- annotations_creators: - found language: - id language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Indonesian Paraphrase Detection size_categories: - 1K<n<10K source_datasets: - extended|msrp tags: - msrp - id-msrp - paraphrase-detection task_categories: - sentence-similarity task_ids: [] --- # Dataset Card for Indonesian Sentence Paraphrase Detection ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset is originally from the [Microsoft Research Paraphrase Corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398). We translated the text into Indonesian (Bahasa Indonesia) using Google Translate. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
Gabriel/pubmed_swe
2022-10-29T11:54:25.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/pubmed", "language:sv", "license:other", "conditional-text-generation", "region:us" ]
Gabriel
null
null
null
0
3
--- language: - sv license: - other size_categories: - 10K<n<100K source_datasets: - https://github.com/huggingface/datasets/tree/master/datasets/pubmed task_categories: - summarization - text2text-generation task_ids: [] tags: - conditional-text-generation --- # Dataset Card for Swedish PubMed Dataset The Swedish PubMed dataset was machine-translated from the English original, with the sole aim of improving downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read about the full details in the original English version: https://huggingface.co/datasets/pubmed ### Data Fields - `document`: a string containing the body of the paper - `summary`: a string containing the abstract of the paper ### Data Splits The Swedish PubMed dataset follows the same splits as the original English version and has a single split: _train_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 90,000 |
Yaxin/SemEval2016Task5NLTK
2023-03-19T05:11:38.000Z
[ "region:us" ]
Yaxin
A collection of SemEval-2016 Task 5 data, specifically designed to aid research in multilingual Aspect-Based Sentiment Analysis.
@inproceedings{pontiki2016semeval, title={Semeval-2016 task 5: Aspect based sentiment analysis}, author={Pontiki, Maria and Galanis, Dimitrios and Papageorgiou, Haris and Androutsopoulos, Ion and Manandhar, Suresh and Al-Smadi, Mohammad and Al-Ayyoub, Mahmoud and Zhao, Yanyan and Qin, Bing and De Clercq, Orph{\'e}e and others}, booktitle={International workshop on semantic evaluation}, pages={19--30}, year={2016} }
null
0
3
Entry not found
shwetha729/quantum-machine-learning
2022-08-16T01:08:21.000Z
[ "license:gpl", "region:us" ]
shwetha729
null
null
null
0
3
--- license: gpl --- A continuous data scrape of arXiv and Google Scholar papers on quantum machine learning, particularly regarding climate.
copenlu/tydiqa_copenlu
2022-08-16T12:10:21.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:extended|wikipedia", "language:ar", "language:bn", "language:en", "language:fi", "l...
copenlu
null
null
null
0
3
--- pretty_name: TyDi QA annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar - bn - en - fi - id - ja - ko - ru - sw - te - th license: - apache-2.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - extended|wikipedia task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: tydi-qa --- # Dataset Card for "tydiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3726.74 MB - **Size of the generated dataset:** 5812.92 MB - **Total amount of disk used:** 9539.67 MB ### Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but don’t know it yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### primary_task - **Size of downloaded dataset files:** 1863.37 MB - **Size of the generated dataset:** 5757.59 MB - **Total amount of disk used:** 7620.96 MB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "annotations": { "minimal_answers_end_byte": [-1, -1, -1], "minimal_answers_start_byte": [-1, -1, -1], "passage_answer_candidate_index": [-1, -1, -1], "yes_no_answer": ["NONE", "NONE", "NONE"] }, "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...", "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร", "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...", "language": "thai", "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...", "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..." } ``` #### secondary_task - **Size of downloaded dataset files:** 1863.37 MB - **Size of the generated dataset:** 55.34 MB - **Total amount of disk used:** 1918.71 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [394], "text": ["بطولتين"] }, "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...", "id": "arabic-2387335860751143628-1", "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...", "title": "قائمة نهائيات كأس العالم" } ``` ### Data Fields The data fields are the same among all splits. #### primary_task - `passage_answer_candidates`: a dictionary feature containing: - `plaintext_start_byte`: a `int32` feature. - `plaintext_end_byte`: a `int32` feature. - `question_text`: a `string` feature. - `document_title`: a `string` feature. - `language`: a `string` feature. - `annotations`: a dictionary feature containing: - `passage_answer_candidate_index`: a `int32` feature. - `minimal_answers_start_byte`: a `int32` feature. 
- `minimal_answers_end_byte`: a `int32` feature. - `yes_no_answer`: a `string` feature. - `document_plaintext`: a `string` feature. - `document_url`: a `string` feature. #### secondary_task - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name | train | validation | | -------------- | -----: | ---------: | | primary_task | 166916 | 18670 | | secondary_task | 49881 | 5077 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year = {2020}, journal = {Transactions of the Association for Computational Linguistics} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
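The `secondary_task` fields shown above follow the SQuAD convention, where `answers.answer_start` is a character offset into `context` and `answers.text` holds the answer string. Assuming that convention, the answer span can be recovered as follows (toy data, not a real dataset row):

```python
def extract_answer(context: str, answer_start: int, text: str) -> str:
    """Recover the answer span using the character offset from `answers`."""
    return context[answer_start:answer_start + len(text)]

# Toy stand-ins for a secondary_task row's `context` and `answers` fields:
context = "The capital of Finland is Helsinki, founded in 1550."
start = context.index("Helsinki")  # plays the role of answers["answer_start"][0]
print(extract_answer(context, start, "Helsinki"))  # Helsinki
```

Note that the `primary_task` config uses byte offsets (`minimal_answers_start_byte`) rather than character offsets, so the same slicing would have to operate on the UTF-8 encoded document there.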
spacemanidol/cc-stories
2023-05-02T11:48:55.000Z
[ "region:us" ]
spacemanidol
CC-Stories (or STORIES) is a dataset for common sense reasoning and language modeling. It was constructed by aggregating documents from the CommonCrawl dataset that have the most overlapping n-grams with the questions in commonsense reasoning tasks. The top 1.0% of the highest-ranked documents are chosen as the new training corpus.
@article{Trinh2018ASM, title={A Simple Method for Commonsense Reasoning}, author={Trieu H. Trinh and Quoc V. Le}, journal={ArXiv}, year={2018}, volume={abs/1806.02847} }
null
6
3
This is a reproduction of the CC-Stories dataset, as it has been removed from its original source. To create this reproduction, we process the English Common Crawl and keep only the top 0.1% of documents, measured by their n-gram overlap with a source document. The source document is created by joining the queries from [PDP-60](https://cs.nyu.edu/~davise/papers/WinogradSchemas/PDPChallenge2016.xml) and [WSC273](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WSCollection.xml). Note that, as the original dataset does not mention removing duplicate queries, neither do we. After filtering to the top documents, we produce the dataset, which features 2,105,303 lines and 153,176,685 words.
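The filtering step described above can be sketched as ranking each document by the fraction of its n-grams that also appear in the joined query document. A toy version follows; the tokenization, n-gram size, and example texts are illustrative assumptions, not the exact original pipeline:

```python
def ngrams(text, n=3):
    """Set of word n-grams of a lowercased, whitespace-tokenized text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_score(doc, query_ngrams, n=3):
    """Fraction of the document's n-grams that also occur in the query document."""
    doc_ngrams = ngrams(doc, n)
    return len(doc_ngrams & query_ngrams) / len(doc_ngrams) if doc_ngrams else 0.0

# Toy corpus: rank documents by overlap with the joined Winograd-style queries.
query = "the trophy would not fit in the brown suitcase because it was too big"
docs = [
    "the trophy did not fit in the suitcase because it was too big to carry",
    "stock prices fell sharply on tuesday amid renewed inflation fears",
]
query_ngrams = ngrams(query)
ranked = sorted(docs, key=lambda d: overlap_score(d, query_ngrams), reverse=True)
keep = ranked[:max(1, int(0.001 * len(ranked)))]  # keep the top 0.1%
```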
hugginglearners/reddit-depression-cleaned
2022-08-18T04:03:19.000Z
[ "license:cc0-1.0", "region:us" ]
hugginglearners
null
null
null
1
3
--- license: - cc0-1.0 kaggle_id: infamouscoder/depression-reddit-cleaned --- # Dataset Card for Depression: Reddit Dataset (Cleaned) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/infamouscoder/depression-reddit-cleaned - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The raw data is collected through web scraping of subreddits and is cleaned using multiple NLP techniques. The data is in English only. It mainly targets mental health classification.
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@infamouscoder](https://kaggle.com/infamouscoder) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
hugginglearners/russia-ukraine-conflict-articles
2022-08-18T04:21:16.000Z
[ "license:cc-by-nc-sa-4.0", "region:us" ]
hugginglearners
null
null
null
0
3
--- license: - cc-by-nc-sa-4.0 kaggle_id: hskhawaja/russia-ukraine-conflict --- # Dataset Card for Russia Ukraine Conflict ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/hskhawaja/russia-ukraine-conflict - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Context On 24 February 2022, Russia invaded Ukraine in a major escalation of the Russo-Ukrainian War that began in 2014. The invasion caused Europe's largest refugee crisis since World War II, with more than 6.3 million Ukrainians fleeing the country and a third of the population displaced (*Source: Wikipedia*). ### Content This dataset is a collection of 407 news articles from the NYT and The Guardian related to the ongoing conflict between Russia and Ukraine. The publishing dates of the articles range from Feb 1st, 2022 to Jul 31st, 2022. ### What you can do
Here are some ideas to explore: - Discourse analysis of the Russia-Ukraine conflict (how has the war evolved over the months?) - Identify the most talked-about issues (refugees, food, weapons, fuel, etc.) - Extract the sentiment of articles towards both Russia and Ukraine - Which world leaders have tried to become mediators? - Number of supporting countries for both Russia and Ukraine - Map how the NATO alliance has been affected by the war I am looking forward to seeing your work and ideas and will keep adding more ideas to explore. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@hskhawaja](https://kaggle.com/hskhawaja) ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
hugginglearners/amazon-reviews-sentiment-analysis
2022-08-18T04:28:40.000Z
[ "license:cc-by-nc-sa-4.0", "region:us" ]
hugginglearners
null
null
null
0
3
--- license: - cc-by-nc-sa-4.0 kaggle_id: tarkkaanko/amazon --- # Dataset Card for amazon reviews for sentiment analysis ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/tarkkaanko/amazon - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary One of the most important problems in e-commerce is the correct calculation of the ratings given to products after sale. Solving this problem provides greater customer satisfaction for the e-commerce site, product prominence for sellers, and a seamless shopping experience for buyers. Another problem is the correct ordering of the comments given to the products. The prominence of misleading comments would cause both financial losses and customer losses. By solving these two basic problems, the e-commerce site and sellers will increase their sales, while customers will complete their purchasing journey without any problems.
This dataset is for ranking product ratings and sorting reviews on Amazon. Please review [this notebook](https://www.kaggle.com/code/tarkkaanko/rating-product-sorting-reviews-in-amazon) to see how this dataset was created. The dataset, which contains Amazon product data, includes product categories and various metadata. ---- ### What is expected of you? The product with the most comments in the electronics category has user ratings and comments. We expect you to perform sentiment analysis on these with your own methods. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@tarkkaanko](https://kaggle.com/tarkkaanko) ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
hugginglearners/twitter-dataset-tesla
2022-08-18T04:35:32.000Z
[ "license:cc0-1.0", "region:us" ]
hugginglearners
null
null
null
0
3
--- license: - cc0-1.0 kaggle_id: vishesh1412/twitter-dataset-tesla --- # Dataset Card for Twitter Dataset: Tesla ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/vishesh1412/twitter-dataset-tesla - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains all the Tweets with #Tesla or #tesla up to 12/07/2022 (dd-mm-yyyy). It can be used for sentiment analysis research purposes, for other NLP tasks, or just for fun. It contains 10,000 recent Tweets with the user ID, the hashtags used in the Tweets, and other important features.
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@vishesh1412](https://kaggle.com/vishesh1412) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
PlanTL-GOB-ES/WikiCAT_en
2022-11-18T11:50:47.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:automatically-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:en", "license:cc-by-sa-3.0", "region:us" ]
PlanTL-GOB-ES
null
null
null
0
3
--- YAML tags: annotations_creators: - automatically-generated language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual pretty_name: wikicat_en size_categories: - unknown source_datasets: [] task_categories: - text-classification task_ids: - multi-class-classification --- # WikiCAT_en (Text Classification) English dataset ## Dataset Description - **Paper:** - **Point of Contact:** carlos.rodriguez1@bsc.es **Repository** https://github.com/TeMU-BSC/WikiCAT ### Dataset Summary WikiCAT_en is an English corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 28921 article summaries from Wikipedia classified under 19 different categories. This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora. ### Supported Tasks and Leaderboards Text classification, Language Model ### Languages EN - English ## Dataset Structure ### Data Instances Two json files, one for each split. ### Data Fields We used a simple model with the article text and associated labels, without further metadata. #### Example: <pre> {"version": "1.1.0", "data": [ {'sentence': 'The IEEE Donald G. Fink Prize Paper Award was established in 1979 by the board of directors of the Institute of Electrical and Electronics Engineers (IEEE) in honor of Donald G. Fink. He was a past president of the Institute of Radio Engineers (IRE), and the first general manager and executive director of the IEEE. Recipients of this award received a certificate and an honorarium. The award was presented annually since 1981 and discontinued in 2016.', 'label': 'Engineering' }, . . .
] } </pre> #### Labels 'Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science', 'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology', 'Government', 'Mathematics', 'Military', 'Humanities', 'Music', 'Politics', 'History' ### Data Splits * hftrain_en.json: 20237 label-document pairs * hfeval_en.json: 8684 label-document pairs ## Dataset Creation ### Methodology Starting pages of the form "Category:" are chosen to represent the topics in each language. For each category, the main pages are extracted, as well as the subcategories and the individual pages under these first-level subcategories. For each page, the "summary" provided by Wikipedia is also extracted. ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The source data are Wikipedia page summaries and thematic categories. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? Automatic annotation ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset [N/A] ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es). For further information, send an email to (plantl-gob-es@bsc.es). This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx). ### Licensing information This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License. Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Contributions [N/A]
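A minimal sketch of reading the JSON layout shown above into (sentence, label-id) pairs, using the 19 labels listed; the sample record below is shortened for illustration:

```python
import json

# The 19 thematic labels listed in the card above.
labels = ['Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science',
          'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology',
          'Government', 'Mathematics', 'Military', 'Humanities', 'Music', 'Politics',
          'History']
label2id = {label: i for i, label in enumerate(sorted(labels))}

# A record shaped like the example above (summary text shortened).
raw = ('{"version": "1.1.0", "data": [{"sentence": '
       '"The IEEE Donald G. Fink Prize Paper Award was established in 1979.", '
       '"label": "Engineering"}]}')
corpus = json.loads(raw)

pairs = [(ex["sentence"], label2id[ex["label"]]) for ex in corpus["data"]]
```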
npc-engine/light-batch-summarize-dialogue
2022-08-20T18:18:10.000Z
[ "language:en", "license:mit", "region:us" ]
npc-engine
null
null
null
3
3
--- license: mit language: en --- # [Light dataset](https://parl.ai/projects/light/) prepared for zero-shot summarization. Dialogues are preprocessed into a form: ``` <Character name>: <character line> ... <Character name>: <character line> Summarize the document ```
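The preprocessing above can be reproduced with a small helper that joins speaker-tagged lines and appends the summarization instruction; the example turns are invented for illustration:

```python
def build_prompt(turns):
    """Join (speaker, line) pairs into the zero-shot summarization prompt shown above."""
    lines = [f"{speaker}: {line}" for speaker, line in turns]
    lines.append("Summarize the document")
    return "\n".join(lines)

prompt = build_prompt([
    ("Knight", "Halt, who goes there?"),
    ("Traveler", "Only a weary merchant, sir."),
])
print(prompt)
```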
tartuNLP/EstCOPA
2022-10-31T10:17:40.000Z
[ "task_categories:question-answering", "annotations_creators:expert-generated", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "multilinguality:translation", "size_categories:n<1K", "source_datasets:extended|xcopa", "language:et", "licens...
tartuNLP
null
null
null
1
3
--- annotations_creators: - expert-generated language: - et language_creators: - expert-generated - machine-generated license: - cc-by-4.0 multilinguality: - monolingual - translation pretty_name: EstCOPA size_categories: - n<1K source_datasets: - extended|xcopa tags: [] task_categories: - question-answering task_ids: [] --- # Dataset Card for EstCOPA ### Dataset Summary EstCOPA is an extended version of [XCOPA](https://huggingface.co/datasets/xcopa) that was created with the goal of further investigating the Estonian language understanding of large language models. EstCOPA provides two new versions of the train, eval and test sets in Estonian: first, a machine-translated (En->Et) version of the original English COPA ([Roemmele et al., 2011](http://commonsensereasoning.org/2011/papers/Roemmele.pdf)), and second, a manually post-edited version of the same machine-translated data. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - et ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information If you use the dataset in your work, please cite ``` @article{kuulmets_estcopa_2022, title={Estonian Language Understanding: a Case Study on the COPA Task}, volume={10}, DOI={https://doi.org/10.22364/bjmc.2022.10.3.19}, number={3}, journal={Baltic Journal of Modern Computing}, author={Kuulmets, Hele-Andra and Tättar, Andre and Fishel, Mark}, year={2022}, pages={470–480} } ``` ### Contributions Thanks to [@helehh](https://github.com/helehh) for adding this dataset.
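Since EstCOPA extends XCOPA, its instances presumably follow the COPA layout: a premise, two alternatives, a `cause`/`effect` question, and a binary label. A toy accuracy computation under that assumed schema (field names taken from XCOPA; the Estonian strings are invented for illustration):

```python
# Hypothetical COPA-style instances; field names assumed from the XCOPA schema.
examples = [
    {"premise": "Mees avas vihmavarju.",   # "The man opened his umbrella."
     "choice1": "Hakkas sadama.",          # "It started to rain."
     "choice2": "Paike paistis.",          # "The sun was shining."
     "question": "cause",
     "label": 0},                          # choice1 is the more plausible cause
]

def accuracy(predictions, examples):
    """Fraction of examples where the predicted alternative matches the gold label."""
    correct = sum(pred == ex["label"] for pred, ex in zip(predictions, examples))
    return correct / len(examples)

print(accuracy([0], examples))  # 1.0
```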
UKPLab/TexPrax
2023-01-11T14:40:21.000Z
[ "license:cc-by-nc-4.0", "arxiv:2208.07846", "region:us" ]
UKPLab
This dataset was collected in the [TexPrax](https://texprax.de/) project and contains named entities annotated by three researchers as well as annotated sentences (problem/P, cause/C, solution/S, and other/O).
@inproceedings{stangier-etal-2022-texprax, title = "{T}ex{P}rax: A Messaging Application for Ethical, Real-time Data Collection and Annotation", author = {Stangier, Lorenz and Lee, Ji-Ung and Wang, Yuxi and M{\"u}ller, Marvin and Frick, Nicholas and Metternich, Joachim and Gurevych, Iryna}, booktitle = "Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: System Demonstrations", month = nov, year = "2022", address = "Taipei, Taiwan", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.aacl-demo.2", pages = "9--16", }
null
0
3
--- license: cc-by-nc-4.0 --- # Dataset Card for TexPrax ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://texprax.de/** - **Repository: https://github.com/UKPLab/TexPrax** - **Paper: https://arxiv.org/abs/2208.07846** - **Leaderboard: n/a** - **Point of Contact: Ji-Ung Lee (http://www.ukp.tu-darmstadt.de/)** ### Dataset Summary This dataset contains dialogues collected from German factory workers at the _Center for industrial productivity_ ([CiP](https://www.prozesslernfabrik.de/)). The dialogues mostly concern issues workers encounter during their daily work, such as machines breaking down, missing material, etc. The dialogues are further expert-annotated on a sentence level (problem, cause, solution, other) for sentence classification and on a token level for named entity recognition using a BIO tagging scheme. Note that the dataset was collected in three rounds, each around one year apart.
Here, we provide the data split only into train and test, where the test data was collected in the last round (July 2022). Additionally, the data from the first round is split into two subdomains, industry 4.0 (industrie) and machining (zerspanung). The splits were made according to the respective groups of people working at different assembly lines in the factory. ### Supported Tasks and Leaderboards This dataset supports the following tasks: * Sentence classification * Named entity recognition (will be updated soon with the new indexing) * Dialog generation (so far not evaluated) ### Languages German ## Dataset Structure ### Data Instances On the sentence level, each instance consists of the dialog-id, turn-id, sentence-id, the sentence (raw), the label, the domain, and the subsplit. ``` {"185";"562";"993";"wie kriege ich die Dichtung raus?";"P";"n/a";"3"} ``` On the token level, each instance consists of a unique identifier, a list of tokens containing the whole dialog, the list of labels (BIO-tagged entities), and the subsplit. ``` {"178_0";"['Hi', 'wie', 'kriege', 'ich', 'die', 'Dichtung', 'raus', '?', 'in', 'der', 'Schublade', 'gibt', 'es', 'einen', 'Dichtungszieher']";"['O', 'O', 'O', 'O', 'O', 'B-PRE', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'O', 'O', 'B-PE']";"Batch 3"} ``` ### Data Fields Sentence level: * dialog-id: unique identifier for the dialog * turn-id: unique identifier for the turn * sentence-id: unique identifier for the sentence * sentence: the respective sentence * label: the label (_P_ for Problem, _C_ for Cause, _S_ for Solution, and _O_ for Other) * domain: the subdomain the data was collected from. Domains are industry, machining, or n/a (for batch 2 and batch 3). * subsplit: the respective subsplit of the data (see below) Token level: * id: the identifier * tokens: a list of tokens (i.e., the tokenized dialogue) * entities: the named entity in a BIO scheme (_B-X_, _I-X_, or O).
* subsplit: the respective subsplit of the data (see below) ### Data Splits The dataset is split into train and test splits, but contains further subsplits (subsplit column). Note that the splits were collected at different times, with some turnover in the workforce. Hence, later data (especially the data from batch 2) contains more turns (due to an increased search for a cause), as more inexperienced workers who had newly joined were employed in the factory. Train: * Batch 1 industrie: data collected in October 2020 from workers in the industry 4.0 assembly line * Batch 1 zerspanung: data collected in October 2020 from workers in the machining assembly line * Batch 2: data collected between October 2021 and June 2022 from all workers Test: * Batch 3: data collected in July 2022 together with the system usability study run Sentence level statistics: | Batch | Dialogues | Turns | Sentences | |---|---|---|---| | 1 | 81 | 246 | 553 | | 2 | 97 | 309 | 432 | | 3 | 24 | 36 | 42 | | Overall | 202 | 591 | 1,027 | Token level statistics: [Needs to be added] ## Dataset Creation ### Curation Rationale This dataset provides task-oriented dialogues that solve a very domain-specific problem. ### Source Data #### Initial Data Collection and Normalization The data was generated by workers at the [CiP](https://www.prozesslernfabrik.de/). The data was collected in three rounds (October 2020, October 2021-June 2022, July 2022). As the dialogues occurred during the workers' daily work, one distinct property of the dataset is that all dialogues are very informal ('ne'), contain abbreviations ('vll'), and include filler words such as 'ah'. For a detailed description please see the [paper](https://arxiv.org/abs/2208.07846). #### Who are the source language producers? German factory workers working at the [CiP](https://www.prozesslernfabrik.de/) ### Annotations #### Annotation process **Token level.** Token level annotation was done by researchers who are responsible for supervising and teaching workers at the CiP.
The data was first split into three parts, each annotated by one researcher. Next, each researcher cross-examined the other researchers' annotations. If there were disagreements, all three researchers discussed the final label. **Sentence level.** Sentence level annotations were collected from the factory workers who also generated the dialogues. For details about the data collection, please see the [TexPrax demo paper](https://arxiv.org/abs/2208.07846). #### Who are the annotators? **Token level.** Researchers working at the CiP. **Sentence level.** The factory workers themselves. ### Personal and Sensitive Information This dataset is fully anonymized. All occurrences of names have been manually checked during annotation and replaced with a random token. ## Considerations for Using the Data ### Social Impact of Dataset Informal language, especially as used in short messages, is seldom considered in existing NLP datasets. This dataset could serve as an interesting evaluation task for transferring language models to low-resource, but highly specific domains. Moreover, we note that despite all the abbreviations, typos, and local dialects used in the messages, all workers were able to understand the questions as well as the replies. This is a standard that future NLP models should be able to uphold. ### Discussion of Biases The dialogues are very much on a professional level. The workers were informed (and gave their consent) in advance that their messages were being recorded and processed, which may have influenced them to hold only professional conversations; hence, all dialogues concern inanimate objects (i.e., machines).
### Other Known Limitations [More Information Needed] ## Additional Information You can download the data via: ``` from datasets import load_dataset dataset = load_dataset("UKPLab/TexPrax") # default config is sentence classification dataset = load_dataset("UKPLab/TexPrax", "ner") # use the ner tag for named entity recognition ``` Please find more information about the code and how the data was collected on [GitHub](https://github.com/UKPLab/TexPrax). ### Dataset Curators Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP. ### Licensing Information [CC-by-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) ### Citation Information Please cite this data using: ``` @article{stangier2022texprax, title={TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation}, author={Stangier, Lorenz and Lee, Ji-Ung and Wang, Yuxi and M{\"u}ller, Marvin and Frick, Nicholas and Metternich, Joachim and Gurevych, Iryna}, journal={arXiv preprint arXiv:2208.07846}, year={2022} } ``` ### Contributions Thanks to [@Wuhn](https://github.com/Wuhn) for adding this dataset. ## Tags annotations_creators: - expert-generated language: - de language_creators: - expert-generated license: - cc-by-nc-4.0 multilinguality: - monolingual pretty_name: TexPrax-Conversations size_categories: - n<1K - 1K<n<10K source_datasets: - original tags: - dialog - expert to expert conversations - task-oriented task_categories: - token-classification - text-classification task_ids: - named-entity-recognition - multi-class-classification
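The BIO-tagged token-level instances shown earlier in this card can be decoded into entity spans with a short helper; this is a sketch, not part of the released TexPrax tooling:

```python
def bio_to_spans(tokens, tags):
    """Collect (label, tokens) entity spans from a BIO-tagged sequence."""
    spans, current = [], None  # current = (label, start index)
    for i, tag in enumerate(tags):
        if tag.startswith("I-") and current and tag[2:] == current[0]:
            continue  # still inside the current span
        if current:  # a B- tag, an O tag, or a mismatched I- tag closes the span
            spans.append((current[0], tokens[current[1]:i]))
            current = None
        if tag.startswith("B-"):
            current = (tag[2:], i)
    if current:  # flush a span that runs to the end of the sequence
        spans.append((current[0], tokens[current[1]:]))
    return spans

# The token-level example from the card above.
tokens = ['Hi', 'wie', 'kriege', 'ich', 'die', 'Dichtung', 'raus', '?', 'in', 'der',
          'Schublade', 'gibt', 'es', 'einen', 'Dichtungszieher']
tags = ['O', 'O', 'O', 'O', 'O', 'B-PRE', 'O', 'O', 'O', 'O',
        'B-LOC', 'O', 'O', 'O', 'B-PE']
print(bio_to_spans(tokens, tags))
```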
jonathanli/eurlex
2022-10-24T15:26:49.000Z
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "legal-topic-classification", "re...
jonathanli
EURLEX57K contains 57k legislative documents in English from the EUR-Lex portal, annotated with EUROVOC concepts.
@inproceedings{chalkidis-etal-2019-large, title = "Large-Scale Multi-Label Text Classification on {EU} Legislation", author = "Chalkidis, Ilias and Fergadiotis, Emmanouil and Malakasiotis, Prodromos and Androutsopoulos, Ion", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1636", doi = "10.18653/v1/P19-1636", pages = "6314--6322" }
null
0
3
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification paperswithcode_id: eurlex57k pretty_name: the EUR-Lex dataset tags: - legal-topic-classification --- # Dataset Card for the EUR-Lex dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/ - **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/ - **Paper:** https://www.aclweb.org/anthology/P19-1636/ - **Leaderboard:** N/A ### Dataset Summary EURLEX57K can be viewed as an improved version of the dataset released by Mencía and Fürnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old. 
EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains four major zones: - the header, which includes the title and name of the legal body enforcing the legal act; - the recitals, which are legal background references; and - the main body, usually organized in articles. **Labeling / Annotation** All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. ### Supported Tasks and Leaderboards The dataset supports: **Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts. **Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. ### Languages All documents are written in English. 
## Dataset Structure ### Data Instances ```json { "celex_id": "31979D0509", "title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain", "text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan 
must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,", "eurovoc_concepts": ["192", "2356", "2560", "862", "863"] } ``` ### Data Fields The following data fields are provided for documents (`train`, `dev`, `test`): `celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both EUR-Lex and CELLAR.\ `title`: (**str**) The title of the document.\ `text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\ `eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels). If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl ```python import json with open('./eurovoc_concepts.jsonl') as jsonl_file: eurovoc_concepts = [json.loads(line) for line in jsonl_file] ``` ### Data Splits | Split | No of Documents | Avg. words | Avg. labels | | ------------------- | ------------------------------------ | --- | --- | | Train | 45,000 | 729 | 5 | | Development | 6,000 | 714 | 5 | | Test | 6,000 | 725 | 5 | ## Dataset Creation ### Curation Rationale The dataset was curated by Chalkidis et al. (2019).\ The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en). ### Source Data #### Initial Data Collection and Normalization The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format. The documents were downloaded from the EUR-Lex portal in HTML format. 
The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql). #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process * The original documents are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents were split into sections. * The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en). #### Who are the annotators? Publications Office of EU (https://publications.europa.eu/en) ### Personal and Sensitive Information The dataset does not include personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Chalkidis et al. (2019) ### Licensing Information © European Union, 1998-2021 The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes. The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. 
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \ Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html ### Citation Information *Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.* *Large-Scale Multi-Label Text Classification on EU Legislation.* *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019* ``` @inproceedings{chalkidis-etal-2019-large, title = "Large-Scale Multi-Label Text Classification on {EU} Legislation", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1636", doi = "10.18653/v1/P19-1636", pages = "6314--6322" } ``` ### Contributions Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
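The frequent / few-shot / zero-shot partition described in the summary can be sketched as a small helper. This is only an illustrative sketch, not part of any official loader: the thresholds (more than 50, between 1 and 50, and 0 training documents) come from the card, while the toy label assignments below are invented, reusing concept IDs from the example instance.

```python
from collections import Counter

def partition_labels(train_label_lists, all_labels):
    """Partition EUROVOC labels by how many training documents they label."""
    counts = Counter(label for labels in train_label_lists for label in labels)
    frequent = {l for l in all_labels if counts[l] > 50}        # > 50 training docs
    few_shot = {l for l in all_labels if 1 <= counts[l] <= 50}  # between 1 and 50
    zero_shot = {l for l in all_labels if counts[l] == 0}       # unseen in training
    return frequent, few_shot, zero_shot

# Toy training-set label assignments: "192" and "2356" occur 60 times, "862" once,
# and "863" never occurs in training.
train = [["192", "2356"]] * 60 + [["862"]]
frequent, few_shot, zero_shot = partition_labels(train, {"192", "2356", "862", "863"})
```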
Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT-finetune
2022-08-29T08:31:45.000Z
[ "multilinguality:monolingual", "language:en", "license:mit", "region:us" ]
Lo
null
null
null
0
3
--- language: - en license: - mit multilinguality: - monolingual --- The LXMERT text fine-tuning data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the data made available by the [LXMERT repo](https://github.com/airsplay/lxmert).
mschi/blogspot_raw
2022-09-13T08:48:23.000Z
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_categories:text-generation", "task_categories:time-series-forecasting", "language_creators:other", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:mit", "...
mschi
null
null
null
0
3
--- annotations_creators: [] language: - en language_creators: - other license: - mit multilinguality: - monolingual pretty_name: Blogspot_raw_texts size_categories: - 1M<n<10M source_datasets: - original tags: - blogspot - blogger - texts task_categories: - text-classification - text-retrieval - text-generation - time-series-forecasting task_ids: [] --- # Dataset Card for blogspot raw dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is a corpus of raw blogposts from [blogspot](https://blogger.com) mostly in the English language. It was obtained by scraping corpora of [webarchive](https://archive.org) and [commoncrawl](https://commoncrawl.org). ### Supported Tasks and Leaderboards The dataset may be used for training language models or serve other research interests. 
### Languages Mostly English, but some outliers may occur. ## Dataset Structure [Distribution](https://huggingface.co/datasets/mschi/blogspot_raw/blob/main/blospot_comm_dist.png) The distribution of the blog posts over time can be viewed at ./blogspot_dist_comm.png ### Data Instances [More Information Needed] ### Data Fields text: string URL: string date: string comment: int ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale The dataset was constructed by utilizing the [WARC-dl pipeline](https://github.com/webis-de/web-archive-keras). It was executed on a cluster architecture. The corpora of archive.org and commoncrawl.org contain WARC files that contain HTML which gets parsed by the pipeline. The pipeline extracts HTML from the WARC files and applies distributed filtering to efficiently filter for the desired content. ### Source Data #### Initial Data Collection and Normalization The corpora "corpus-commoncrawl-main-2022-05" and "corpus-iwo-internet-archive-wide00001" have been searched for the content present in this dataset. Search terms have been inserted into the previously mentioned pipeline to filter URLs for "blogspot.com" and characteristic timestamp information contained in the URL (e.g. "/01/2007"). The HTML documents were parsed for specific tags to obtain the timestamps. Further, the data was labeled with the "comment" label if there were comment markers in the URL, which indicates whether the retrieved text comes from the main text of a blog post or from the comments section. The texts are stored raw and no further processing has been done. #### Who are the source language producers? Since [blogspot](https://blogger.com) provides a high-level framework allowing people everywhere in the world to set up and maintain a blog, the producers of the texts cannot be further specified. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information Texts are raw and unfiltered, thus personal and sensitive information, as well as explicit language, may be present in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The retrieval of the timestamps from the HTML documents was not 100% accurate, so a small proportion of wrong or nonsense timestamps may be present in the data. Also, we cannot guarantee the correctness of the timestamps or of the "comment" labels. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was constructed during the course "Big Data and Language Technologies" of the Text Mining and Retrieval Group, Department of Computer Science at the University of Leipzig. ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jonaskonig](https://github.com/jonaskonig), [@maschirmer](https://github.com/maschirmer) and [@1BlattPapier](https://github.com/1BlattPapier) for contributing.
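Given the data fields above, a quick way to separate post bodies from comment texts is to filter on the `comment` field. This is only a sketch: the sample records and URLs are invented, and the assumption that `comment == 1` marks a text from the comments section is ours.

```python
def split_posts_and_comments(records):
    """Split records into post texts and comment texts via the `comment` flag."""
    posts = [r["text"] for r in records if r["comment"] == 0]
    comments = [r["text"] for r in records if r["comment"] == 1]
    return posts, comments

# Invented sample records mirroring the fields text / URL / date / comment.
sample = [
    {"text": "First post", "URL": "https://example.blogspot.com/2007/01/a.html",
     "date": "01/2007", "comment": 0},
    {"text": "Nice post!", "URL": "https://example.blogspot.com/2007/01/a.html?sc=1",
     "date": "01/2007", "comment": 1},
]
posts, comments = split_posts_and_comments(sample)
```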
ShapeNet/shapenetcore-gltf
2023-09-20T15:03:13.000Z
[ "language:en", "license:other", "3D shapes", "region:us" ]
ShapeNet
null
null
null
0
3
--- language: - en pretty_name: ShapeNetCore tags: - 3D shapes license: other extra_gated_heading: Acknowledge license to accept the repository extra_gated_prompt: >- To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. After access approval, you (the "Researcher") receive permission to use the ShapeNet database (the "Database") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. 
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement. For access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affiliated with. Please actually fill out the fields (DO NOT put the word "Advisor" for PI/Advisor and the word "School" for "Affiliation", please specify the name of your advisor and the name of your school). extra_gated_fields: Name: text PI/Advisor: text Affiliation: text Purpose: text Country: text I agree to use this dataset for non-commercial use ONLY: checkbox --- This repository contains ShapeNetCore (v2) in [GLTF](https://en.wikipedia.org/wiki/GlTF) format, a subset of [ShapeNet](https://shapenet.org). ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/). If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions. If you use this data, please cite the main ShapeNet technical report. ``` @techreport{shapenet2015, title = {{ShapeNet: An Information-Rich 3D Model Repository}}, author = {Chang, Angel X. 
and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher}, number = {arXiv:1512.03012 [cs.GR]}, institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago}, year = {2015} } ``` For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetCore v2 in the title of your email.
clips/VaccinChatNL
2023-03-21T15:22:36.000Z
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:nl", "license:cc-by-4.0", "covid-19", "FAQ", "question-a...
clips
null
null
null
0
3
--- annotations_creators: - expert-generated language: - nl language_creators: - other license: - cc-by-4.0 multilinguality: - monolingual pretty_name: VaccinChatNL size_categories: - 1K<n<10K source_datasets: - original tags: - covid-19 - FAQ - question-answer pairs task_categories: - text-classification task_ids: - intent-classification --- # Dataset Card for VaccinChatNL ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) <!-- - [Curation Rationale](#curation-rationale) --> <!-- - [Source Data](#source-data) --> - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) <!-- - [Social Impact of Dataset](#social-impact-of-dataset) --> - [Discussion of Biases](#discussion-of-biases) <!-- - [Other Known Limitations](#other-known-limitations) --> - [Additional Information](#additional-information) <!-- - [Dataset Curators](#dataset-curators) --> <!-- - [Licensing Information](#licensing-information) --> - [Citation Information](#citation-information) <!-- - [Contributions](#contributions) --> ## Dataset Description <!-- - **Homepage:** - **Repository:** - **Paper:** [To be added] - **Leaderboard:** --> - **Point of Contact:** [Jeska Buhmann](mailto:jeska.buhmann@uantwerpen.be) ### Dataset Summary VaccinChatNL is a Flemish Dutch FAQ dataset on the topic of COVID-19 vaccinations in Flanders. It consists of 12,883 user questions divided over 181 answer labels, thus providing large groups of semantically equivalent paraphrases (a many-to-one mapping of user questions to answer labels). 
VaccinChatNL is the first Dutch many-to-one FAQ dataset of this size. ### Supported Tasks and Leaderboards - 'text-classification': the dataset can be used to train a classification model for Dutch frequently asked questions on the topic of COVID-19 vaccination in Flanders. ### Languages Dutch (Flemish): the BCP-47 code for Dutch as generally spoken in Flanders (Belgium) is nl-BE. ## Dataset Structure ### Data Instances For each instance, there is a string for the user question and a string for the label of the annotated answer. See the [CLiPS / VaccinChatNL dataset viewer](https://huggingface.co/datasets/clips/VaccinChatNL/viewer/clips--VaccinChatNL/train). ``` {"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?", "label": "faq_ask_bijsluiter"} ``` ### Data Fields - `sentence1`: a string containing the user question - `label`: a string containing the name of the intent (the answer class) ### Data Splits The VaccinChatNL dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the dataset. | Dataset Split | Number of Labeled User Questions in Split | | ------------- | ------------------------------------------ | | Train | 10,542 | | Validation | 1,171 | | Test | 1,170 | ## Dataset Creation <!-- ### Curation Rationale [More Information Needed] --> <!-- ### Source Data [Perhaps a link to vaccinchat.be and some of the website that were used for information] --> <!-- #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] --> ### Annotations #### Annotation process Annotation was an iterative semi-automatic process. Starting from a very limited dataset with approximately 50 question-answer pairs (_sentence1-label_ pairs) a text classification model was trained and implemented in a publicly available chatbot. When the chatbot was used, the predicted labels for the new questions were checked and corrected if necessary. 
In addition, new answers were added to the dataset. After each round of corrections, the model was retrained on the updated dataset. This iterative approach led to the final dataset containing 12,883 user questions divided over 181 answer labels. #### Who are the annotators? The VaccinChatNL data were annotated by members and students of [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/). All annotators have a background in Computational Linguistics. ### Personal and Sensitive Information The data are anonymized in the sense that a user question can never be traced back to a specific individual. ## Considerations for Using the Data <!-- ### Social Impact of Dataset [More Information Needed] --> ### Discussion of Biases This dataset contains real user questions, including a rather large portion (7%) of out-of-domain questions or remarks (_label: nlu_fallback_). This class of user questions consists of incomprehensible questions, but also jokes and insulting remarks. <!-- ### Other Known Limitations [Perhaps some information of % of exact overlap between train and test set] --> ## Additional Information <!-- ### Dataset Curators [More Information Needed] --> <!-- ### Licensing Information [More Information Needed] --> ### Citation Information ``` @inproceedings{buhmann-etal-2022-domain, title = "Domain- and Task-Adaptation for {V}accin{C}hat{NL}, a {D}utch {COVID}-19 {FAQ} Answering Corpus and Classification Model", author = "Buhmann, Jeska and De Bruyn, Maxime and Lotfi, Ehsan and Daelemans, Walter", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.312", pages = "3539--3549" } ``` <!-- ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. -->
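The many-to-one structure (several paraphrases per answer label) can be made explicit by grouping user questions under their label. A minimal sketch: the first example comes from the Data Instances section, while the second paraphrase is invented for illustration.

```python
from collections import defaultdict

def group_by_label(examples):
    """Collect all user questions (paraphrases) under their answer label."""
    groups = defaultdict(list)
    for ex in examples:
        groups[ex["label"]].append(ex["sentence1"])
    return dict(groups)

sample = [
    {"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?",
     "label": "faq_ask_bijsluiter"},
    {"sentence1": "Waar vind ik de bijsluiter van het vaccin?",  # invented paraphrase
     "label": "faq_ask_bijsluiter"},
]
groups = group_by_label(sample)
```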
mrm8488/sst2-es-mt
2022-09-03T16:41:42.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:sst2", "language:es", "license:unknown", "region:us" ]
mrm8488
null
null
null
0
3
--- language: - es license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - sst2 task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Stanford Sentiment Treebank v2 --- # SST-2 Spanish ## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [SST-2 Dataset](https://huggingface.co/datasets/sst2) #### For more information, check the official [Dataset Card](https://huggingface.co/datasets/sst2)
Luciano/lener_br_text_to_lm
2022-09-04T11:32:31.000Z
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:masked-language-modeling", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:pt", "region:us" ]
Luciano
null
null
null
0
3
--- annotations_creators: [] language: - pt language_creators: [] license: [] multilinguality: - monolingual pretty_name: 'The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/). The legal texts were obtained from the original token classification Hugging Face LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create a DatasetDict with train and validation dataset (20%). The LeNER-Br language modeling dataset allows the finetuning of language models as BERTimbau base and large.' size_categories: - 10K<n<100K source_datasets: [] tags: [] task_categories: - fill-mask - text-generation task_ids: - masked-language-modeling - language-modeling --- # Dataset Card for lener_br_text_to_lm ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset 
Summary The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/). The legal texts were obtained from the original token classification Hugging Face LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create a DatasetDict with train and validation datasets (20% validation). The LeNER-Br language modeling dataset allows the fine-tuning of language models such as BERTimbau base and large. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ``` DatasetDict({ train: Dataset({ features: ['text'], num_rows: 8316 }) test: Dataset({ features: ['text'], num_rows: 2079 }) }) ``` ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
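The 80/20 train/validation split described in the summary can be sketched with a small helper. The seed and shuffling procedure below are assumptions, not the original preprocessing; with 10,395 documents the sizes come out to the 8,316 / 2,079 rows shown in the DatasetDict structure above.

```python
import random

def train_validation_split(texts, validation_fraction=0.2, seed=42):
    """Shuffle texts and split off a validation portion (default 20%)."""
    rng = random.Random(seed)
    shuffled = list(texts)
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * validation_fraction)
    return shuffled[n_valid:], shuffled[:n_valid]

# Placeholder documents standing in for the LeNER-Br legal texts.
train_texts, valid_texts = train_validation_split(f"doc {i}" for i in range(10395))
```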
batterydata/battery-device-data-qa
2022-09-05T15:54:40.000Z
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "region:us" ]
batterydata
null
null
null
0
3
--- language: - en license: - apache-2.0 task_categories: - question-answering pretty_name: 'Battery Device Question Answering Dataset' --- # Battery Device QA Data Battery device records, including anode, cathode, and electrolyte. Examples of the question answering evaluation dataset: \{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645\} \{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673\} \{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284\} # Usage ``` from datasets import load_dataset dataset = load_dataset("batterydata/battery-device-data-qa") ``` # Citation ``` @article{huang2022batterybert, title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement}, author={Huang, Shu and Cole, Jacqueline M}, journal={J. Chem. Inf. Model.}, year={2022}, doi={10.1021/acs.jcim.2c00035}, url={DOI:10.1021/acs.jcim.2c00035}, pages={DOI: 10.1021/acs.jcim.2c00035}, publisher={ACS Publications} } ```
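The `start index` field locates the answer span inside `context`. A quick sketch of how such offsets can be recovered or validated — note the record below is shortened for illustration, so its offset differs from the full-document offsets shown above:

```python
def find_answer_span(context: str, answer: str) -> int:
    """Return the character offset of the answer inside the context (-1 if absent)."""
    return context.find(answer)

# Shortened, illustrative record (not the full document behind the examples above).
record = {
    "question": "What is the cathode?",
    "answer": "Al foil",
    "context": "The blended slurry was then cast onto a clean current collector "
               "(Al foil for the cathode and Cu foil for the anode).",
}
start = find_answer_span(record["context"], record["answer"])
# The span recovered from the offset must reproduce the answer string exactly.
assert record["context"][start:start + len(record["answer"])] == record["answer"]
print(start)  # 65
```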
poojaruhal/Code-comment-classification
2022-10-16T11:11:46.000Z
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:...
poojaruhal
null
null
null
2
3
--- annotations_creators: - expert-generated language: - en language_creators: - crowdsourced license: - cc-by-nc-sa-4.0 multilinguality: - monolingual pretty_name: 'Code-comment-classification ' size_categories: - 1K<n<10K source_datasets: - original tags: - '''source code comments''' - '''java class comments''' - '''python class comments''' - ''' smalltalk class comments''' task_categories: - text-classification task_ids: - intent-classification - multi-label-classification --- # Dataset Card for Code Comment Classification ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/poojaruhal/RP-class-comment-classification - **Repository:** https://github.com/poojaruhal/RP-class-comment-classification - **Paper:** https://doi.org/10.1016/j.jss.2021.111047 - **Point of Contact:** https://poojaruhal.github.io ### Dataset Summary The dataset contains class comments extracted from various big and diverse open-source projects of three programming languages Java, Smalltalk, and Python. 
### Supported Tasks and Leaderboards Single-label text classification and Multi-label text classification ### Languages Java, Python, Smalltalk ## Dataset Structure ### Data Instances ```json { "class" : "Absy.java", "comment":"* Azure Blob File System implementation of AbstractFileSystem. * This impl delegates to the old FileSystem", "summary":"Azure Blob File System implementation of AbstractFileSystem.", "expand":"This impl delegates to the old FileSystem", "rational":"", "deprecation":"", "usage":"", "exception":"", "todo":"", "incomplete":"", "commentedcode":"", "directive":"", "formatter":"", "license":"", "ownership":"", "pointer":"", "autogenerated":"", "noise":"", "warning":"", "recommendation":"", "precondition":"", "codingGuidelines":"", "extension":"", "subclassexplnation":"", "observation":"" } ``` ### Data Fields class: name of the class with the language extension. comment: class comment of the class. categories: the category the sentence is classified into. It indicates a particular type of information. ### Data Splits 10-fold cross validation ## Dataset Creation ### Curation Rationale To identify the information embedded in the class comments across various projects and programming languages. ### Source Data #### Initial Data Collection and Normalization It contains the dataset extracted from various open-source projects of three programming languages: Java, Smalltalk, and Python. - #### Java Each file contains all the extracted class comments from one project. We have a total of six Java projects. We chose a sample of 350 comments from all these files for our experiment. - [Eclipse.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/) - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. 
More detail about the project is available on GitHub [Eclipse](https://github.com/eclipse). - [Guava.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guava.csv) - Extracted class comments from the Guava project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guava](https://github.com/google/guava). - [Guice.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guice.csv) - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guice](https://github.com/google/guice). - [Hadoop.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Hadoop.csv) - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Hadoop](https://github.com/apache/hadoop) - [Spark.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Spark.csv) - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Spark](https://github.com/apache/spark) - [Vaadin.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Vaadin.csv) - Extracted class comments from the Vaadin project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Vaadin](https://github.com/vaadin/framework) - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Parser_Details.md) - Details of the parser used to parse class comments of Java [ Projects](https://doi.org/10.5281/zenodo.4311839) - #### Smalltalk/ Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment. - [GToolkit.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/GToolkit.csv) - Extracted class comments from the GToolkit project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Moose.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Moose.csv) - Extracted class comments from the Moose project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [PetitParser.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PetitParser.csv) - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Pillar.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Pillar.csv) - Extracted class comments from the Pillar project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. 
- [PolyMath.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PolyMath.csv) - Extracted class comments from the PolyMath project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Roassal2.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Roassal2.csv) - Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Seaside.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Seaside.csv) - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Parser_Details.md) - Details of the parser used to parse class comments of Pharo [ Projects](https://doi.org/10.5281/zenodo.4311839) - #### Python/ Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment. - [Django.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Django.csv) - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Django](https://github.com/django) - [IPython.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/IPython.csv) - Extracted class comments from the IPython project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [IPython](https://github.com/ipython/ipython) - [Mailpile.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Mailpile.csv) - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Mailpile](https://github.com/mailpile/Mailpile) - [Pandas.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pandas.csv) - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [pandas](https://github.com/pandas-dev/pandas) - [Pipenv.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pipenv.csv) - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Pipenv](https://github.com/pypa/pipenv) - [Pytorch.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pytorch.csv) - Extracted class comments from the PyTorch project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. 
More detail about the project is available on GitHub [PyTorch](https://github.com/pytorch/pytorch) - [Requests.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Requests.csv) - Extracted class comments from the Requests project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Requests](https://github.com/psf/requests/) - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Parser_Details.md) - Details of the parser used to parse class comments of Python [ Projects](https://doi.org/10.5281/zenodo.4311839) ### Annotations #### Annotation process Four evaluators (all authors of this paper (https://doi.org/10.1016/j.jss.2021.111047)), each having at least four years of programming experience, participated in the annotation process. We partitioned Java, Python, and Smalltalk comments equally among all evaluators based on the distribution of the language's dataset to ensure the inclusion of comments from all projects and diversified lengths. Each classification was reviewed by three evaluators. The details are given in the paper [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047) #### Who are the annotators? [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047) ### Personal and Sensitive Information Author information embedded in the text ## Additional Information ### Dataset Curators [Pooja Rani, Ivan, Manuel] ### Licensing Information [license: cc-by-nc-sa-4.0] ### Citation Information ``` @article{RANI2021111047, title = {How to identify class comment types? 
A multi-language approach for class comment classification}, journal = {Journal of Systems and Software}, volume = {181}, pages = {111047}, year = {2021}, issn = {0164-1212}, doi = {https://doi.org/10.1016/j.jss.2021.111047}, url = {https://www.sciencedirect.com/science/article/pii/S0164121221001448}, author = {Pooja Rani and Sebastiano Panichella and Manuel Leuenberger and Andrea {Di Sorbo} and Oscar Nierstrasz}, keywords = {Natural language processing technique, Code comment analysis, Software documentation} } ```
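Each row marks the categories that apply to a comment with non-empty values (see the JSON instance under Data Instances above); a small helper for turning such a row into a multi-label set — a sketch only, with field names taken from the sample instance:

```python
def extract_labels(record, non_label_fields=("class", "comment")):
    """Collect the category columns that carry a non-empty value."""
    return sorted(
        field for field, value in record.items()
        if field not in non_label_fields and value != ""
    )

# Abbreviated version of the sample instance (most empty category columns omitted).
row = {
    "class": "Absy.java",
    "comment": "* Azure Blob File System implementation of AbstractFileSystem. "
               "* This impl delegates to the old FileSystem",
    "summary": "Azure Blob File System implementation of AbstractFileSystem.",
    "expand": "This impl delegates to the old FileSystem",
    "rational": "",
    "deprecation": "",
    "todo": "",
}
print(extract_labels(row))  # ['expand', 'summary']
```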
CShorten/CORD19-Chunk-2
2022-09-09T14:58:11.000Z
[ "license:afl-3.0", "region:us" ]
CShorten
null
null
null
0
3
--- license: afl-3.0 ---
biu-nlp/Controlled-Text-Reduction-dataset
2022-10-25T13:25:49.000Z
[ "arxiv:2210.13449", "region:us" ]
biu-nlp
The dataset contains document-summary pairs with document spans (referred to as "highlights"), indicating the "pre-selected" spans that lead to the creation of the summary. The evaluation and test datasets were constructed via controlled crowdsourcing. The train datasets were automatically generated using the summary-source proposition-level alignment model SuperPAL (Ernst et al., 2021).
""" # _CITATION =
null
1
3
# Controlled Text Reduction This dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary. The task input consists of a document with pre-selected spans in it ("highlights"). The output is a text covering all and only the highlighted content. The script downloads the data from the original [GitHub repository](https://github.com/lovodkin93/Controlled_Text_Reduction). ### Format The dataset contains the following important features: * `doc_text` - the input text. * `summary_text` - the output text. * `highlight_spans` - the spans in the input text (the doc_text) that lead to the output text (the summary_text). ```json {'doc_text': 'The motion picture industry\'s most coveted award...with 32.', 'summary_text': 'The Oscar, created 60 years ago by MGM...awarded person (32).', 'highlight_spans':'[[0, 48], [50, 55], [57, 81], [184, 247], ..., [953, 975], [1033, 1081]]'} ``` where for each document-summary pair, we save the spans in the input document that lead to the summary. Notice that the dataset consists of two subsets: 1. `DUC-2001-2002` - which is further divided into 3 splits (train, validation and test). 2. `CNN-DM` - which has a single split. Citation ======== If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper: ``` @misc{https://doi.org/10.48550/arxiv.2210.13449, doi = {10.48550/ARXIV.2210.13449}, url = {https://arxiv.org/abs/2210.13449}, author = {Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Controlled Text Reduction}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Zero v1.0 Universal} } ```
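The `highlight_spans` string holds character offsets into `doc_text`, so recovering the highlighted content is a matter of slicing. A minimal sketch (the document and spans below are toy stand-ins, not the actual record above):

```python
import json

def extract_highlights(doc_text, highlight_spans):
    """Slice the pre-selected spans out of the input document."""
    spans = json.loads(highlight_spans) if isinstance(highlight_spans, str) else highlight_spans
    return [doc_text[start:end] for start, end in spans]

# Toy document and spans for illustration only.
doc = "The Oscar was created 60 years ago. It is the most coveted award."
spans = "[[0, 9], [36, 65]]"
print(extract_highlights(doc, spans))  # ['The Oscar', 'It is the most coveted award.']
```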
yuntian-deng/im2latex-animals-200k
2022-09-13T04:50:05.000Z
[ "region:us" ]
yuntian-deng
null
null
null
0
3
Entry not found
codesue/kelly
2022-12-18T22:06:55.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:sv", "license:cc-by-4.0", "lexicon", "swedish", "CEFR", "region:us" ]
codesue
The Swedish Kelly list is a freely available frequency-based vocabulary list that comprises general-purpose language of modern Swedish. The list was generated from a large web-acquired corpus (SweWaC) of 114 million words dating from the 2010s. It is adapted to the needs of language learners and contains the 8,425 most frequent lemmas that cover 80% of SweWaC.
@article{Kilgarriff2013, doi = {10.1007/s10579-013-9251-2}, url = {https://doi.org/10.1007/s10579-013-9251-2}, year = {2013}, month = sep, publisher = {Springer Science and Business Media {LLC}}, volume = {48}, number = {1}, pages = {121--163}, author = {Adam Kilgarriff and Frieda Charalabopoulou and Maria Gavrilidou and Janne Bondi Johannessen and Saussan Khalil and Sofie Johansson Kokkinakis and Robert Lew and Serge Sharoff and Ravikiran Vadlapudi and Elena Volodina}, title = {Corpus-based vocabulary lists for language learners for nine languages}, journal = {Language Resources and Evaluation} }
null
0
3
--- annotations_creators: - expert-generated language: - sv language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: kelly size_categories: - 1K<n<10K source_datasets: [] tags: - lexicon - swedish - CEFR task_categories: - text-classification task_ids: - text-scoring --- # Dataset Card for Kelly Keywords for Language Learning for Young and adults alike ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://spraakbanken.gu.se/en/resources/kelly - **Paper:** https://link.springer.com/article/10.1007/s10579-013-9251-2 ### Dataset Summary The Swedish Kelly list is a freely available frequency-based vocabulary list that comprises general-purpose language of modern Swedish. The list was generated from a large web-acquired corpus (SweWaC) of 114 million words dating from the 2010s. It is adapted to the needs of language learners and contains the 8,425 most frequent lemmas that cover 80% of SweWaC. ### Languages Swedish (sv-SE) ## Dataset Structure ### Data Instances Here is a sample of the data: ```python { 'id': 190, 'raw_frequency': 117835.0, 'relative_frequency': 1033.61, 'cefr_level': 'A1', 'source': 'SweWaC', 'marker': 'en', 'lemma': 'dag', 'pos': 'noun-en', 'examples': 'e.g. god dag' } ``` This can be understood as: > The common noun "dag" ("day") has a rank of 190 in the list. It was used 117,835 times in SweWaC, meaning it occurred 1033.61 times per million words. 
This word is among the most important vocabulary words for Swedish language learners and should be learned at the A1 CEFR level. An example usage of this word is the phrase "god dag" ("good day"). ### Data Fields - `id`: The row number for the data entry, starting at 1. Generally corresponds to the rank of the word. - `raw_frequency`: The raw frequency of the word. - `relative_frequency`: The relative frequency of the word measured in number of occurrences per million words. - `cefr_level`: The CEFR level (A1, A2, B1, B2, C1, C2) of the word. - `source`: Whether the word came from SweWaC, translation lists (T2), or was manually added (manual). - `marker`: The grammatical marker of the word, if any, such as an article or infinitive marker. - `lemma`: The lemma of the word, sometimes provided with its spelling or stylistic variants. - `pos`: The word's part-of-speech. - `examples`: Usage examples and comments. Only available for some of the words. Manual entries were prepended to the list, giving them a higher rank than they might otherwise have had. For example, the manual entry "Göteborg" ("Gothenburg") has a rank of 20, while the first non-manual entry "och" ("and") has a rank of 87. However, a conjunction and common stopword is far more likely to occur than the name of a city. ### Data Splits There is a single split, `train`. ## Dataset Creation Please refer to the article [Corpus-based approaches for the creation of a frequency based vocabulary list in the EU project KELLY – issues on reliability, validity and coverage](https://gup.ub.gu.se/publication/148533?lang=en) for information about how the original dataset was created and considerations for using the data. **The following changes have been made to the original dataset**: - Changed header names. - Normalized the large web-acquired corpus name to "SweWaC" in the `source` field. - Set the relative frequency of manual entries to null rather than 1000000. 
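The `relative_frequency` field is simply the raw count normalized to occurrences per million corpus words; a quick sanity check against the sample entry (using the rounded 114-million-word corpus size, so the result lands a hair off the published 1033.61):

```python
def relative_frequency(raw_count: float, corpus_size_words: int) -> float:
    """Occurrences per million words, as used in the Kelly list."""
    return raw_count / corpus_size_words * 1_000_000

# "dag": 117,835 raw occurrences in the ~114-million-word SweWaC corpus.
rel = relative_frequency(117_835, 114_000_000)
print(round(rel, 2))  # 1033.64 -- close to the published 1033.61
```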
## Additional Information ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0) ### Citation Information Please cite the authors if you use this dataset in your work: ```bibtex @article{Kilgarriff2013, doi = {10.1007/s10579-013-9251-2}, url = {https://doi.org/10.1007/s10579-013-9251-2}, year = {2013}, month = sep, publisher = {Springer Science and Business Media {LLC}}, volume = {48}, number = {1}, pages = {121--163}, author = {Adam Kilgarriff and Frieda Charalabopoulou and Maria Gavrilidou and Janne Bondi Johannessen and Saussan Khalil and Sofie Johansson Kokkinakis and Robert Lew and Serge Sharoff and Ravikiran Vadlapudi and Elena Volodina}, title = {Corpus-based vocabulary lists for language learners for nine languages}, journal = {Language Resources and Evaluation} } ``` ### Contributions Thanks to [@spraakbanken](https://github.com/spraakbanken) for creating this dataset and to [@codesue](https://github.com/codesue) for adding it.
PlanTL-GOB-ES/wnli-es
2022-11-18T12:03:25.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|glue", "language:es", "license:cc-by-4.0", "region:us" ]
PlanTL-GOB-ES
professional translation into Spanish of Winograd NLI dataset as published in GLUE Benchmark. The Winograd NLI dataset presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
ADD CITATION
null
2
3
--- annotations_creators: - expert-generated language_creators: - found language: - es license: - cc-by-4.0 multilinguality: - monolingual pretty_name: wnli-es size_categories: - unknown source_datasets: - extended|glue task_categories: - text-classification task_ids: - natural-language-inference --- # WNLI-es ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html - **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es) ### Dataset Summary "A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." 
Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html). The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0). This dataset is a professional translation into Spanish of the [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in the [GLUE Benchmark](https://gluebenchmark.com/tasks). Both the original dataset and this translation are licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ### Supported Tasks and Leaderboards Textual entailment, Text classification, Language Model. ### Languages * Spanish (es) ## Dataset Structure ### Data Instances Three tsv files. ### Data Fields - index - sentence 1: first sentence of the pair - sentence 2: second sentence of the pair - label: relation between the two sentences: * 0: the second sentence does not entail a correct interpretation of the first one (neutral) * 1: the second sentence entails a correct interpretation of the first one (entailment) ### Data Splits - wnli-train-es.csv: 636 sentence pairs - wnli-dev-es.csv: 72 sentence pairs - wnli-test-shuffled-es.csv: 147 sentence pairs ## Dataset Creation ### Curation Rationale We translated this dataset to contribute to the development of language models in Spanish. ### Source Data - [GLUE Benchmark site](https://gluebenchmark.com) #### Initial Data Collection and Normalization This is a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish, commissioned by [BSC TeMU](https://temu.bsc.es/) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx). 
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html). #### Who are the source language producers? For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html). ### Annotations #### Annotation process We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish. #### Who are the annotators? The translation was commissioned to a professional translation agency. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contributes to the development of language models in Spanish. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es). For further information, send an email to plantl-gob-es@bsc.es. ### Licensing Information This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License. Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Contributions [N/A]
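The tsv layout described under Data Fields can be read with the standard library alone; a minimal sketch (the column names follow the card's field list, and the two rows below are invented for illustration — they are not taken from the dataset):

```python
import csv
import io

# Hypothetical rows in the dataset's index / sentence1 / sentence2 / label layout.
tsv_data = (
    "index\tsentence1\tsentence2\tlabel\n"
    "0\tEl trofeo no cabe en la maleta porque es demasiado grande.\t"
    "El trofeo es demasiado grande.\t1\n"
    "1\tEl trofeo no cabe en la maleta porque es demasiado grande.\t"
    "La maleta es demasiado grande.\t0\n"
)

reader = csv.DictReader(io.StringIO(tsv_data), delimiter="\t")
pairs = [(row["sentence1"], row["sentence2"], int(row["label"])) for row in reader]
print(len(pairs), pairs[0][2])  # 2 1
```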
skytnt/fbanimehq
2022-10-23T14:02:23.000Z
[ "task_categories:unconditional-image-generation", "size_categories:100K<n<1M", "source_datasets:original", "license:cc0-1.0", "region:us" ]
skytnt
FBAnimeHQ is a dataset with high-quality full-body anime girl images in a resolution of 1024 × 512.
null
null
10
3
--- annotations_creators: [] language: [] language_creators: [] license: - cc0-1.0 multilinguality: [] pretty_name: Full Body Anime HQ size_categories: - 100K<n<1M source_datasets: - original tags: [] task_categories: - unconditional-image-generation task_ids: [] --- ## Dataset Description FBAnimeHQ is a dataset with high-quality full-body anime girl images in a resolution of 1024 × 512. ### Dataset Summary The dataset contains 112,806 images. All images are on a white background. ### Collection Method #### v1.0 Collect from the Danbooru website. Use YOLOv5 to detect and clip images. Use anime-segmentation to remove backgrounds. Use DeepDanbooru to filter images. Finally clean the dataset manually. #### v2.0 Based on v1.0, use NovelAI image-to-image to enhance and expand the dataset. ### Contributions Thanks to [@SkyTNT](https://github.com/SkyTNT) for adding this dataset.
open-source-metrics/document-question-answering-checkpoint-downloads
2022-10-06T19:32:29.000Z
[ "region:us" ]
open-source-metrics
null
null
null
0
3
Entry not found
thesofakillers/SemCor
2022-10-12T08:46:28.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:other", "word sense disambiguation"...
thesofakillers
null
null
null
2
3
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - other multilinguality: - monolingual pretty_name: SemCor size_categories: - 100K<n<1M source_datasets: - original tags: - word sense disambiguation - semcor - wordnet task_categories: - text-classification task_ids: - topic-classification --- # Dataset Card for SemCor ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://web.eecs.umich.edu/~mihalcea/downloads.html#semcor - **Repository:** - **Paper:** https://aclanthology.org/H93-1061/ - **Leaderboard:** - **Point of Contact:** ### Dataset Summary SemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to WordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton University. Some (few) word senses from WordNet 1.6 were dropped, and therefore they cannot be retrieved anymore in the 3.0 database. 
A sense of 0 (wnsn=0) is used to symbolize a missing sense in WordNet 3.0. The automatic mapping was performed within the Language and Information Technologies lab at UNT, by Rada Mihalcea (rada@cs.unt.edu). THIS MAPPING IS PROVIDED "AS IS" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. In agreement with the license from Princeton University, permission to use, copy, modify and distribute this database for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the Princeton copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the database, including modifications that you make for internal use or for distribution. Both LICENSE and README files distributed with the SemCor 1.6 package are included in the current distribution of SemCor 3.0. ### Languages English ## Additional Information ### Licensing Information WordNet Release 1.6 Semantic Concordance Release 1.6 This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions: Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution. WordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS. The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same. ### Citation Information ```bibtex @inproceedings{miller-etal-1993-semantic, title = "A Semantic Concordance", author = "Miller, George A. and Leacock, Claudia and Tengi, Randee and Bunker, Ross T.", booktitle = "{H}uman {L}anguage {T}echnology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993", year = "1993", url = "https://aclanthology.org/H93-1061", } ``` ### Contributions Thanks to [@thesofakillers](https://github.com/thesofakillers) for adding this dataset and converting it from XML to CSV.
m1guelpf/nouns
2022-09-25T06:18:40.000Z
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc0-1.0", "region:us" ]
m1guelpf
null
null
null
7
3
--- license: cc0-1.0 annotations_creators: - machine-generated language: - en language_creators: - other multilinguality: - monolingual pretty_name: 'Nouns auto-captioned' size_categories: - 10K<n<100K tags: [] task_categories: - text-to-image task_ids: [] --- # Dataset Card for Nouns auto-captioned _Dataset used to train Nouns text to image model_ Automatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated! For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided. ## Citation If you use this dataset, please cite it as: ``` @misc{piedrafita2022nouns, author = {Piedrafita, Miguel}, title = {Nouns auto-captioned}, year={2022}, howpublished= {\url{https://huggingface.co/datasets/m1guelpf/nouns/}} } ```
mathemakitten/glue-suite
2022-10-18T15:56:01.000Z
[ "region:us" ]
mathemakitten
null
null
null
0
3
Entry not found
datnth1709/VLSP2016-NER-data
2022-09-27T08:53:25.000Z
[ "region:us" ]
datnth1709
null
null
null
0
3
Entry not found
winfried/gnn_bvp_solver
2022-09-27T16:52:13.000Z
[ "license:mit", "arxiv:2206.14092", "region:us" ]
winfried
null
null
null
0
3
--- license: mit --- Dataset for paper: Learning the Solution Operator of Boundary Value Problems using Graph Neural Networks https://arxiv.org/abs/2206.14092
nielsr/markuplm-toy-dataset
2022-09-30T09:09:43.000Z
[ "region:us" ]
nielsr
null
null
null
0
3
Entry not found
vesteinn/FC3
2023-03-23T15:51:34.000Z
[ "language:fo", "license:cc", "region:us" ]
vesteinn
null
null
null
1
3
--- license: cc language: - fo pretty_name: FC3 --- This is the Faroese Common Crawl corpus. The largest dataset of mono-lingual Faroese text, it was extracted from the Common Crawl. If you find this dataset useful, please cite ``` @inproceedings{snaebjarnarson-etal-2023-transfer, title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese", author = "Snæbjarnarson, Vésteinn and Simonsen, Annika and Glavaš, Goran and Vulić, Ivan", booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)", month = "may 22--24", year = "2023", address = "Tórshavn, Faroe Islands", publisher = {Link{\"o}ping University Electronic Press, Sweden}, } ```
pking/SMG-NFT
2022-10-04T19:31:50.000Z
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
pking
null
null
null
0
3
--- license: cc-by-nc-sa-4.0 annotations_creators: - machine-generated language: - en language_creators: - other multilinguality: - monolingual pretty_name: 'SMG-NFT' size_categories: - n<1K source_datasets: [] tags: [] task_categories: - text-to-image task_ids: [] --- # Dataset Card for SMG-NFT ## Examples ## Citation
shamanez/RAG-end2end
2022-10-01T00:22:06.000Z
[ "region:us" ]
shamanez
null
null
null
0
3
Entry not found
Harsit/xnli2.0_arabic
2022-10-05T05:31:10.000Z
[ "region:us" ]
Harsit
null
null
null
0
3
Entry not found
venetis/twitter_us_airlines_kaggle
2022-10-06T18:28:56.000Z
[ "license:afl-3.0", "region:us" ]
venetis
null
null
null
0
3
--- license: afl-3.0 --- Dataset link: https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment?sort=most-comments
BUDDI-AI/BUDDI-Table-Factory
2022-10-10T08:14:05.000Z
[ "license:apache-2.0", "region:us" ]
BUDDI-AI
null
null
null
0
3
--- license: apache-2.0 --- ***About*** We release the BTF1K dataset, which contains 1,000 synthetically generated documents with table and cell annotations. The dataset was generated using BUDDI Table Factory.
Sachinkelenjaguri/Resume_dataset
2022-10-06T12:04:31.000Z
[ "region:us" ]
Sachinkelenjaguri
null
null
null
2
3
Entry not found
meliascosta/wiki_academic_subjects
2022-12-05T20:16:02.000Z
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-3.0", "hierarchical", "acade...
meliascosta
null
null
null
4
3
--- license: cc-by-3.0 annotations_creators: - crowdsourced language: - en language_creators: - crowdsourced multilinguality: - monolingual paperswithcode_id: wikitext-2 pretty_name: Wikipedia Outline of Academic Disciplines size_categories: - 10K<n<100K source_datasets: - original tags: - hierarchical - academic - tree - dag - topics - subjects task_categories: - text-classification task_ids: - multi-label-classification --- # Dataset Card for Wiki Academic Disciplines` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset was created from the [English wikipedia](https://meta.wikimedia.org/wiki/Data_dump_torrents#English_Wikipedia) dump of January 2022. The main goal was to train a hierarchical classifier of academic subjects using [HiAGM](https://github.com/Alibaba-NLP/HiAGM). 
### Supported Tasks and Leaderboards Text classification - no leaderboard at the moment. ### Languages English ## Dataset Structure The dataset consists of groups of labeled text chunks (tokenized by spaces and with stopwords removed). Labels are organized in a hierarchy (a DAG with a special Root node) of academic subjects. Nodes correspond to entries in the [outline of academic disciplines](https://en.wikipedia.org/wiki/Outline_of_academic_disciplines) article from Wikipedia. ### Data Instances Data is split into train/test/val, each in a separate `.jsonl` file. The label hierarchy is listed as a TAB-separated adjacency list in a `.taxonomy` file. ### Data Fields JSONL files contain only two fields: a "token" field which holds the text tokens and a "label" field which holds a list of labels for that text. ### Data Splits 80/10/10 TRAIN/TEST/VAL schema ## Dataset Creation All texts were extracted by following the linked articles in [outline of academic disciplines](https://en.wikipedia.org/wiki/Outline_of_academic_disciplines) ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Wiki Dump #### Who are the source language producers? Wikipedia community. ### Annotations #### Annotation process Texts were automatically assigned to their linked academic discipline #### Who are the annotators? Wikipedia Community. ### Personal and Sensitive Information All information is public. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons 3.0 (see [Wikipedia:Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights)) ### Citation Information 1. Zhou, Jie, et al. "Hierarchy-aware global model for hierarchical text classification." 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020. ### Contributions Thanks to [@meliascosta](https://github.com/meliascosta) for adding this dataset.
frankier/multiscale_rotten_tomatoes_critic_reviews
2022-11-04T12:09:34.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-scoring", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:cc0-1.0", "reviews", "ratings", "ordinal", "text", "region:us" ]
frankier
null
null
null
0
3
--- language: - en language_creators: - found license: cc0-1.0 multilinguality: - monolingual size_categories: - 100K<n<1M tags: - reviews - ratings - ordinal - text task_categories: - text-classification task_ids: - text-scoring - sentiment-scoring --- A cleaned-up version of the Rotten Tomatoes critic reviews dataset. The original is obtained from Kaggle: https://www.kaggle.com/datasets/stefanoleone992/rotten-tomatoes-movies-and-critic-reviews-dataset Data has been scraped from the publicly available website https://www.rottentomatoes.com as of 2020-10-31. The clean-up process drops anything without both a review and a rating, and standardises the ratings onto several integer ordinal scales. Loading requires the `kaggle` library to be installed, with Kaggle API keys passed through environment variables or in ~/.kaggle/kaggle.json. See [the Kaggle docs](https://www.kaggle.com/docs/api#authentication). A processed version is available at https://huggingface.co/datasets/frankier/processed_multiscale_rt_critics
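The card does not spell out the standardisation rule, so the following is only a rough sketch of what mapping a fractional critic rating onto an integer ordinal scale could look like; the `"3/5"`-style input format and the 0–4 target scale are assumptions, not the dataset's documented scales:

```python
from fractions import Fraction

def standardise_rating(raw: str, scale_max: int = 4) -> int:
    """Map a fractional critic rating like '3/5' onto a 0..scale_max ordinal scale."""
    value = Fraction(raw.replace(" ", ""))  # e.g. '3/5' -> Fraction(3, 5)
    # Normalise to [0, 1] and round onto the integer target scale.
    return round(float(value) * scale_max)

# e.g. standardise_rating("3/5") == 2, standardise_rating("4/4") == 4
```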
argilla/sentiment-banking
2022-10-07T13:22:00.000Z
[ "region:us" ]
argilla
null
null
null
0
3
Entry not found
alkzar90/rock-glacier-dataset
2022-12-19T02:36:59.000Z
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:human-curator", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
alkzar90
TODO: Add a description...
@ONLINE {rock-glacier-dataset, author="CMM-Glaciares", title="Rock Glacier Dataset", month="October", year="2022", url="https://github.com/alcazar90/rock-glacier-detection" }
null
2
3
--- annotations_creators: - human-curator language: - en license: - mit pretty_name: RockGlacier size_categories: - 1K<n<10K source_datasets: - original task_categories: - image-classification task_ids: - multi-class-image-classification --- # Dataset Card for Rock Glacier Detection ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [RockGlacier Homepage](https://github.com/alcazar90/rock-glacier-detection) - **Repository:** [alcazar90/rock-glacier-detection](https://github.com/alcazar90/rock-glacier-detection) - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** N/A ### Dataset Summary ![](https://huggingface.co/datasets/alkzar90/rock-glacier-dataset/resolve/main/assets/rock-glacier-portrait2.png) Rock Glacier Detection dataset with satellite images of rock glaciers in the Chilean Andes. 
### Supported Tasks and Leaderboards - `image-classification`: Based on satellite images (from Sentinel-2), the goal of this task is to predict whether there is a rock glacier in the geographic area. - `image-segmentation`: ... ### Languages Spanish ## Dataset Structure ### Data Instances A sample from the image-classification training set is provided below: ``` df = load_dataset("alkzar90/rock-glacier-dataset", name="image-classification") df["train"][666] > {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=128x128 at 0x7FB2EC58C6D0>, 'labels': 0, 'path': 'train/cordillera/1512.png' } ``` A sample from the image-segmentation training set is provided below: ``` df = load_dataset("alkzar90/rock-glacier-dataset", name="image-segmentation") df["train"][666] > {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=128x128 at 0x7FB2EB7C1160>, 'masks': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=128x128 at 0x7FB2EC5A08E0>, 'path': 'train/cordillera/1512.png'} ``` ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `labels`: an `int` classification label. Class Label Mappings: ```json { "cordillera": 0, "glaciar": 1 } ``` ### Data Splits | |train|validation| test| |-------------|----:|---------:|-----:| |# of examples|7875 |1125 |2700 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @ONLINE {rock-glacier-dataset, author="CMM - Glaciares (UChile)", title="Rock Glacier Dataset", month="October", year="2022", url="https://github.com/alcazar90/rock-glacier-detection" } ``` ### Contributions Thanks to...
eliolio/docvqa
2022-10-11T21:10:16.000Z
[ "task_ids:document-question-answering", "language:en", "arxiv:2007.00398", "region:us" ]
eliolio
null
null
null
0
3
--- language: - en paperswithcode_id: docvqa pretty_name: DocVQA - A Dataset for VQA on Document Images task_ids: - document-question-answering --- # DocVQA: A Dataset for VQA on Document Images The DocVQA dataset can be downloaded from the [challenge page](https://rrc.cvc.uab.es/?ch=17) in the RRC portal ("Downloads" tab). ## Dataset Structure DocVQA comprises 50,000 questions framed on 12,767 images. The data is split randomly in an 80−10−10 ratio into train, validation and test splits. - Train split: 39,463 questions and 10,194 images - Validation split: 5,349 questions and 1,286 images - Test split: 5,188 questions and 1,287 images ## Resources and Additional Information - More information can be found on the [challenge page](https://rrc.cvc.uab.es/?ch=17) and in the [DocVQA paper](https://arxiv.org/abs/2007.00398). - Document images are taken from the [UCSF Industry Documents Library](https://www.industrydocuments.ucsf.edu/). The collection consists of a mix of printed, typewritten and handwritten content. A wide variety of document types appears in this dataset, including letters, memos, notes, reports etc. ## Citation Information ``` @InProceedings{mathew2021docvqa, author = {Mathew, Minesh and Karatzas, Dimosthenis and Jawahar, CV}, title = {Docvqa: A dataset for vqa on document images}, booktitle = {Proceedings of the IEEE/CVF winter conference on applications of computer vision}, year = {2021}, pages = {2200--2209}, } ```
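As a quick sanity check on the split figures above: the question counts sum to exactly 50,000, and the proportions land close to the stated 80−10−10 ratio:

```python
# Question counts per split, as stated in the card.
splits = {"train": 39463, "validation": 5349, "test": 5188}
total = sum(splits.values())
assert total == 50000

for name, n in splits.items():
    print(f"{name}: {n / total:.1%}")
# train: 78.9%, validation: 10.7%, test: 10.4%
```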
Thamognya/ALotNLI
2022-10-13T12:58:20.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:snli", "source_datasets:multi_nli", "source_datasets:anli", "language:en", "...
Thamognya
null
null
null
1
3
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - agpl-3.0 multilinguality: - monolingual pretty_name: A Lot of NLI size_categories: - 100K<n<1M source_datasets: - snli - multi_nli - anli task_categories: - text-classification task_ids: - natural-language-inference viewer: true --- # Repo Github Repo: [thamognya/TBertNLI](https://github.com/thamognya/TBertNLI), specifically in the [src/data directory](https://github.com/thamognya/TBertNLI/tree/master/src/data). # Sample ``` premise hypothesis label 0 this church choir sings to the masses as they ... the church is filled with song 0 1 this church choir sings to the masses as they ... a choir singing at a baseball game 2 2 a woman with a green headscarf blue shirt and ... the woman is young 1 3 a woman with a green headscarf blue shirt and ... the woman is very happy 0 4 a woman with a green headscarf blue shirt and ... the woman has been shot 2 ``` # Datasets Origin As of now, the marked datasets have been used to make this dataset; the other ones are TODO: - [x] SNLI - [x] MultiNLI - SuperGLUE - FEVER - WIKI-FACTCHECK - [x] ANLI - more from huggingface # Reasons Just for finetuning of NLI models; purely made for NLI (not zero-shot classification)
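The integer labels in the sample appear to follow the usual SNLI/MultiNLI convention; the mapping below is inferred from the examples above rather than documented by the dataset:

```python
# Inferred label mapping: 0 = entailment, 1 = neutral, 2 = contradiction.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

sample = {
    "premise": "a woman with a green headscarf blue shirt and a very big grin",
    "hypothesis": "the woman is very happy",
    "label": 0,
}
print(LABELS[sample["label"]])  # entailment
```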
joey234/nan-nli
2022-10-13T23:18:18.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "negation", "regi...
joey234
null
null
null
0
3
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: nan-nli size_categories: - n<1K source_datasets: - original tags: - negation task_categories: - text-classification task_ids: - natural-language-inference --- # Dataset Card for nan-nli ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards Natural language inference, text classification ### Languages en ## Dataset Structure ### Data Instances ### Data Fields premise: hypothesis: label: ### Data Splits Evaluation: 258 samples ## Dataset Creation ### Curation Rationale Extracting samples corresponding to different linguistic constructions of negation. ### Source Data Geoffrey K. Pullum and Rodney Huddleston. 2002. 
Negation, chapter 9. Cambridge University Press. #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The annotators are the authors of the papers, one of whom holds a graduate degree in linguistics. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@joey234](https://github.com/joey234) for adding this dataset.
jamescalam/channel-metadata
2022-10-26T01:05:55.000Z
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:afl-3.0", "youtube", "video", "video metadata", "tech", "science and tech", "region:us"...
jamescalam
null
null
null
1
3
--- annotations_creators: - no-annotation language: - en language_creators: - found license: - afl-3.0 multilinguality: - monolingual pretty_name: Tech Channels Metadata size_categories: - 10K<n<100K source_datasets: - original tags: - youtube - video - video metadata - tech - science and tech task_categories: - other task_ids: [] --- Dataset containing video metadata from a few tech channels, i.e. * [James Briggs](https://youtube.com/c/JamesBriggs) * [Yannic Kilcher](https://www.youtube.com/c/YannicKilcher) * [sentdex](https://www.youtube.com/c/sentdex) * [Daniel Bourke](https://www.youtube.com/channel/UCr8O8l5cCX85Oem1d18EezQ) * [AI Coffee Break with Letitia](https://www.youtube.com/c/AICoffeeBreak) * [Alex Ziskind](https://youtube.com/channel/UCajiMK_CY9icRhLepS8_3ug)
w0lfandbehem0th/test-images
2022-10-17T03:46:04.000Z
[ "license:apache-2.0", "region:us" ]
w0lfandbehem0th
null
null
null
0
3
--- license: apache-2.0 ---
cjvt/sloie
2022-10-21T07:36:18.000Z
[ "task_categories:text-classification", "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "language:sl", "license:cc-by-nc-sa-4.0", "idiom-detection", ...
cjvt
SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29,400 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the Slovene Lexical Database (http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.
@article{skvorc2022mice, title = {MICE: Mining Idioms with Contextual Embeddings}, journal = {Knowledge-Based Systems}, volume = {235}, pages = {107606}, year = {2022}, issn = {0950-7051}, doi = {https://doi.org/10.1016/j.knosys.2021.107606}, url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686}, author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko}, }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - sl license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K - 100K<n<1M source_datasets: [] task_categories: - text-classification - token-classification task_ids: [] pretty_name: Dataset of Slovene idiomatic expressions SloIE tags: - idiom-detection - multiword-expression-detection --- # Dataset Card for SloIE ### Dataset Summary SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29,399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the [Slovene Lexical Database](http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus. For a more detailed description of the dataset, please see Škvorc et al. (2022), cited below. ### Supported Tasks and Leaderboards Idiom detection. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ```json { 'sentence': 'Fantje regljajo v enem kotu, deklice pa svoje obrazke barvajo s pisanimi barvami.', 'expression': 'barvati kaj s črnimi barvami', 'word_order': [11, 10, 12, 13, 14], 'sentence_words': ['Fantje', 'regljajo', 'v', 'enem', 'kotu,', 'deklice', 'pa', 'svoje', 'obrazke', 'barvajo', 's', 'pisanimi', 'barvami.'], 'is_idiom': ['*', '*', '*', '*', '*', '*', '*', '*', 'NE', 'NE', 'NE', 'NE', 'NE'] } ``` In this `sentence`, the words of the expression "barvati kaj s črnimi barvami" are used in a literal sense, as indicated by the "NE" annotations inside `is_idiom`. The "*" annotations indicate the words are not part of the expression. 
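Following the annotation scheme above, the expression tokens of a sample can be recovered by filtering on `is_idiom` (a minimal sketch using the literal-meaning sample shown):

```python
sample = {
    "sentence_words": ["Fantje", "regljajo", "v", "enem", "kotu,", "deklice", "pa",
                       "svoje", "obrazke", "barvajo", "s", "pisanimi", "barvami."],
    "is_idiom": ["*", "*", "*", "*", "*", "*", "*", "*",
                 "NE", "NE", "NE", "NE", "NE"],
}

# Keep only tokens annotated as part of the expression ("DA"/"NE"/"NEJASEN ZGLED").
expression_tokens = [w for w, tag in zip(sample["sentence_words"], sample["is_idiom"])
                     if tag != "*"]
# A sample is a literal use when every expression token is tagged "NE".
is_literal = all(tag == "NE" for tag in sample["is_idiom"] if tag != "*")
print(expression_tokens, is_literal)
# ['obrazke', 'barvajo', 's', 'pisanimi', 'barvami.'] True
```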
### Data Fields - `sentence`: raw sentence in string form - **WARNING**: this is at times slightly different from the words inside `sentence_words` (e.g., "..." here could be "." in `sentence_words`); - `expression`: the annotated idiomatic expression; - `word_order`: numbers indicating the positions of tokens that belong to the expression; - `sentence_words`: words in the sentence; - `is_idiom`: a string denoting whether each word has an idiomatic (`"DA"`), literal (`"NE"`), or ambiguous (`"NEJASEN ZGLED"`) meaning. `"*"` means that the word is not part of the expression. ## Additional Information ### Dataset Curators Tadej Škvorc, Polona Gantar, Marko Robnik-Šikonja. ### Licensing Information CC BY-NC-SA 4.0. ### Citation Information ``` @article{skvorc2022mice, title = {MICE: Mining Idioms with Contextual Embeddings}, journal = {Knowledge-Based Systems}, volume = {235}, pages = {107606}, year = {2022}, doi = {https://doi.org/10.1016/j.knosys.2021.107606}, url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686}, author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko}, } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
arielazzi/common-voice-pt-wav
2022-10-17T21:16:33.000Z
[ "region:us" ]
arielazzi
null
null
null
0
3
Entry not found
GrainsPolito/BBBicycles
2022-10-20T11:14:59.000Z
[ "license:cc-by-nc-4.0", "region:us" ]
GrainsPolito
null
null
null
0
3
--- license: cc-by-nc-4.0 --- # Dataset Card for BBBicycles ## Dataset Summary The Bent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of **damaged object re-identification**, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview [here](https://huggingface.co/spaces/GrainsPolito/BBBicyclesPreview). ## Dataset Structure The final dataset contains: - A total of 39,200 images - 2,800 unique IDs - 20 models - 140 IDs for each model <table border-collapse="collapse"> <tr> <td><b style="font-size:25px">Information for each ID:</b></td> <td><b style="font-size:25px">Information for each render:</b></td> </tr> <tr> <td> <ul> <li>Model</li> <li>Type</li> <li>Texture type</li> <li>Stickers</li> </ul> </td> <td> <ul> <li>Background</li> <li>Viewing Side</li> <li>Focal Length</li> <li>Presence of dirt</li> </ul> </td> </tr> </table> ### Citation Information ``` @inproceedings{bbb_2022, title={Bent & Broken Bicycles: Leveraging synthetic data for damaged object re-identification}, author={Luca Piano and Filippo Gabriele Pratticò and Alessandro Sebastian Russo and Lorenzo Lanari and Lia Morra and Fabrizio Lamberti}, booktitle={2022 IEEE Winter Conference on Applications of Computer Vision (WACV)}, year={2022}, organization={IEEE} } ``` ### Credits The authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni.
ConvLab/crosswoz
2022-11-25T09:01:44.000Z
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:zh", "license:apache-2.0", "region:us" ]
ConvLab
null
null
null
1
3
--- language: - zh license: - apache-2.0 multilinguality: - monolingual pretty_name: CrossWOZ size_categories: - 1K<n<10K task_categories: - conversational --- # Dataset Card for CrossWOZ - **Repository:** https://github.com/thu-coai/CrossWOZ - **Paper:** https://aclanthology.org/2020.tacl-1.19/ - **Leaderboard:** None - **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com) To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via: ``` from convlab.util import load_dataset, load_ontology, load_database dataset = load_dataset('crosswoz') ontology = load_ontology('crosswoz') database = load_database('crosswoz') ``` For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets). ### Dataset Summary CrossWOZ is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and system sides. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will make it easier for researchers to compare and evaluate their models on this corpus. - **How to get the transformed data from original data:** - Run `python preprocess.py` in the current directory. Need `../../crosswoz/` as the original data. - **Main changes of the transformation:** - Add simple description for domains, slots, and intents. - Switch intent&domain of `General` dialog acts => domain == 'General' and intent in ['thank','bye','greet','welcome'] - Binary dialog acts include: 1) domain == 'General'; 2) intent in ['NoOffer', 'Request', 'Select']; 3) slot in ['酒店设施'] - Categorical dialog acts include: slot in ['酒店类型', '车型', '车牌'] - Non-categorical dialogue acts: others. 
assert intent in ['Inform', 'Recommend'] and slot != 'none' and value != 'none' - Transform original user goal to list of `{domain: {'inform': {slot: [value, mentioned/not mentioned]}, 'request': {slot: [value, mentioned/not mentioned]}}}`, stored as `user_state` of user turns. - Transform `sys_state_init` (first API call of system turns) without `selectedResults` as belief state in user turns. - Transform `sys_state` (last API call of system turns) to `db_query` with domain states that contain non-empty `selectedResults`. The `selectedResults` are saved as `db_results` (only contain entity name). Both stored in system turns. - **Annotations:** - user goal, user state, dialogue acts, state, db query, db results. - Multiple values in state are separated by spaces, meaning all constraints should be satisfied. ### Supported Tasks and Leaderboards NLU, DST, Policy, NLG, E2E, User simulator ### Languages Chinese ### Data Splits | split | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) | |------------|-------------|--------------|-----------|--------------|---------------|-------------------------|------------------------|--------------------------------|-----------------------------------| | train | 5012 | 84674 | 16.89 | 20.55 | 3.02 | 99.67 | - | 100 | 94.39 | | validation | 500 | 8458 | 16.92 | 20.53 | 3.04 | 99.62 | - | 100 | 94.36 | | test | 500 | 8476 | 16.95 | 20.51 | 3.08 | 99.61 | - | 100 | 94.85 | | all | 6012 | 101608 | 16.9 | 20.54 | 3.03 | 99.66 | - | 100 | 94.43 | 6 domains: ['景点', '餐馆', '酒店', '地铁', '出租', 'General'] - **cat slot match**: how many values of categorical slots are in the possible values of ontology in percentage. - **non-cat slot span**: how many values of non-categorical slots have span annotation in percentage. 
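As a rough illustration of the `user_state` layout described above — `{domain: {'inform': {slot: [value, mentioned/not mentioned]}, 'request': {slot: [value, mentioned/not mentioned]}}}` — here is a hypothetical sketch; the domain, slot, and value strings are made up for the example and are not taken from the data:

```python
# Hypothetical user_state following the schema described in this card:
# {domain: {'inform': {slot: [value, mentioned_flag]},
#           'request': {slot: [value, mentioned_flag]}}}
user_state = {
    "酒店": {  # hotel domain
        "inform": {"价格": ["便宜", True],        # price constraint, already mentioned
                   "酒店类型": ["高档型", False]},  # hotel type, not yet mentioned
        "request": {"电话": ["", False]},          # phone number still to be requested
    },
}


def pending_constraints(state):
    """List (domain, slot, value) inform constraints not yet mentioned."""
    pending = []
    for domain, parts in state.items():
        for slot, (value, mentioned) in parts.get("inform", {}).items():
            if not mentioned:
                pending.append((domain, slot, value))
    return pending


remaining = pending_constraints(user_state)
# remaining -> the hotel-type constraint that the simulated user still has to convey
```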
### Citation ``` @article{zhu2020crosswoz, author = {Qi Zhu and Kaili Huang and Zheng Zhang and Xiaoyan Zhu and Minlie Huang}, title = {Cross{WOZ}: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset}, journal = {Transactions of the Association for Computational Linguistics}, year = {2020} } ``` ### Licensing Information Apache License, Version 2.0
Poupou/Gitcoin-ODS-Hackhaton-GR15
2022-10-30T14:56:15.000Z
[ "task_categories:feature-extraction", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:mit", "Gitcoin", "Gitcoin Grants", "Sybil", "Sybil Slayers", "FDD",...
Poupou
null
null
null
0
3
--- annotations_creators: - no-annotation language: - en language_creators: - expert-generated license: - mit multilinguality: - monolingual pretty_name: Gitcoin FDD Open Data Science Hackathon GR15 size_categories: - 1M<n<10M source_datasets: - original tags: - Gitcoin - Gitcoin Grants - Sybil - Sybil Slayers - FDD - Web3 - Public Goods - Fraud Detection - DAO - Ethereum - Polygon task_categories: - feature-extraction task_ids: [] --- # Dataset Card for [Gitcoin ODS Hackathon GR15] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://gitcoin.co/issue/29389 - **Repository:** https://github.com/poupou-web3/GC-ODS-Sybil - **Point of Contact:** https://discord.com/channels/562828676480237578/1024788324826763284 ### Dataset Summary This data set was created in the context of the first [Gitcoin Open Data Science Hackathon](https://go.gitcoin.co/blog/open-data-science-hackathon). It contains all the transactions on the Ethereum and Polygon chains of the wallet that contributed to the Grant 15 of Gitcoin grants program. It was created in order to find patterns in the transactions of potential Sybil attackers by exploring their on-chain activity. ## Dataset Creation ### Source Data The wallet address from grant 15 was extracted from the data put together by the Gitcoin DAO. 
[GR_15_DATA](https://drive.google.com/drive/folders/17OdrV7SA0I56aDMwqxB6jMwoY3tjSf5w) The data was produced using the [Etherscan API](https://etherscan.io/) and the [PolygonScan API](https://polygonscan.com/), with scripts available later at the [repo](https://github.com/poupou-web3/GC-ODS-Sybil). An address contributing to [GR_15_DATA](https://drive.google.com/drive/folders/17OdrV7SA0I56aDMwqxB6jMwoY3tjSf5w) with no transactions found on a chain will not appear in the gathered data. **Careful: the transaction data only contains "normal" transactions, as described by the API provider.** ## Dataset Structure ### Data Instances There are 4 CSV files: - 2 for transactions: one for the Ethereum transactions and one for the Polygon transactions. - 2 for features: one for Ethereum and one for Polygon. ### Data Fields As provided by the [Etherscan API](https://etherscan.io/) and [PolygonScan API](https://polygonscan.com/). A column `address` was added for easier manipulation and to have all the transactions of all addresses in the same file. This is an unsupervised machine-learning task; there is no target column. Most of the features were extracted using [tsfresh](https://tsfresh.readthedocs.io/en/latest/). The code is available in the GitHub [repo](https://github.com/poupou-web3/GC-ODS-Sybil); it allows reproducing the extraction from the two transaction CSVs. Columns are named by tsfresh, and each feature can be found in its documentation for a more detailed definition. Following are the descriptions of features not explained by tsfresh: - countUniqueInteracted: the number of unique addresses with which the wallet address has interacted. 
- countTx: The total number of transactions - ratioUniqueInteracted : countUniqueInteracted / countTx - outgoing: Number of outgoing transactions - outgoingRatio : outgoing / countTx ## Considerations for Using the Data ### Social Impact of Dataset The creation of the data set may help in fraud detection and defence in public goods funding. ## Additional Information ### Licensing Information MIT ### Citation Information Please cite this data set if you use it, especially in the hackathon context. ### Contributions Thanks to [@poupou-web3](https://github.com/poupou-web3) for adding this dataset.
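The hand-crafted features listed above can be derived directly from a per-address table of "normal" transactions. A minimal sketch, assuming a `from`/`to` column layout as returned by the Etherscan/PolygonScan APIs; the toy transactions and the helper `address_features` are made up for illustration:

```python
# Toy list of "normal" transactions for one wallet (addresses are made up).
txs = [
    {"from": "0xabc", "to": "0xdef"},
    {"from": "0xdef", "to": "0xabc"},
    {"from": "0xabc", "to": "0x123"},
    {"from": "0x123", "to": "0xabc"},
]


def address_features(txs, address):
    """Compute the simple per-address features described on this card."""
    count_tx = len(txs)
    # Unique counterparties: everyone appearing on either side, minus the wallet itself.
    counterparties = {t["from"] for t in txs} | {t["to"] for t in txs}
    counterparties.discard(address)
    # Outgoing transactions are those the wallet itself sent.
    outgoing = sum(1 for t in txs if t["from"] == address)
    return {
        "countTx": count_tx,
        "countUniqueInteracted": len(counterparties),
        "ratioUniqueInteracted": len(counterparties) / count_tx,
        "outgoing": outgoing,
        "outgoingRatio": outgoing / count_tx,
    }


feats = address_features(txs, "0xabc")
# feats -> {"countTx": 4, "countUniqueInteracted": 2, "ratioUniqueInteracted": 0.5,
#           "outgoing": 2, "outgoingRatio": 0.5}
```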
matejklemen/vuamc
2022-10-26T08:50:42.000Z
[ "task_categories:text-classification", "task_categories:token-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "language:en", "licens...
matejklemen
The resource contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor. There are four registers, each comprising about 50,000 words: academic texts, news texts, fiction, and conversations. Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made between clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of metaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor.
@book{steen2010method, title={A method for linguistic metaphor identification: From MIP to MIPVU}, author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje}, volume={14}, year={2010}, publisher={John Benjamins Publishing} }
null
0
3
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - other multilinguality: - monolingual pretty_name: VUA Metaphor Corpus size_categories: - 10K<n<100K - 100K<n<1M source_datasets: [] tags: - metaphor-classification - multiword-expression-detection - vua20 - vua18 - mipvu task_categories: - text-classification - token-classification task_ids: - multi-class-classification --- # Dataset Card for VUA Metaphor Corpus **Important note#1**: This is a slightly simplified but mostly complete parse of the corpus. What is missing are lemmas and some metadata that was not important at the time of writing the parser. See the section `Simplifications` for more information on this. **Important note#2**: The dataset contains metadata - to ignore it and correctly remap the annotations, see the section `Discarding metadata`. ### Dataset Summary VUA Metaphor Corpus (VUAMC) contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor. There are four registers, each comprising about 50 000 words: academic texts, news texts, fiction, and conversations. Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made between clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of metaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor. ### Supported Tasks and Leaderboards Metaphor detection, metaphor type classification. ### Languages English. 
## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'document_name': 'kcv-fragment42', 'words': ['', 'I', 'think', 'we', 'should', 'have', 'different', 'holidays', '.'], 'pos_tags': ['N/A', 'PNP', 'VVB', 'PNP', 'VM0', 'VHI', 'AJ0', 'NN2', 'PUN'], 'met_type': [ {'type': 'mrw/met', 'word_indices': [5]} ], 'meta': ['vocal/laugh', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A'] } ``` ### Data Fields The instances are ordered as they appear in the corpus. - `document_name`: a string containing the name of the document in which the sentence appears; - `words`: words in the sentence (`""` when the word represents metadata); - `pos_tags`: POS tags of the words, encoded using the BNC basic tagset (`"N/A"` when the word does not have an associated POS tag); - `met_type`: metaphors in the sentence, marked by their type and word indices; - `meta`: selected metadata tags providing additional context to the sentence. Metadata may not correspond to a specific word. In this case, the metadata is represented with an empty string (`""`) in `words` and a `"N/A"` tag in `pos_tags`. ## Dataset Creation For detailed information on the corpus, please check out the references in the `Citation Information` section or contact the dataset authors. ## Simplifications The raw corpus is equipped with rich metadata and encoded in the TEI XML format. The textual part is fully parsed except for the lemmas, i.e. all the sentences in the raw corpus are present in the dataset. 
However, parsing the metadata fully is unnecessarily tedious, so certain simplifications were made: - paragraph information is not preserved as the dataset is parsed at sentence level; - manual corrections (`<corr>`) of incorrectly written words are ignored, and the original, incorrect form of the words is used instead; - `<ptr>` and `<anchor>` tags are ignored as I cannot figure out what they represent; - the attributes `rendition` (in `<hi>` tags) and `new` (in `<shift>` tags) are not exposed. ## Discarding metadata The dataset contains rich metadata, which is stored in the `meta` attribute. To keep data aligned, empty words or `"N/A"`s are inserted into the other attributes. If you want to ignore the metadata and correct the metaphor type annotations, you can use code similar to the following snippet: ```python3 data = datasets.load_dataset("matejklemen/vuamc")["train"] data = data.to_pandas() for idx_ex in range(data.shape[0]): curr_ex = data.iloc[idx_ex] idx_remap = {} for idx_word, word in enumerate(curr_ex["words"]): if len(word) != 0: idx_remap[idx_word] = len(idx_remap) # Note that lists are stored as np arrays by datasets, while we are storing new data in a list! # (unhandled for simplicity) words, pos_tags, met_type = curr_ex[["words", "pos_tags", "met_type"]].tolist() if len(idx_remap) != len(curr_ex["words"]): words = list(filter(lambda _word: len(_word) > 0, curr_ex["words"])) pos_tags = list(filter(lambda _pos: _pos != "N/A", curr_ex["pos_tags"])) met_type = [] for met_info in curr_ex["met_type"]: met_type.append({ "type": met_info["type"], "word_indices": list(map(lambda _i: idx_remap[_i], met_info["word_indices"])) }) ``` ## Additional Information ### Dataset Curators Gerard Steen; et al. (please see http://hdl.handle.net/20.500.12024/2541 for the full list). 
### Licensing Information Available for non-commercial use on condition that the terms of the [BNC Licence](http://www.natcorp.ox.ac.uk/docs/licence.html) are observed and that this header is included in its entirety with any copy distributed. ### Citation Information ``` @book{steen2010method, title={A method for linguistic metaphor identification: From MIP to MIPVU}, author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje}, volume={14}, year={2010}, publisher={John Benjamins Publishing} } ``` ``` @inproceedings{leong-etal-2020-report, title = "A Report on the 2020 {VUA} and {TOEFL} Metaphor Detection Shared Task", author = "Leong, Chee Wee (Ben) and Beigman Klebanov, Beata and Hamill, Chris and Stemle, Egon and Ubale, Rutuja and Chen, Xianyang", booktitle = "Proceedings of the Second Workshop on Figurative Language Processing", year = "2020", url = "https://aclanthology.org/2020.figlang-1.3", doi = "10.18653/v1/2020.figlang-1.3", pages = "18--29" } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
svjack/pokemon-blip-captions-en-zh
2022-10-31T06:23:03.000Z
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:huggan/few-shot-pokemon", "language:en", "language:zh", "license:cc-by-nc-sa-4.0", "region:us" ]
svjack
null
null
null
9
3
--- license: cc-by-nc-sa-4.0 annotations_creators: - machine-generated language: - en - zh language_creators: - other multilinguality: - multilingual pretty_name: 'Pokémon BLIP captions' size_categories: - n<1K source_datasets: - huggan/few-shot-pokemon tags: [] task_categories: - text-to-image task_ids: [] --- # Dataset Card for Pokémon BLIP captions in English and Chinese Dataset used to train Pokémon text-to-image models; it adds a Chinese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). The captions were generated with BLIP for Pokémon images from the Few Shot Pokémon dataset introduced in Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). The original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model. Each row contains `image`, `en_text` (caption in English), and `zh_text` (caption in Chinese) keys. `image` is a varying-size PIL JPEG, and the text fields are the accompanying captions. Only a train split is provided. The Chinese captions were translated with [DeepL](https://www.deepl.com/translator).
darrow-ai/USClassActionOutcomes_ExpertsAnnotations
2022-11-06T12:35:30.000Z
[ "license:gpl-3.0", "arxiv:2211.00582", "region:us" ]
darrow-ai
null
null
null
0
3
--- license: gpl-3.0 --- ## Dataset Description - **Homepage:** https://www.darrow.ai/ - **Repository:** https://github.com/darrow-labs/ClassActionPrediction - **Paper:** https://arxiv.org/abs/2211.00582 - **Leaderboard:** N/A - **Point of Contact:** [Gila Hayat](mailto:gila@darrow.ai) ### Dataset Summary USClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool. ### Data Instances ```python from datasets import load_dataset dataset = load_dataset('darrow-ai/USClassActionOutcomes_ExpertsAnnotations') ``` ### Data Fields `id`: (**int**) a unique identifier of the document \ `origin_label`: (**str**) the outcome of the case \ `target_text`: (**str**) the facts of the case \ `annotator_prediction`: (**str**) the annotator's prediction of the case outcome based on the `target_text` \ `annotator_confidence`: (**str**) the annotator's level of confidence in their outcome prediction ### Curation Rationale The dataset was curated by Darrow.ai (2022). ### Citation Information *Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus* *ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US* *Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022* ``` @InProceedings{darrow-niklaus-2022-uscp, author = {Semo, Gil and Bernsohn, Dor and Hagag, Ben and Hayat, Gila and Niklaus, Joel}, title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US}, booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop}, year = {2022}, location = {Abu Dhabi}, } ```
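With `origin_label` and `annotator_prediction` both available, a natural first check is how often the expert annotators' predicted outcomes match the real ones. A hypothetical sketch — the four rows below are made up, not taken from the dataset:

```python
# Toy rows mimicking the card's fields (Win/Lose outcomes are binarized as described).
rows = [
    {"origin_label": "Win",  "annotator_prediction": "Win"},
    {"origin_label": "Lose", "annotator_prediction": "Win"},
    {"origin_label": "Lose", "annotator_prediction": "Lose"},
    {"origin_label": "Win",  "annotator_prediction": "Win"},
]

# Fraction of cases where the annotator predicted the actual outcome.
accuracy = sum(r["origin_label"] == r["annotator_prediction"] for r in rows) / len(rows)
# accuracy -> 0.75 for this toy sample
```

The same loop applies unchanged to the real dataset once it is loaded with `load_dataset` as shown above.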
arbml/Arabic_Stories_Corpus
2022-10-25T23:28:10.000Z
[ "region:us" ]
arbml
null
null
null
0
3
Entry not found
quincyqiang/test
2022-10-27T08:17:23.000Z
[ "task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:other", "language_creators:other", "multilinguality:monol...
quincyqiang
null
null
null
0
3
--- annotations_creators: - other language_creators: - other language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - acceptability-classification - natural-language-inference - semantic-similarity-scoring - sentiment-classification - text-scoring paperswithcode_id: glue pretty_name: GLUE (General Language Understanding Evaluation benchmark) train-eval-index: - config: cola task: text-classification task_id: binary_classification splits: train_split: train eval_split: validation col_mapping: sentence: text label: target - config: sst2 task: text-classification task_id: binary_classification splits: train_split: train eval_split: validation col_mapping: sentence: text label: target - config: mrpc task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: sentence1: text1 sentence2: text2 label: target - config: qqp task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: question1: text1 question2: text2 label: target - config: stsb task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: sentence1: text1 sentence2: text2 label: target - config: mnli task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation_matched col_mapping: premise: text1 hypothesis: text2 label: target - config: mnli_mismatched task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: premise: text1 hypothesis: text2 label: target - config: mnli_matched task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: premise: text1 hypothesis: text2 label: target - config: qnli task: 
text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: question: text1 sentence: text2 label: target - config: rte task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: sentence1: text1 sentence2: text2 label: target - config: wnli task: text-classification task_id: natural_language_inference splits: train_split: train eval_split: validation col_mapping: sentence1: text1 sentence2: text2 label: target configs: - ax - cola - mnli - mnli_matched - mnli_mismatched - mrpc - qnli - qqp - rte - sst2 - stsb - wnli tags: - qa-nli - coreference-nli - paraphrase-identification --- # Dataset Card for GLUE ## Table of Contents - [Dataset Card for GLUE](#dataset-card-for-glue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [ax](#ax) - [cola](#cola) - [mnli](#mnli) - [mnli_matched](#mnli_matched) - [mnli_mismatched](#mnli_mismatched) - [mrpc](#mrpc) - [qnli](#qnli) - [qqp](#qqp) - [rte](#rte) - [sst2](#sst2) - [stsb](#stsb) - [wnli](#wnli) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [ax](#ax-1) - [cola](#cola-1) - [mnli](#mnli-1) - [mnli_matched](#mnli_matched-1) - [mnli_mismatched](#mnli_mismatched-1) - [mrpc](#mrpc-1) - [qnli](#qnli-1) - [qqp](#qqp-1) - [rte](#rte-1) - [sst2](#sst2-1) - [stsb](#stsb-1) - [wnli](#wnli-1) - [Data Fields](#data-fields) - [ax](#ax-2) - [cola](#cola-2) - [mnli](#mnli-2) - [mnli_matched](#mnli_matched-2) - [mnli_mismatched](#mnli_mismatched-2) - [mrpc](#mrpc-2) - [qnli](#qnli-2) - [qqp](#qqp-2) - [rte](#rte-2) - [sst2](#sst2-2) - [stsb](#stsb-2) - [wnli](#wnli-2) - [Data Splits](#data-splits) - [ax](#ax-3) - [cola](#cola-3) - [mnli](#mnli-3) - [mnli_matched](#mnli_matched-3) - 
[mnli_mismatched](#mnli_mismatched-3) - [mrpc](#mrpc-3) - [qnli](#qnli-3) - [qqp](#qqp-3) - [rte](#rte-3) - [sst2](#sst2-3) - [stsb](#stsb-3) - [wnli](#wnli-3) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 955.33 MB - **Size of the generated dataset:** 229.68 MB - **Total amount of disk used:** 1185.01 MB ### Dataset Summary GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural 
language understanding systems. ### Supported Tasks and Leaderboards The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks: #### ax A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset. #### cola The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence. #### mnli The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. #### mnli_matched The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information. #### mnli_mismatched The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information. 
#### mrpc The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. #### qnli The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. #### qqp The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. #### rte The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency. #### sst2 The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. 
The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels. #### stsb The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5. #### wnli The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI). 
### Languages The language data in GLUE is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances #### ax - **Size of downloaded dataset files:** 0.21 MB - **Size of the generated dataset:** 0.23 MB - **Total amount of disk used:** 0.44 MB An example of 'test' looks as follows. ``` { "premise": "The cat sat on the mat.", "hypothesis": "The cat did not sit on the mat.", "label": -1, "idx": 0 } ``` #### cola - **Size of downloaded dataset files:** 0.36 MB - **Size of the generated dataset:** 0.58 MB - **Total amount of disk used:** 0.94 MB An example of 'train' looks as follows. ``` { "sentence": "Our friends won't buy this analysis, let alone the next one we propose.", "label": 1, "idx": 0 } ``` #### mnli - **Size of downloaded dataset files:** 298.29 MB - **Size of the generated dataset:** 78.65 MB - **Total amount of disk used:** 376.95 MB An example of 'train' looks as follows. ``` { "premise": "Conceptually cream skimming has two basic dimensions - product and geography.", "hypothesis": "Product and geography are what make cream skimming work.", "label": 1, "idx": 0 } ``` #### mnli_matched - **Size of downloaded dataset files:** 298.29 MB - **Size of the generated dataset:** 3.52 MB - **Total amount of disk used:** 301.82 MB An example of 'test' looks as follows. ``` { "premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.", "hypothesis": "Hierbas is a name worth looking out for.", "label": -1, "idx": 0 } ``` #### mnli_mismatched - **Size of downloaded dataset files:** 298.29 MB - **Size of the generated dataset:** 3.73 MB - **Total amount of disk used:** 302.02 MB An example of 'test' looks as follows. 
``` { "premise": "What have you decided, what are you going to do?", "hypothesis": "So what's your decision?", "label": -1, "idx": 0 } ``` #### mrpc [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qqp [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### rte [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sst2 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### stsb [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### wnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields The data fields are the same among all splits. #### ax - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### cola - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1). - `idx`: a `int32` feature. #### mnli - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### mnli_matched - `premise`: a `string` feature. - `hypothesis`: a `string` feature. 
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### mnli_mismatched - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### mrpc [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qqp [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### rte [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sst2 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### stsb [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### wnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Splits #### ax | |test| |---|---:| |ax |1104| #### cola | |train|validation|test| |----|----:|---------:|---:| |cola| 8551| 1043|1063| #### mnli | |train |validation_matched|validation_mismatched|test_matched|test_mismatched| |----|-----:|-----------------:|--------------------:|-----------:|--------------:| |mnli|392702| 9815| 9832| 9796| 9847| #### mnli_matched | |validation|test| |------------|---------:|---:| |mnli_matched| 9815|9796| #### mnli_mismatched | |validation|test| |---------------|---------:|---:| |mnli_mismatched| 9832|9847| #### mrpc [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qqp [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### rte [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sst2 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### stsb [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### wnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{warstadt2018neural, title={Neural Network Acceptability Judgments}, author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R}, journal={arXiv preprint arXiv:1805.12471}, year={2018} } @inproceedings{wang2019glue, title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding}, author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.}, note={In the Proceedings of ICLR.}, year={2019} } Note that each GLUE dataset has its own citation. Please see the source to see the correct citation for each contained dataset. 
``` ### Contributions Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
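As a quick reference, the label-id conventions documented in the Data Fields section can be sketched as a small lookup helper. This is a minimal illustration, not part of the released dataset tooling; it only covers the configs whose fields are spelled out above, and follows the convention that `-1` marks hidden test-set labels:

```python
# Label-id conventions as documented in the Data Fields section of this card.
# A value of -1 marks hidden test-set labels (as in the ax/mnli test examples).
GLUE_LABELS = {
    "ax": ["entailment", "neutral", "contradiction"],
    "cola": ["unacceptable", "acceptable"],
    "mnli": ["entailment", "neutral", "contradiction"],
    "mnli_matched": ["entailment", "neutral", "contradiction"],
    "mnli_mismatched": ["entailment", "neutral", "contradiction"],
}

def label_name(config, label_id):
    """Map an integer label id to its string name; -1 means no label released."""
    if label_id == -1:
        return None
    return GLUE_LABELS[config][label_id]

print(label_name("mnli", 1))  # neutral
```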
abhinavk/openpi_v2
2022-11-07T02:23:34.000Z
[ "task_categories:question-answering", "task_categories:text-classification", "task_ids:entity-linking-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "r...
abhinavk
TEMPORARY DESCRIPTION
@inproceedings{ title={{OPENPI V2}: } author={} note={} year={2022} }
null
1
3
--- annotations_creators: - expert-generated language: - en language_creators: [] license: - cc-by-4.0 multilinguality: - monolingual pretty_name: openpi_v2 size_categories: - 10K<n<100K source_datasets: [] tags: [] task_categories: - question-answering - text-classification task_ids: - entity-linking-classification - natural-language-inference --- # Dataset Card for openpi_v2 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Open PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary. 
### Supported Tasks and Leaderboards - `Task 1`: Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change) - `Task 3`: Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations) - `Task 4`: Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes) - `Task 7`: Given image url, identify the visual attributes of entity and non-visual attributes of entity that change ### Languages English ## Dataset Structure ### Data Instances A typical instance in the dataset: ``` { "goal": "goal1_text", "steps": [ "step1_text", "step2_text", ... ], "topics": "topic1_annotation", "image_urls": [ "step1_url_text", "step2_url_text", ... ], "states": [ { "answers_openpiv1_metadata": { "entity": "entity1 | entity2 | ...", "attribute": "attribute1 | attribute2 | ...", "answers": [ "before: step1_entity1_before | step1_entity2_before, after: step1_entity1_after | step1_entity2_after", ... ], "modality": [ "step1_entity1_modality_id | step1_entity2_modality_id", ... ] }, "entity": "entity1 | entity2 | ...", "attribute": "attribute1 | attribute2 | ...", "answers": [ "before: step1_entity1_before_merged | step1_entity2_before_merged, after: step1_entity1_after_merged | step1_entity2_after_merged", ... ] } ] } ``` ### Data Fields The following is an excerpt from the dataset README: Within "goal", "steps", "topics", and "image_urls", the fields should be self-explanatory. Listed below is an explanation about those within "states": #### Fields specific to questions: ### Data Splits Train, Valid, Dev ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
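The `answers` strings in the sample instance above pack multiple entities into one string (`"before: e1_before | e2_before, after: e1_after | e2_after"`). A minimal parser for that layout might look as follows; this is a sketch based only on the separators shown in the sample, so real records should be checked for embedded commas or pipes first:

```python
def parse_state_answer(answer):
    """Split a 'before: a | b, after: c | d' string into (before, after) pairs.

    Assumes the pipe-and-comma layout shown in the sample instance; this is
    an illustrative helper, not part of the official dataset loader.
    """
    before_part, after_part = answer.split(", after:", 1)
    before_vals = [v.strip() for v in before_part.replace("before:", "", 1).split("|")]
    after_vals = [v.strip() for v in after_part.split("|")]
    return list(zip(before_vals, after_vals))

print(parse_state_answer("before: dry | cold, after: wet | warm"))
# -> [('dry', 'wet'), ('cold', 'warm')]
```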
NbAiLab/mnli-norwegian
2022-11-23T09:45:12.000Z
[ "task_categories:sentence-similarity", "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-classification", "annotations_creators:expert-generated", "language_creators:machine-generated", "language_creators:expert-generated", "multilinguality:mul...
NbAiLab
null
null
null
1
3
--- annotations_creators: - expert-generated language: - 'no' - 'nob' - 'en' language_creators: - machine-generated - expert-generated license: - apache-2.0 multilinguality: - multilingual pretty_name: MNLI Norwegian size_categories: - 100K<n<1M source_datasets: [] tags: - norwegian - simcse - mnli - nli - sentence task_categories: - sentence-similarity - text-classification task_ids: - natural-language-inference - semantic-similarity-classification --- # MNLI Norwegian The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a [HuggingFace version](https://huggingface.co/datasets/multi_nli) of the dataset available. This dataset is machine translated using Google Translate. From this translation, different versions of the dataset were created. Included in the repo is a version that is specifically suited for training sentence-BERT models. This version includes the triplet: base-entailment-contradiction. It also includes a version that mixes English and Norwegian, as well as both CSV and JSON versions. The scripts for generating the datasets are included in this repo. Please note that there is no test dataset for MNLI, since it is closed. The authors of MNLI inform us that they selected 7500 new contexts in the same way as the original MNLI contexts. That means the English part of the XNLI test sets is highly comparable. For each genre, the text is generally in-domain with the original MNLI test set (it is from the same source and selected in the same way). In most cases the XNLI test set can therefore be used. 
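A base-entailment-contradiction triplet CSV of the kind described above can be read with Python's standard `csv` module. The sketch below uses an in-memory sample, and the header names (`sent0`, `sent1`, `hard_neg`) are hypothetical — inspect the actual `*_for_simcse.csv` file for the real column names before relying on them:

```python
import csv
import io

# In-memory stand-in for one of the *_for_simcse.csv files.
# The column names below are assumptions, not the verified header.
sample = io.StringIO(
    "sent0,sent1,hard_neg\n"
    "Katten sover på sofaen.,Et dyr hviler.,Hunden løper i parken.\n"
)
triplets = [
    (row["sent0"], row["sent1"], row["hard_neg"])
    for row in csv.DictReader(sample)
]
print(triplets[0])
```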
### The following datasets are available in the repo: * mnli_no_en_for_simcse.csv * mnli_no_en_small_for_simcse.csv * mnli_no_for_simcse.csv * multinli_1.0_dev_matched_no_mt.jsonl * multinli_1.0_dev_mismatched_no_mt.jsonl * multinli_1.0_train_no_mt.jsonl * nli_for_simcse.csv * xnli_dev_no_mt.jsonl * xnli_test_no_mt.jsonl ### Licensing Information The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported License. ### Citation Information The datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work related to this dataset is compiling the English version. We therefore suggest that you also cite the original work: ``` @InProceedings{N18-1101, author = "Williams, Adina and Nangia, Nikita and Bowman, Samuel", title = "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference", booktitle = "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", year = "2018", publisher = "Association for Computational Linguistics", pages = "1112--1122", location = "New Orleans, Louisiana", url = "http://aclweb.org/anthology/N18-1101" } ```
pseeej/animal-crossing-data
2022-11-02T03:31:55.000Z
[ "region:us" ]
pseeej
null
null
null
2
3
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 7209776.0 num_examples: 389 download_size: 7181848 dataset_size: 7209776.0 --- # Dataset Card for "animal-crossing-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
qanastek/Biosses-BLUE
2022-11-05T23:23:58.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:gpl-3.0", "region...
qanastek
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows: very strong: 0.80–1.00 strong: 0.60–0.79 moderate: 0.40–0.59 weak: 0.20–0.39 very weak: 0.00–0.19
@article{10.1093/bioinformatics/btx238, author = {Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan}, title = "{BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}", journal = {Bioinformatics}, volume = {33}, number = {14}, pages = {i49-i58}, year = {2017}, month = {07}, abstract = "{The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. 
A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6\\% in terms of the Pearson correlation metric.A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.}", issn = {1367-4803}, doi = {10.1093/bioinformatics/btx238}, url = {https://doi.org/10.1093/bioinformatics/btx238}, eprint = {https://academic.oup.com/bioinformatics/article-pdf/33/14/i49/25157316/btx238.pdf}, }
null
1
3
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - gpl-3.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - text-scoring - semantic-similarity-scoring paperswithcode_id: biosses pretty_name: BIOSSES dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float32 splits: - name: train num_bytes: 32783 num_examples: 100 download_size: 36324 dataset_size: 32783 --- # Dataset Card for BIOSSES ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html - **Repository:** https://github.com/gizemsogancioglu/biosses - **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954) - **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan 
Özgür](gizemsogancioglu@gmail.com) ### Dataset Summary BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows: - very strong: 0.80–1.00 - strong: 0.60–0.79 - moderate: 0.40–0.59 - weak: 0.20–0.39 - very weak: 0.00–0.19 ### Data Splits (From BLUE Benchmark) |name|Train|Dev|Test| |:--:|:--:|:--:|:--:| |biosses|64|16|20| ### Supported Tasks and Leaderboards Biomedical Semantic Similarity Scoring. ### Languages English. ## Dataset Structure ### Data Instances For each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators). 
```json { "id": "0", "sentence1": "Centrosomes increase both in size and in microtubule-nucleating capacity just before mitotic entry.", "sentence2": "Functional studies showed that, when introduced into cell lines, miR-146a was found to promote cell proliferation in cervical cancer cells, which suggests that miR-146a works as an oncogenic miRNA in these cancers.", "score": 0.0 } ``` ### Data Fields - `sentence 1`: string - `sentence 2`: string - `score`: float ranging from 0 (no relation) to 4 (equivalent) ## Dataset Creation ### Curation Rationale ### Source Data The [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/). ### Annotations #### Annotation process The sentence pairs were evaluated by five different human experts who judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees. The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is a strong association among the scores of the annotators. The lowest correlation is 0.902, which can be considered an upper bound for an algorithmic measure evaluated on this dataset. 
| |Correlation r | |----------:|--------------:| |Annotator A| 0.952| |Annotator B| 0.958| |Annotator C| 0.917| |Annotator D| 0.902| |Annotator E| 0.941| ## Additional Information ### Dataset Curators - Gizem Soğancıoğlu, gizemsogancioglu@gmail.com - Hakime Öztürk, hakime.ozturk@boun.edu.tr - Arzucan Özgür, gizemsogancioglu@gmail.com Bogazici University, Istanbul, Turkey ### Licensing Information BIOSSES is made available under the terms of [The GNU General Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html). ### Citation Information ```bibtex @article{10.1093/bioinformatics/btx238, author = {Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan}, title = "{BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}", journal = {Bioinformatics}, volume = {33}, number = {14}, pages = {i49-i58}, year = {2017}, month = {07}, abstract = "{The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. 
Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6\\% in terms of the Pearson correlation metric.A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.}", issn = {1367-4803}, doi = {10.1093/bioinformatics/btx238}, url = {https://doi.org/10.1093/bioinformatics/btx238}, eprint = {https://academic.oup.com/bioinformatics/article-pdf/33/14/i49/25157316/btx238.pdf}, } ``` ### Contributions Thanks to [@qanastek](https://github.com/qanastek) for adding this dataset.
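The evaluation metric described above — Pearson correlation between the gold-standard similarity scores and a model's estimated scores — can be computed from first principles in a few lines. This is a generic sketch of the metric, not the original BIOSSES evaluation script, with toy scores on the 0–4 scale used by the dataset:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy gold/predicted similarity scores on the 0-4 BIOSSES scale.
gold = [0.0, 1.2, 2.4, 3.1, 4.0]
pred = [0.3, 1.0, 2.6, 3.0, 3.8]
print(pearson(gold, pred))
```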
justinphan3110/vi_pubmed
2022-11-06T21:02:17.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:text-scoring", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", ...
justinphan3110
null
null
null
1
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - other multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - text-generation - fill-mask - text-classification task_ids: - language-modeling - masked-language-modeling - text-scoring - topic-classification paperswithcode_id: pubmed pretty_name: ViPubMed split: - en - vi --- # Dataset Card for PubMed ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** : [https://www.nlm.nih.gov/databases/download/pubmed_medline.html]() - **Documentation:** : [https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html]() - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. 
Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - English ## Dataset Structure Bear in mind that the data comes from XML files with various tags that are hard to reflect in a concise JSON format. Tags and lists do not translate naturally between XML and JSON, so this library makes some choices about how to represent the data. "Journal" info was dropped altogether, as it would have led to many fields being empty all the time. The hierarchy is also a bit unnatural, but the choice was made to stay as close as possible to the original data for future releases that may change the schema on NLM's side. Author has been kept and contains either "ForeName", "LastName", "Initials", or "CollectiveName". (All the fields will be present all the time, but only some will be filled.) ### Data Instances ```json { "MedlineCitation": { "PMID": 0, "DateCompleted": {"Year": 0, "Month": 0, "Day": 0}, "NumberOfReferences": 0, "DateRevised": {"Year": 0, "Month": 0, "Day": 0}, "Article": { "Abstract": {"AbstractText": "Some abstract (can be missing)"}, "ArticleTitle": "Article title", "AuthorList": {"Author": [ {"FirstName": "John", "ForeName": "Doe", "Initials": "JD", "CollectiveName": ""}, {"CollectiveName": "The Manhattan Project", "FirstName": "", "ForeName": "", "Initials": ""} ]}, "Language": "en", "GrantList": {"Grant": []}, "PublicationTypeList": {"PublicationType": []} }, "MedlineJournalInfo": {"Country": "France"}, "ChemicalList": {"Chemical": [{"RegistryNumber": "XX", "NameOfSubstance": "Methanol"}]}, "CitationSubset": "AIM", "MeshHeadingList": {"MeshHeading": []} }, "PubmedData": { "ArticleIdList": {"ArticleId": "10.1002/bjs.1800650203"}, "PublicationStatus": "ppublish", "History": {"PubMedPubDate": [{"Year": 0, "Month": 0, "Day": 0}]}, "ReferenceList": [{"Citation": "Somejournal", "CitationId": 1}] } } ``` ### Data Fields Main
fields of interest are: - "MedlineCitation" > "Article" > "AuthorList" > "Author" - "MedlineCitation" > "Article" > "Abstract" > "AbstractText" - "MedlineCitation" > "Article" > "ArticleTitle" - "MedlineCitation" > "ChemicalList" > "Chemical" - "MedlineCitation" > "NumberOfReferences" ### Data Splits There are no splits in this dataset. It is given as is. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization <https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html> #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information <https://www.nlm.nih.gov/databases/download/terms_and_conditions.html> ### Citation Information [Courtesy of the U.S. National Library of Medicine](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html). ### Contributions Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
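A record with the nested structure shown in Data Instances can be unpacked with plain Python; a minimal sketch over a toy record copied from the example instance above (loading the real dataset, e.g. with the `datasets` library, is omitted here):

```python
# Toy record mirroring the "Data Instances" example above; in practice a
# record like this would come from loading the dataset itself.
record = {
    "MedlineCitation": {
        "PMID": 12345,
        "Article": {
            "Abstract": {"AbstractText": "Some abstract (can be missing)"},
            "ArticleTitle": "Article title",
            "AuthorList": {"Author": [
                {"FirstName": "John", "ForeName": "Doe", "Initials": "JD", "CollectiveName": ""},
                {"CollectiveName": "The Manhattan Project", "FirstName": "", "ForeName": "", "Initials": ""},
            ]},
        },
    },
}

article = record["MedlineCitation"]["Article"]
title = article["ArticleTitle"]
# The abstract may be missing, so fall back to an empty string.
abstract = article.get("Abstract", {}).get("AbstractText", "")
# An author entry is either a person or a collective name; all keys are
# always present, so a falsy "CollectiveName" means a person entry.
authors = [
    a["CollectiveName"] or f'{a["FirstName"]} {a["ForeName"]}'.strip()
    for a in article["AuthorList"]["Author"]
]
```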
Ngadou/Spam_SMS
2022-11-10T09:06:25.000Z
[ "license:cc", "doi:10.57967/hf/0749", "region:us" ]
Ngadou
null
null
null
1
3
--- license: cc --- ## Description The Spam SMS dataset is a set of tagged SMS messages collected for SMS spam research. It contains 5,574 English SMS messages, each tagged as either ham (legitimate) or spam. Source: [uciml/sms-spam-collection-dataset](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset)
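A first sanity check on a ham/spam corpus like this is the class balance; a minimal sketch over toy rows (the real file's column layout should be checked against the Kaggle source linked above):

```python
from collections import Counter

# Toy stand-in rows in a (label, text) shape; real rows would come from
# the downloaded CSV, whose exact schema is not shown in this card.
rows = [
    ("ham", "Ok lar... Joking wif u oni..."),
    ("spam", "WINNER!! You have won a prize, call now"),
    ("ham", "I'll be home by 6"),
]

counts = Counter(label for label, _ in rows)
spam_ratio = counts["spam"] / len(rows)
```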
ClemenKok/digimon-blip-captions
2022-11-13T02:08:54.000Z
[ "annotations_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "digimon", "region:us" ]
ClemenKok
null
null
null
0
3
--- annotations_creators: - machine-generated language: - en license: - cc-by-nc-4.0 multilinguality: - monolingual pretty_name: '1,071 BLIP captioned images of Digimon. ' size_categories: - 1K<n<10K source_datasets: - original tags: - digimon task_categories: [] task_ids: [] --- # Dataset Card for Digimon BLIP captions This project was inspired by the [labelled Pokemon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). The captions were generated using the BLIP Model found in the [LAVIS Library for Language-Vision Intelligence](https://github.com/salesforce/LAVIS). Like the Pokemon equivalent, each row in the dataset contains the `image` and `text` keys. `Image` is a varying size pixel jpeg, and `text` is the corresponding text caption. ## Citation If you use this dataset, please cite it as: ``` @misc{clemen2022digimon, author = {Kok, Clemen}, title = {Digimon BLIP captions}, year={2022}, howpublished= {\url{https://huggingface.co/datasets/ClemenKok/digimon-lavis-captions/}} } ```
statworx/swiss-dialects
2022-11-21T16:18:32.000Z
[ "task_categories:text-generation", "task_categories:text-classification", "task_ids:language-modeling", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:ch", "license:cc-by-nc-4.0", "dialect", "region:us" ]
statworx
null
null
null
1
3
--- annotations_creators: [] language: - ch language_creators: - found license: - cc-by-nc-4.0 multilinguality: - monolingual pretty_name: ArchiMob Corpus size_categories: - 10K<n<100K source_datasets: [] tags: - dialect task_categories: - text-generation - text-classification task_ids: - language-modeling --- # Dataset Card for ArchiMob Corpus ## Dataset Description - **Homepage:** https://wortschatz.uni-leipzig.de/en/download/Swiss%20German - **Repository:** https://huggingface.co/datasets/statworx/leipzip-swiss ### Dataset Summary The ArchiMob corpus represents German linguistic varieties spoken within the territory of Switzerland. This corpus is the first electronic resource containing long samples of transcribed text in Swiss German, intended for studying the spatial distribution of morphosyntactic features and for natural language processing. ### Languages Swiss German ## Dataset Structure ### Data Instances ``` { 'sentence': Sentence in Swiss German, 'label': Dialect as category } ``` ### Data Fields `sentence`: Text as string. `label`: Label as string. ### Data Splits [More Information Needed] ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization https://www.spur.uzh.ch/en/departments/research/textgroup/ArchiMob.html ## Additional Information ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License ### Citation Information Scherrer, Y., T. Samardžić, E. Glaser (2019). "Digitising Swiss German -- How to process and study a polycentric spoken language". Language Resources and Evaluation. (First online) Scherrer, Y., T. Samardžić, E. Glaser (2019). "ArchiMob: Ein multidialektales Korpus schweizerdeutscher Spontansprache". Linguistik Online, 98(5), 425-454. https://doi.org/10.13092/lo.98.5947
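The `{sentence, label}` instances lend themselves to simple per-dialect grouping; a minimal sketch with invented placeholder sentences and labels (not real corpus lines):

```python
from collections import defaultdict

# Instances follow the {'sentence', 'label'} schema shown above;
# the sentences and dialect labels here are invented placeholders.
instances = [
    {"sentence": "Grüezi mitenand", "label": "ZH"},
    {"sentence": "Hoi zäme", "label": "BE"},
    {"sentence": "Wie gaht's dir?", "label": "ZH"},
]

# Group sentences by their dialect label.
by_dialect = defaultdict(list)
for inst in instances:
    by_dialect[inst["label"]].append(inst["sentence"])
```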
bigbio/biology_how_why_corpus
2022-12-22T15:43:41.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid Answer Reranking” (ACL 2014).
@inproceedings{jansen-etal-2014-discourse, title = "Discourse Complements Lexical Semantics for Non-factoid Answer Reranking", author = "Jansen, Peter and Surdeanu, Mihai and Clark, Peter", booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jun, year = "2014", address = "Baltimore, Maryland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P14-1092", doi = "10.3115/v1/P14-1092", pages = "977--986", }
null
2
3
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: BiologyHowWhyCorpus homepage: https://allenai.org/data/biology-how-why-corpus bigbio_pubmed: False bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for BiologyHowWhyCorpus ## Dataset Description - **Homepage:** https://allenai.org/data/biology-how-why-corpus - **Pubmed:** False - **Public:** True - **Tasks:** QA This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid Answer Reranking” (ACL 2014). ## Citation Information ``` @inproceedings{jansen-etal-2014-discourse, title = "Discourse Complements Lexical Semantics for Non-factoid Answer Reranking", author = "Jansen, Peter and Surdeanu, Mihai and Clark, Peter", booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jun, year = "2014", address = "Baltimore, Maryland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P14-1092", doi = "10.3115/v1/P14-1092", pages = "977--986", } ```
bigbio/medhop
2022-12-22T15:45:26.000Z
[ "multilinguality:monolingual", "language:en", "license:cc-by-sa-3.0", "region:us" ]
bigbio
With the same format as WikiHop, this dataset is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins.
@article{welbl-etal-2018-constructing, title = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents}, author = {Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian}, journal = {Transactions of the Association for Computational Linguistics}, volume = {6}, year = {2018}, address = {Cambridge, MA}, publisher = {MIT Press}, url = {https://aclanthology.org/Q18-1021}, doi = {10.1162/tacl_a_00021}, pages = {287--302}, abstract = { Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently no resources exist to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence -- effectively performing multihop, alias multi-step, inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information; and providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 54.5 % on an annotated test set, compared to human performance at 85.0 %, leaving ample room for improvement. } }
null
0
3
--- language: - en bigbio_language: - English license: cc-by-sa-3.0 multilinguality: monolingual bigbio_license_shortname: CC_BY_SA_3p0 pretty_name: MedHop homepage: http://qangaroo.cs.ucl.ac.uk/ bigbio_pubmed: True bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for MedHop ## Dataset Description - **Homepage:** http://qangaroo.cs.ucl.ac.uk/ - **Pubmed:** True - **Public:** True - **Tasks:** QA With the same format as WikiHop, this dataset is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins. ## Citation Information ``` @article{welbl-etal-2018-constructing, title = Constructing Datasets for Multi-hop Reading Comprehension Across Documents, author = Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian, journal = Transactions of the Association for Computational Linguistics, volume = 6, year = 2018, address = Cambridge, MA, publisher = MIT Press, url = https://aclanthology.org/Q18-1021, doi = 10.1162/tacl_a_00021, pages = 287--302, abstract = { Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently no resources exist to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence -- effectively performing multihop, alias multi-step, inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. 
Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information; and providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 54.5 % on an annotated test set, compared to human performance at 85.0 %, leaving ample room for improvement. } } ```
bigbio/n2c2_2014_deid
2022-12-22T15:45:57.000Z
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
bigbio
The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks. The first of these was the de-identification track, focused on identifying protected health information (PHI) in longitudinal clinical narratives. TRACK 1: NER PHI. HIPAA requires that patient medical records have all identifying information removed in order to protect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the patient or of relatives, employers, or household members of the patient that must be removed in order for a file to be considered de-identified. In order to de-identify the records, each file has PHI marked up. All PHI has an XML tag indicating its category and type, where applicable. For the purposes of this task, the 18 HIPAA categories have been grouped into 6 main categories and 25 subcategories.
@article{stubbs2015automated, title = {Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1}, journal = {Journal of Biomedical Informatics}, volume = {58}, pages = {S11-S19}, year = {2015}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2015.06.007}, url = {https://www.sciencedirect.com/science/article/pii/S1532046415001173}, author = {Amber Stubbs and Christopher Kotfila and Özlem Uzuner} }
null
1
3
--- language: - en bigbio_language: - English license: other multilinguality: monolingual bigbio_license_shortname: DUA pretty_name: n2c2 2014 De-identification homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/ bigbio_pubmed: False bigbio_public: False bigbio_tasks: - NAMED_ENTITY_RECOGNITION --- # Dataset Card for n2c2 2014 De-identification ## Dataset Description - **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/ - **Pubmed:** False - **Public:** False - **Tasks:** NER The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks. The first of these was the de-identification track focused on identifying protected health information (PHI) in longitudinal clinical narratives. TRACK 1: NER PHI HIPAA requires that patient medical records have all identifying information removed in order to protect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the patient or of relatives, employers, or household members of the patient that must be removed in order for a file to be considered de-identified. In order to de-identify the records, each file has PHI marked up. All PHI has an XML tag indicating its category and type, where applicable. For the purposes of this task, the 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories ## Citation Information ``` @article{stubbs2015automated, title = {Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1}, journal = {Journal of Biomedical Informatics}, volume = {58}, pages = {S11-S19}, year = {2015}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2015.06.007}, url = {https://www.sciencedirect.com/science/article/pii/S1532046415001173}, author = {Amber Stubbs and Christopher Kotfila and Özlem Uzuner} } ```
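The card above says each PHI span carries an XML tag indicating its category and type; a minimal sketch of reading such markup with the standard library (the tag and attribute names below are assumptions for illustration, not the task's exact schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical PHI markup in the spirit of the description above;
# the real n2c2 files may use different tag and attribute names.
doc = ('<TEXT>Seen by <PHI TYPE="DOCTOR">Dr. Smith</PHI> '
       'on <PHI TYPE="DATE">2012-03-01</PHI>.</TEXT>')

root = ET.fromstring(doc)
# Collect (category/type, surface text) pairs for every PHI span.
phi_spans = [(el.get("TYPE"), el.text) for el in root.iter("PHI")]
```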
bigbio/pmc_patients
2022-12-22T15:46:17.000Z
[ "multilinguality:monolingual", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2202.13876", "region:us" ]
bigbio
This dataset is used for calculating the similarity between two patient descriptions.
@misc{zhao2022pmcpatients, title={PMC-Patients: A Large-scale Dataset of Patient Notes and Relations Extracted from Case Reports in PubMed Central}, author={Zhengyun Zhao and Qiao Jin and Sheng Yu}, year={2022}, eprint={2202.13876}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
1
3
--- language: - en bigbio_language: - English license: cc-by-nc-sa-4.0 multilinguality: monolingual bigbio_license_shortname: CC_BY_NC_SA_4p0 pretty_name: PMC-Patients homepage: https://github.com/zhao-zy15/PMC-Patients bigbio_pubmed: True bigbio_public: True bigbio_tasks: - SEMANTIC_SIMILARITY --- # Dataset Card for PMC-Patients ## Dataset Description - **Homepage:** https://github.com/zhao-zy15/PMC-Patients - **Pubmed:** True - **Public:** True - **Tasks:** STS This dataset is used for calculating the similarity between two patient descriptions. ## Citation Information ``` @misc{zhao2022pmcpatients, title={PMC-Patients: A Large-scale Dataset of Patient Notes and Relations Extracted from Case Reports in PubMed Central}, author={Zhengyun Zhao and Qiao Jin and Sheng Yu}, year={2022}, eprint={2202.13876}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
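As a trivial baseline for the similarity task described above, two patient descriptions can be compared with bag-of-words cosine similarity; this is an illustrative sketch only, not the dataset's own scoring method:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (toy baseline)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented snippets standing in for two patient descriptions.
sim = cosine("fever and cough", "cough and chest pain")
```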
bigbio/psytar
2022-12-22T15:46:20.000Z
[ "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
bigbio
The "Psychiatric Treatment Adverse Reactions" (PsyTAR) dataset contains 891 drug reviews posted by patients on "askapatient.com" about the effectiveness of, and adverse drug events associated with, Zoloft, Lexapro, Cymbalta, and Effexor XR. This dataset can be used for (multi-label) sentence classification of Adverse Drug Reaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug Indications (DIs), Drug Effectiveness (EF), Drug Ineffectiveness (INF) and Others, as well as for recognition of 5 different types of named entities (in the categories ADRs, WDs, SSIs and DIs).
@article{Zolnoori2019, author = {Maryam Zolnoori and Kin Wah Fung and Timothy B. Patrick and Paul Fontelo and Hadi Kharrazi and Anthony Faiola and Yi Shuan Shirley Wu and Christina E. Eldredge and Jake Luo and Mike Conway and Jiaxi Zhu and Soo Kyung Park and Kelly Xu and Hamideh Moayyed and Somaieh Goudarzvand}, title = {A systematic approach for developing a corpus of patient reported adverse drug events: A case study for {SSRI} and {SNRI} medications}, journal = {Journal of Biomedical Informatics}, volume = {90}, year = {2019}, url = {https://doi.org/10.1016/j.jbi.2018.12.005}, doi = {10.1016/j.jbi.2018.12.005}, }
null
1
3
--- language: - en bigbio_language: - English license: cc-by-4.0 multilinguality: monolingual bigbio_license_shortname: CC_BY_4p0 pretty_name: PsyTAR homepage: https://www.askapatient.com/research/pharmacovigilance/corpus-ades-psychiatric-medications.asp bigbio_pubmed: False bigbio_public: False bigbio_tasks: - NAMED_ENTITY_RECOGNITION - TEXT_CLASSIFICATION --- # Dataset Card for PsyTAR ## Dataset Description - **Homepage:** https://www.askapatient.com/research/pharmacovigilance/corpus-ades-psychiatric-medications.asp - **Pubmed:** False - **Public:** False - **Tasks:** NER,TXTCLASS The "Psychiatric Treatment Adverse Reactions" (PsyTAR) dataset contains 891 drugs reviews posted by patients on "askapatient.com", about the effectiveness and adverse drug events associated with Zoloft, Lexapro, Cymbalta, and Effexor XR. This dataset can be used for (multi-label) sentence classification of Adverse Drug Reaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug Indications (DIs), Drug Effectiveness (EF), Drug Infectiveness (INF) and Others, as well as for recognition of 5 different types of named entity (in the categories ADRs, WDs, SSIs and DIs) ## Citation Information ``` @article{Zolnoori2019, author = {Maryam Zolnoori and Kin Wah Fung and Timothy B. Patrick and Paul Fontelo and Hadi Kharrazi and Anthony Faiola and Yi Shuan Shirley Wu and Christina E. Eldredge and Jake Luo and Mike Conway and Jiaxi Zhu and Soo Kyung Park and Kelly Xu and Hamideh Moayyed and Somaieh Goudarzvand}, title = {A systematic approach for developing a corpus of patient reported adverse drug events: A case study for {SSRI} and {SNRI} medications}, journal = {Journal of Biomedical Informatics}, volume = {90}, year = {2019}, url = {https://doi.org/10.1016/j.jbi.2018.12.005}, doi = {10.1016/j.jbi.2018.12.005}, } ```
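The multi-label sentence classification task described above can be represented with a binary vector over the class inventory; a minimal sketch (this 0/1 encoding is illustrative, not the dataset's own storage format):

```python
# Class inventory as listed in the card; "Other" covers the residual class.
CLASSES = ["ADR", "WD", "SSI", "DI", "EF", "INF", "Other"]

def encode(labels: set) -> list:
    """Encode a set of class names as a 0/1 vector over CLASSES."""
    return [1 if c in labels else 0 for c in CLASSES]

# A sentence tagged with both an adverse drug reaction and a withdrawal symptom.
vec = encode({"ADR", "WD"})
```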
bigbio/twadrl
2022-12-22T15:47:15.000Z
[ "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
bigbio
The TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4).
@inproceedings{limsopatham-collier-2016-normalising, title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation", author = "Limsopatham, Nut and Collier, Nigel", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2016", address = "Berlin, Germany", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P16-1096", doi = "10.18653/v1/P16-1096", pages = "1014--1023", }
null
0
3
--- language: - en bigbio_language: - English license: cc-by-4.0 multilinguality: monolingual bigbio_license_shortname: CC_BY_4p0 pretty_name: TwADR-L homepage: https://zenodo.org/record/55013 bigbio_pubmed: False bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - NAMED_ENTITY_DISAMBIGUATION --- # Dataset Card for TwADR-L ## Dataset Description - **Homepage:** https://zenodo.org/record/55013 - **Pubmed:** False - **Public:** True - **Tasks:** NER,NED The TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4). ## Citation Information ``` @inproceedings{limsopatham-collier-2016-normalising, title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation", author = "Limsopatham, Nut and Collier, Nigel", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2016", address = "Berlin, Germany", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P16-1096", doi = "10.18653/v1/P16-1096", pages = "1014--1023", } ```
bigbio/verspoor_2013
2022-12-22T15:47:37.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
This dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases.
@article{verspoor2013annotating, title = {Annotating the biomedical literature for the human variome}, author = { Verspoor, Karin and Jimeno Yepes, Antonio and Cavedon, Lawrence and McIntosh, Tara and Herten-Crabb, Asha and Thomas, Zo{\"e} and Plazzer, John-Paul }, year = 2013, journal = {Database}, publisher = {Oxford Academic}, volume = 2013 }
null
0
3
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: Verspoor 2013 homepage: NA bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - RELATION_EXTRACTION --- # Dataset Card for Verspoor 2013 ## Dataset Description - **Homepage:** NA - **Pubmed:** True - **Public:** True - **Tasks:** NER,RE This dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases. ## Citation Information ``` @article{verspoor2013annotating, title = {Annotating the biomedical literature for the human variome}, author = { Verspoor, Karin and Jimeno Yepes, Antonio and Cavedon, Lawrence and McIntosh, Tara and Herten-Crabb, Asha and Thomas, Zo{\"e} and Plazzer, John-Paul }, year = 2013, journal = {Database}, publisher = {Oxford Academic}, volume = 2013 } ```
diltdicker/romance_books_32K
2022-11-15T07:37:05.000Z
[ "license:openrail", "region:us" ]
diltdicker
null
null
null
0
3
--- license: openrail --- Dataset Summary --- A collection of romance novels featuring `title`, `description`, and `genres`. Created with the intention of building a "Romance Novel Generator." Data Fields --- - `id` : unique integer identifying the book in the dataset - `pub_month` : string indicating the month the book was published, in the form `YEAR_MONTH` - `title` : title of the book - `author` : comma-separated (`last-name, first-name`) name of the book's author - `isbn13` : 13-digit ISBN of the book (note: not all books have an ISBN) - `description` : text description of the book. May contain quoted lines, a brief teaser of the plot, etc. - `genres` : dictionary of all genres, with 0 indicating the book is **NOT** tagged with that genre and 1 indicating that it is - additional fields are all the individual genres exploded, with their respective 1 and 0 values Languages --- - en
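The `genres` dictionary of 0/1 flags makes per-genre filtering straightforward; a minimal sketch over invented records that follow the schema above (titles and genre names are placeholders):

```python
# Invented example records following the card's schema: each book carries
# a `genres` dict of 0/1 flags.
books = [
    {"id": 1, "title": "A", "genres": {"historical": 1, "fantasy": 0}},
    {"id": 2, "title": "B", "genres": {"historical": 0, "fantasy": 1}},
    {"id": 3, "title": "C", "genres": {"historical": 1, "fantasy": 1}},
]

def tagged(books: list, genre: str) -> list:
    """Return titles of books whose `genres` flag for `genre` is 1."""
    return [b["title"] for b in books if b["genres"].get(genre) == 1]

historical = tagged(books, "historical")
```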
Norod78/RickAndMorty-HorizontalMirror-blip-captions
2022-11-15T14:38:40.000Z
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
Norod78
null
null
null
0
3
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 161499799.0 num_examples: 530 download_size: 161488169 dataset_size: 161499799.0 pretty_name: 'Rick and Morty, Horizontal Mirror, BLIP captions' size_categories: - n<1K tags: [] task_categories: - text-to-image license: cc-by-nc-sa-4.0 annotations_creators: - machine-generated language: - en language_creators: - other multilinguality: - monolingual --- # Dataset Card for "RickAndMorty-HorizontalMirror-blip-captions"
WINGNUS/ACL-OCL
2023-09-21T00:57:32.000Z
[ "task_categories:token-classification", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "research papers", "acl", "region:us" ]
WINGNUS
null
null
null
15
3
--- annotations_creators: [] language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: acl-ocl-corpus size_categories: - 10K<n<100K source_datasets: - original tags: - research papers - acl task_categories: - token-classification task_ids: [] train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction --- # Dataset Card for ACL Anthology Corpus [![License](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/) This repository provides full-text and metadata to the ACL anthology collection (80k articles/posters as of September 2022) also including .pdf files and grobid extractions of the pdfs. ## How is this different from what ACL anthology provides and what already exists? - We provide pdfs, full-text, references and other details extracted by grobid from the PDFs while [ACL Anthology](https://aclanthology.org/anthology+abstracts.bib.gz) only provides abstracts. - There exists a similar corpus call [ACL Anthology Network](https://clair.eecs.umich.edu/aan/about.php) but is now showing its age with just 23k papers from Dec 2016. ```python >>> import pandas as pd >>> df = pd.read_parquet('acl-publication-info.74k.parquet') >>> df acl_id abstract full_text corpus_paper_id pdf_hash ... number volume journal editor isbn 0 O02-2002 There is a need to measure word similarity whe... There is a need to measure word similarity whe... 18022704 0b09178ac8d17a92f16140365363d8df88c757d0 ... None None None None None 1 L02-1310 8220988 8d5e31610bc82c2abc86bc20ceba684c97e66024 ... None None None None None 2 R13-1042 Thread disentanglement is the task of separati... Thread disentanglement is the task of separati... 16703040 3eb736b17a5acb583b9a9bd99837427753632cdb ... 
None None None None None 3 W05-0819 In this paper, we describe a word alignment al... In this paper, we describe a word alignment al... 1215281 b20450f67116e59d1348fc472cfc09f96e348f55 ... None None None None None 4 L02-1309 18078432 011e943b64a78dadc3440674419821ee080f0de3 ... None None None None None ... ... ... ... ... ... ... ... ... ... ... ... 73280 P99-1002 This paper describes recent progress and the a... This paper describes recent progress and the a... 715160 ab17a01f142124744c6ae425f8a23011366ec3ee ... None None None None None 73281 P00-1009 We present an LFG-DOP parser which uses fragme... We present an LFG-DOP parser which uses fragme... 1356246 ad005b3fd0c867667118482227e31d9378229751 ... None None None None None 73282 P99-1056 The processes through which readers evoke ment... The processes through which readers evoke ment... 7277828 924cf7a4836ebfc20ee094c30e61b949be049fb6 ... None None None None None 73283 P99-1051 This paper examines the extent to which verb d... This paper examines the extent to which verb d... 1829043 6b1f6f28ee36de69e8afac39461ee1158cd4d49a ... None None None None None 73284 P00-1013 Spoken dialogue managers have benefited from u... Spoken dialogue managers have benefited from u... 10903652 483c818c09e39d9da47103fbf2da8aaa7acacf01 ... 
None None None None None [73285 rows x 21 columns] ``` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/shauryr/ACL-anthology-corpus - **Point of Contact:** shauryr@gmail.com ### Dataset Summary Dataframe with extracted metadata (table below with details) and full text of the collection for analysis: **size 489M** ### Languages en, zh and others ## Dataset Structure Dataframe ### Data Instances Each row is a paper from the ACL Anthology. ### Data Fields | **Column name** | **Description** | | :---------------: | :---------------------------: | | `acl_id` | unique ACL id | | `abstract` | abstract extracted by GROBID | | `full_text` | full text extracted by GROBID | | `corpus_paper_id` | Semantic Scholar ID | | `pdf_hash` | sha1 hash of the pdf | | `numcitedby` | number of citations from S2 | | `url` | link of publication | | `publisher` | - | | `address` | Address of conference | | `year` | - | | `month` | - | | `booktitle` | - | | `author` | list of authors | | `title` | title of paper | | `pages` | - | | `doi` | - | | `number` | - | | `volume` | - | | `journal` | - | | `editor` | - | | `isbn` | - | ## Dataset Creation The corpus has all the papers in the ACL Anthology as of September 2022. ### Source Data - [ACL Anthology](https://aclanthology.org) - [Semantic Scholar](https://www.semanticscholar.org) ## Additional Information ### Licensing Information The ACL OCL corpus is released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). 
By using this corpus, you are agreeing to its usage terms. ### Citation Information If you use this corpus in your research, please use the following BibTeX entry: ``` @Misc{acl-ocl, author = {Shaurya Rohatgi and Yanxia Qin and Benjamin Aw and Niranjana Unnithan and Min-Yen Kan}, title = {The ACL OCL Corpus: advancing Open science in Computational Linguistics}, howpublished = {arXiv}, year = {2022}, url = {https://huggingface.co/datasets/ACL-OCL/ACL-OCL-Corpus} } ``` ### Acknowledgements We thank Semantic Scholar for providing access to the citation-related data in this corpus. ### Contributions Thanks to [@shauryr](https://github.com/shauryr), [Yanxia Qin](https://github.com/qolina) and [Benjamin Aw](https://github.com/Benjamin-Aw-93) for adding this dataset.
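As a quick illustration of the kind of analysis the metadata columns support, the sketch below finds the most-cited paper per publication year. It runs on a toy frame reusing the column names from the table above; with the real corpus you would load the frame with `pd.read_parquet('acl-publication-info.74k.parquet')` instead.

```python
import pandas as pd

# Toy frame with a few of the corpus columns; the acl_id/year/numcitedby
# values here are illustrative, not taken from the real parquet file.
df = pd.DataFrame({
    "acl_id": ["P99-1056", "P99-1002", "P00-1009"],
    "year": ["1999", "1999", "2000"],
    "numcitedby": [300, 120, 45],
})

# Most-cited paper per publication year: sort by citations, then keep
# the first (i.e. highest-cited) row within each year group.
top_per_year = (
    df.sort_values("numcitedby", ascending=False)
      .groupby("year", as_index=False)
      .first()
)
print(top_per_year)
```

The same pattern extends to the full 73k-row frame, e.g. grouping by `publisher` or `booktitle` instead of `year`.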
severo/mnist
2022-11-03T16:46:54.000Z
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-nist", "language:en", "license:mit", "region:us" ]
severo
The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000 images per class. There are 60,000 training images and 10,000 test images.
@article{lecun2010mnist, title={MNIST handwritten digit database}, author={LeCun, Yann and Cortes, Corinna and Burges, CJ}, journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist}, volume={2}, year={2010} }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-nist task_categories: - image-classification task_ids: - multi-class-image-classification paperswithcode_id: mnist pretty_name: MNIST dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: '5' 6: '6' 7: '7' 8: '8' 9: '9' config_name: mnist splits: - name: test num_bytes: 2916440 num_examples: 10000 - name: train num_bytes: 17470848 num_examples: 60000 download_size: 11594722 dataset_size: 20387288 --- # Dataset Card for MNIST ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://yann.lecun.com/exdb/mnist/ - **Repository:** - **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The MNIST dataset consists of 
70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the test dataset, one class per digit, so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class. Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets). ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusive. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist). ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its label: ``` { 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>, 'label': 5 } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `label`: an integer between 0 and 9 representing the digit. ### Data Splits The data is split into training and test sets. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images. ## Dataset Creation ### Curation Rationale The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. 
Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students. The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set. ### Source Data #### Initial Data Collection and Normalization The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field. #### Who are the source language producers? Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable. ### Annotations #### Annotation process The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them. #### Who are the annotators? Same as the source data creators. 
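The centering step described above (computing the center of mass of the pixels and translating the 20x20 digit so that point sits at the center of a 28x28 field) can be sketched in NumPy. This is an illustrative reconstruction, not the original normalization code; the integer rounding and clipping details are assumptions.

```python
import numpy as np

def center_by_mass(digit20):
    """Place a 20x20 grey-level digit in a 28x28 field so that its
    center of mass lands at the center of the field (MNIST-style)."""
    assert digit20.shape == (20, 20) and digit20.sum() > 0
    ys, xs = np.mgrid[0:20, 0:20]
    total = digit20.sum()
    cy = (ys * digit20).sum() / total  # center of mass, row coordinate
    cx = (xs * digit20).sum() / total  # center of mass, column coordinate
    # Integer shift that moves the center of mass to (13.5, 13.5), the
    # center of a 28x28 field; clipped so the 20x20 crop stays inside.
    top = int(np.clip(round(13.5 - cy), 0, 8))
    left = int(np.clip(round(13.5 - cx), 0, 8))
    out = np.zeros((28, 28), dtype=digit20.dtype)
    out[top:top + 20, left:left + 20] = digit20
    return out

# A uniform square has its center of mass in the middle, so it ends up
# centered in the 28x28 field with a 4-pixel margin on every side.
centered = center_by_mass(np.ones((20, 20)))
```

Note that this sketch only translates by whole pixels; sub-pixel placement would require resampling.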
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Chris Burges, Corinna Cortes and Yann LeCun ### Licensing Information MIT Licence ### Citation Information ``` @article{lecun2010mnist, title={MNIST handwritten digit database}, author={LeCun, Yann and Cortes, Corinna and Burges, CJ}, journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist}, volume={2}, year={2010} } ``` ### Contributions Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset.
Nerfgun3/bad_prompt
2022-11-19T23:43:47.000Z
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
Nerfgun3
null
null
null
905
3
--- language: - en license: creativeml-openrail-m thumbnail: "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_showcase.jpg" tags: - stable-diffusion - text-to-image - image-to-image inference: false --- # Negative Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_showcase.jpg"/> ## Idea The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding. Side note: The embedding has proven to be very helpful for the generation of hands! :) ## Usage To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder. **Please put the embedding in the negative prompt to get the right results!** For special negative tags such as "malformed sword", you still need to add them yourself. The negative embedding is trained on a basic skeleton for the negative prompt, which should provide a high-resolution image as a result. ### Version 1: Issue: changes the style too much. To use it in the negative prompt: ```"bad_prompt"``` Personally, I would recommend using my embeddings with a strength of 0.8, even the negative embeddings, like ```"(bad_prompt:0.8)"``` ### Version 2: With this version I tried to reduce the number of vectors used, as well as the issue with the changing art style. The newer version is still a work in progress, but it's already way better than the first version. It's in the files section! I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. 
The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
mesolitica/translated-funpedia
2022-11-21T03:29:02.000Z
[ "region:us" ]
mesolitica
null
null
null
0
3
Entry not found