Columns in this dump (name, type, observed range):
- `id`: string, 2–115 chars
- `lastModified`: string, 24 chars
- `tags`: list
- `author`: string, 2–42 chars
- `description`: string, 0–68.7k chars
- `citation`: string, 0–10.7k chars
- `cardData`: null
- `likes`: int64, 0–3.55k
- `downloads`: int64, 0–10.1M
- `card`: string, 0–1.01M chars
declare-lab/cicero
2022-05-31T04:30:37.000Z
[ "license:mit", "arxiv:2203.13926", "arxiv:1710.03957", "arxiv:1902.00164", "arxiv:2004.04494", "region:us" ]
declare-lab
null
null
null
1
4
--- license: mit --- # Dataset Card for CICERO ## Description - **Homepage:** https://declare-lab.net/CICERO/ - **Repository:** https://github.com/declare-lab/CICERO - **Paper:** https://aclanthology.org/2022.acl-long.344/ - **arXiv:** https://arxiv.org/abs/2203.13926 ### Summary CICERO is a new dataset for dialogue reasoning with contextualized commonsense inference. It contains 53K inferences for five commonsense dimensions – cause, subsequent event, prerequisite, motivation, and emotional reaction – collected from 5.6K dialogues. We design several generative and multi-choice answer selection tasks to show the usefulness of CICERO in dialogue reasoning. ### Supported Tasks Inference generation (NLG) and multi-choice answer selection (QA). ### Languages The text in the dataset is in English. The associated BCP-47 code is en. ## Dataset Structure ### Data Fields - **ID:** Dialogue ID with dataset indicator. - **Dialogue:** Utterances of the dialogue in a list. - **Target:** Target utterance. - **Question:** One of the five questions (inference types). - **Choices:** Five possible answer choices in a list. One of the answers is human written. The other four answers are machine-generated and selected through the Adversarial Filtering (AF) algorithm. - **Human Written Answer:** Index of the human written answer in a single-element list. Indexing starts at 0. - **Correct Answers:** List of all correct answers indicated as plausible or speculatively correct by the human annotators. Includes the index of the human written answer. ### Data Instances An instance of the dataset is as follows: ``` { "ID": "daily-dialogue-1291", "Dialogue": [ "A: Hello , is there anything I can do for you ?", "B: Yes . I would like to check in .", "A: Have you made a reservation ?", "B: Yes . I am Belen .", "A: So your room number is 201 . Are you a member of our hotel ?", "B: No , what's the difference ?", "A: Well , we offer a 10 % charge for our members ." 
], "Target": "Well , we offer a 10 % charge for our members .", "Question": "What subsequent event happens or could happen following the target?", "Choices": [ "For future discounts at the hotel, the listener takes a credit card at the hotel.", "The listener is not enrolled in a hotel membership.", "For future discounts at the airport, the listener takes a membership at the airport.", "For future discounts at the hotel, the listener takes a membership at the hotel.", "The listener doesn't have a membership to the hotel." ], "Human Written Answer": [ 3 ], "Correct Answers": [ 3 ] } ``` ### Data Splits The dataset contains 31,418 instances for training, 10,888 instances for validation, and 10,898 instances for testing. ## Dataset Creation ### Curation Rationale The annotation process of CICERO is described in the paper. ### Source Data The dialogues in CICERO are collected from three datasets: [DailyDialog](https://arxiv.org/abs/1710.03957), [DREAM](https://arxiv.org/abs/1902.00164), and [MuTual](https://arxiv.org/abs/2004.04494). ## Citation Information ``` @inproceedings{ghosal2022cicero, title={CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues}, author={Ghosal, Deepanway and Shen, Siqi and Majumder, Navonil and Mihalcea, Rada and Poria, Soujanya}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={5010--5028}, year={2022} } ```
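The answer-index fields can be resolved to strings in plain Python. A minimal sketch using the example instance from the card (trimmed to the fields needed; not part of the official loader):

```python
# Resolve the answer indices of a CICERO instance to their answer strings.
# Fields copied from the example instance in the card.
instance = {
    "Question": "What subsequent event happens or could happen following the target?",
    "Choices": [
        "For future discounts at the hotel, the listener takes a credit card at the hotel.",
        "The listener is not enrolled in a hotel membership.",
        "For future discounts at the airport, the listener takes a membership at the airport.",
        "For future discounts at the hotel, the listener takes a membership at the hotel.",
        "The listener doesn't have a membership to the hotel.",
    ],
    "Human Written Answer": [3],  # single-element list, 0-indexed
    "Correct Answers": [3],       # may contain more than one index
}

human_written = instance["Choices"][instance["Human Written Answer"][0]]
correct = [instance["Choices"][i] for i in instance["Correct Answers"]]

# The human-written answer is always among the correct answers.
assert human_written in correct
print(human_written)
```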
blinoff/restaurants_reviews
2022-10-23T16:51:03.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:ru", "region:us" ]
blinoff
null
null
null
0
4
--- language: - ru multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - text-classification task_ids: - sentiment-classification --- ### Dataset Summary The dataset contains user reviews about restaurants. In total, it contains 47,139 reviews. Each review is tagged with the <em>general</em> sentiment and with sentiments on 3 aspects: <em>food, interior, service</em>. ### Data Fields Each sample contains the following fields: - **review_id**; - **general**; - **food**; - **interior**; - **service**; - **text**: review text. ### Python ```python import pandas as pd df = pd.read_json('restaurants_reviews.jsonl', lines=True) df.sample(5) ```
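Without pandas, the same JSONL layout can be read with the standard library. A sketch under the field names listed above (the sample line and sentiment values are illustrative, not a real review):

```python
import json

# One illustrative JSONL line in the shape the card describes
# (review_id, general, food, interior, service, text); not real data.
line = json.dumps({
    "review_id": 1,
    "general": "positive",
    "food": "positive",
    "interior": "neutral",
    "service": "positive",
    "text": "Great food, average decor.",
})

review = json.loads(line)
# Collect the three aspect-level sentiments separately from the general one.
aspects = {k: review[k] for k in ("food", "interior", "service")}
print(review["general"], aspects)
```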
lyakaap/laion2B-japanese-subset
2022-06-01T12:10:06.000Z
[ "region:us" ]
lyakaap
null
null
null
2
4
Entry not found
cmotions/Beatles_lyrics
2022-06-03T11:41:37.000Z
[ "language:en", "language modeling", "region:us" ]
cmotions
null
null
null
3
4
--- language: - en tags: - language modeling datasets: - full dataset - cleaned dataset --- ## Dataset overview This dataset contains all lyrics from songs produced by The Beatles, 180 in total. There are two splits available in the dictionary: - dataset_cleaned: contains all lyrics, including Intro, Outro, Chorus tagging. - dataset_full: contains only lyrics without any tagging. Each split contains the title, album, the lyrics for the song, the length of the lyrics field (tokens) and a number.
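A sketch of what one record in either split might look like. The key names and the tokenization are assumptions for illustration (the card names the fields but not the exact keys or how the token length is computed):

```python
# Hypothetical record for one song; keys mirror the card's field list
# (title, album, lyrics, length, number) but are not documented keys.
record = {
    "title": "Yesterday",
    "album": "Help!",
    "lyrics": "Yesterday all my troubles seemed so far away",
    "number": 0,
}
# One plausible definition of the length field: whitespace token count.
record["lyrics_length"] = len(record["lyrics"].split())
print(record["lyrics_length"])
```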
AlekseyKorshuk/thriller-books
2022-06-10T18:54:09.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
null
2
4
Entry not found
nateraw/rendered-sst2
2022-10-25T10:32:21.000Z
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|sst2", "language:en", "license:unknown", "region:us"...
nateraw
null
null
null
0
4
--- annotations_creators: - machine-generated language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual pretty_name: Rendered SST-2 size_categories: - 1K<n<10K source_datasets: - extended|sst2 task_categories: - image-classification task_ids: - multi-class-image-classification --- # Rendered SST-2 The [Rendered SST-2 Dataset](https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md) from OpenAI. Rendered SST-2 is an image classification dataset used to evaluate a model's capability at optical character recognition. This dataset was generated by rendering sentences in the Stanford Sentiment Treebank v2 dataset. This dataset contains two classes (positive and negative) and is divided into three splits: a train split containing 6920 images (3610 positive and 3310 negative), a validation split containing 872 images (444 positive and 428 negative), and a test split containing 1821 images (909 positive and 912 negative).
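The per-class counts quoted above can be sanity-checked against the split sizes (numbers copied from the card):

```python
# Split sizes and per-class counts as stated in the card.
splits = {
    "train":      {"positive": 3610, "negative": 3310, "total": 6920},
    "validation": {"positive": 444,  "negative": 428,  "total": 872},
    "test":       {"positive": 909,  "negative": 912,  "total": 1821},
}
# Each split's class counts sum to its stated size.
for name, s in splits.items():
    assert s["positive"] + s["negative"] == s["total"], name
# Overall dataset size across all three splits.
print(sum(s["total"] for s in splits.values()))  # → 9613
```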
BeIR/quora-generated-queries
2022-10-23T06:14:58.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
0
4
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with a document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id. - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id. - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `query-id`: a `string` feature representing the query id. - `corpus-id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
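A minimal standard-library sketch of parsing the three file formats described above. The in-memory strings stand in for `corpus.jsonl`, `queries.jsonl`, and `qrels.tsv`; they are not the real files:

```python
import csv
import io
import json

# In-memory stand-ins for the three BEIR files, using the card's examples.
corpus_jsonl = '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}\n'
queries_jsonl = '{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}\n'
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n"

# corpus.jsonl / queries.jsonl: one JSON object per line.
corpus = {d["_id"]: d for d in map(json.loads, corpus_jsonl.splitlines())}
queries = {q["_id"]: q["text"] for q in map(json.loads, queries_jsonl.splitlines())}

# qrels.tsv: tab-separated with a header row (query-id, corpus-id, score).
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(qrels)  # → {'q1': {'doc1': 1}}
```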
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
BeIR/cqadupstack-generated-queries
2022-10-23T06:15:48.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
0
4
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with a document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id. - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id. - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `query-id`: a `string` feature representing the query id. - `corpus-id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
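The table above lists an md5 checksum next to each downloadable archive. A minimal sketch (ours, not part of the BEIR toolkit) of verifying a downloaded zip against that column:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: compare against the checksum column of the table, e.g. for scifact.zip:
# assert md5_of_file("scifact.zip") == "5f7d1de60b170fc8027bb7898e2efca1"
```

The function name and chunked-read pattern are our own choices; any md5 implementation that matches the listed digests will do.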
s3prl/mini_voxceleb1
2022-06-19T18:49:50.000Z
[ "region:us" ]
s3prl
null
null
null
0
4
Entry not found
spencer/dialogsum_reformat
2022-06-20T22:27:54.000Z
[ "region:us" ]
spencer
null
null
null
1
4
Entry not found
codeparrot/github-jupyter
2022-10-25T09:30:04.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "language:code", "license:other", "region:us" ]
codeparrot
null
null
null
4
4
--- annotations_creators: [] language_creators: - crowdsourced - expert-generated language: - code license: - other multilinguality: - monolingual size_categories: - unknown source_datasets: [] task_categories: - text-generation task_ids: - language-modeling --- # GitHub Jupyter Dataset ## Dataset Description The dataset was extracted from Jupyter Notebooks on BigQuery. ## Licenses Each example has the license of its associated repository. There are in total 15 licenses: ```python [ 'mit', 'apache-2.0', 'gpl-3.0', 'gpl-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-3.0', 'lgpl-2.1', 'bsd-2-clause', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'isc', 'artistic-2.0' ] ```
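Since each example carries its repository's license, a common preprocessing step is filtering to permissive licenses only. The sketch below is illustrative: the `license` field name and the contents of the `PERMISSIVE` set are our assumptions, and it operates on plain dicts rather than a loaded `Dataset`:

```python
# Licenses from the list above that are commonly treated as permissive
# (this particular selection is our assumption, not the card's).
PERMISSIVE = {"mit", "apache-2.0", "bsd-3-clause", "bsd-2-clause",
              "isc", "unlicense", "cc0-1.0"}

def keep_permissive(examples):
    """Yield only examples whose repository license is in PERMISSIVE."""
    for ex in examples:
        if ex.get("license") in PERMISSIVE:
            yield ex

sample = [{"license": "mit"}, {"license": "gpl-3.0"}, {"license": "isc"}]
print([ex["license"] for ex in keep_permissive(sample)])  # ['mit', 'isc']
```

The same predicate can be passed to `datasets.Dataset.filter` if the dataset is loaded through the 🤗 Datasets library.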
AlekseyKorshuk/books
2022-06-25T12:11:02.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
null
1
4
Entry not found
Nexdata/Multi-class_Fashion_Item_Detection_Data
2023-08-31T02:45:27.000Z
[ "region:us" ]
Nexdata
null
null
null
3
4
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Multi-class_Fashion_Item_Detection_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1057?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 144,810 Images Multi-class Fashion Item Detection Data. In this dataset, 19,968 images of male and 124,842 images of female were included. The Fashion Items were divided into 4 parts based on the season (spring, autumn, summer and winter). In terms of annotation, rectangular bounding boxes were adopted to annotate fashion items. The data can be used for tasks such as fashion items detection, fashion recommendation and other tasks. 
For more details, please refer to the link: https://www.nexdata.ai/datasets/1057?source=Huggingface ### Supported Tasks and Leaderboards object-detection, computer-vision: The dataset can be used to train a model for object detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
davidberg/inflation
2022-06-29T21:57:10.000Z
[ "license:apache-2.0", "region:us" ]
davidberg
null
null
null
0
4
--- license: apache-2.0 ---
davidberg/sentiment-reviews
2022-06-29T22:38:11.000Z
[ "license:postgresql", "region:us" ]
davidberg
null
null
null
0
4
--- license: postgresql ---
launch/ampere
2022-11-09T01:57:52.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
launch
null
null
null
0
4
--- annotations_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual task_categories: - text-classification task_ids: [] pretty_name: AMPERE --- # Dataset Card for AMPERE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Structure](#dataset-structure) - [Dataset Creation](#dataset-creation) ## Dataset Description This dataset is released together with our NAACL 2019 Paper "[`Argument Mining for Understanding Peer Reviews`](https://aclanthology.org/N19-1219/)". If you find our work useful, please cite: ``` @inproceedings{hua-etal-2019-argument, title = "Argument Mining for Understanding Peer Reviews", author = "Hua, Xinyu and Nikolov, Mitko and Badugu, Nikhil and Wang, Lu", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N19-1219", doi = "10.18653/v1/N19-1219", pages = "2131--2137", } ``` This dataset includes 400 scientific peer reviews collected from ICLR 2018 hosted at the Openreview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types: - **evaluation**: a proposition that is not objectively verifiable and does not require any action to be performed, such as qualitative judgement and interpretation of the paper, e.g. "The paper shows nice results on a number of small tasks." - **request**: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as recommendation and suggestion for new experiments, e.g. "I would really like to see how the method performs without this hack." 
- **fact**: a proposition that is verifiable with objective evidence, such as mathematical conclusion and common knowledge of the field, e.g. "This work proposes a dynamic weight update scheme." - **quote**: a quote from the paper or another source, e.g. "The author wrote 'where r is lower bound of feature norm'." - **reference**: a proposition that refers to an objective evidence, such as URL link and citation, e.g. "see MuseGAN (Dong et al), MidiNet (Yang et al), etc." - **non-arg**: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. "Aha, now I understand." ## Dataset Structure The dataset is partitioned into train/val/test sets. Each set is uploaded as a jsonl format. Each line contains the following elements: - `doc_id` (str): a unique id for review document - `text` (list[str]): a list of segmented propositions - `labels` (list[str]): a list of labels corresponding to the propositions An example looks as follows. ``` { "doc_id": "H1WORsdlG", "text": [ "This paper addresses the important problem of understanding mathematically how GANs work.", "The approach taken here is to look at GAN through the lense of the scattering transform.", "Unfortunately the manuscrit submitted is very poorly written.", "Introduction and flow of thoughts is really hard to follow.", "In method sections, the text jumps from one concept to the next without proper definitions.", "Sorry I stopped reading on page 3.", "I suggest to rewrite this work before sending it to review.", "Among many things: - For citations use citep and not citet to have () at the right places.", "- Why does it seems -> Why does it seem etc.", ], "labels": [ 'fact', 'fact', 'evaluation', 'evaluation', 'evaluation', 'evaluation', 'request', 'request', 'request', ] } ``` ## Dataset Creation For human annotators, they will be asked to first read the above definitions and controversial cases carefully. 
The dataset to be annotated consists of 400 reviews partitioned in 20 batches. Each annotator will follow the following steps for annotation: - Step 1: Open a review file with a text editor. The unannotated review file contains only one line, please separate it into multiple lines with each line corresponding to one single proposition. Repeat the above actions on all 400 reviews. - Step 2: Based on the segmented units, label the type for each proposition. Start labeling at the end of each file with the marker "## Labels:". Indicate the line number of the proposition first, then annotate the type, e.g. "1. evaluation" for the first proposition. Repeat the above actions on all 400 reviews. A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type.
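Given the jsonl schema described above (`doc_id`, `text`, `labels`), a small sketch (ours, not from the authors' code) of tallying proposition types across a split:

```python
import json
from collections import Counter

def label_counts(jsonl_lines):
    """Count proposition-type labels across an iterable of jsonl lines."""
    counts = Counter()
    for line in jsonl_lines:
        record = json.loads(line)
        counts.update(record["labels"])
    return counts

# In practice the lines would come from e.g. open("train.jsonl");
# here a single made-up record stands in for a split file.
sample = ['{"doc_id": "H1WORsdlG", "text": ["a", "b"], "labels": ["fact", "evaluation"]}']
print(label_counts(sample))  # Counter({'fact': 1, 'evaluation': 1})
```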
AswiN037/tamil-question-answering-dataset
2022-07-01T07:53:56.000Z
[ "license:afl-3.0", "region:us" ]
AswiN037
null
null
null
2
4
--- license: afl-3.0 --- This dataset contains 5 columns: context, question, answer_start, answer_text, source | Column | Description | | :------------ |:---------------:| | context | A general small paragraph in Tamil language | | question | question framed from the context | | answer_text | text span extracted from the context | | answer_start | start index of answer_text in the context | | source | who framed this context, question, answer pair | Sources: team KBA => (Karthi, Balaji, Azeez), who manually created context, question, answer pairs; CHAII => a Kaggle competition; XQA => a multilingual QA dataset
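Assuming SQuAD-style offsets, i.e. that `answer_start` is the character index of `answer_text` inside `context` (the card does not state this explicitly), the span can be sanity-checked per row; the helper and the example row below are ours:

```python
def answer_span_is_valid(row):
    """Check that context[answer_start:] actually begins with answer_text."""
    start = row["answer_start"]
    return row["context"][start:start + len(row["answer_text"])] == row["answer_text"]

# Illustrative English stand-in for a Tamil row, using the card's column names.
row = {"context": "Chennai is in Tamil Nadu.",
       "question": "Where is Chennai?",
       "answer_start": 14,
       "answer_text": "Tamil Nadu"}
print(answer_span_is_valid(row))  # True
```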
benschill/brain-tumor-collection
2022-07-04T08:26:59.000Z
[ "license:pddl", "region:us" ]
benschill
This dataset is intended as a test case for classification tasks (4 different kinds of brain MRI scans). The dataset consists of 3,264 JPEG images grouped into two splits, training and validation. Each split contains 4 categories labeled as n0~n3, each corresponding to a tumor classification of the MRI scan. | Label | MRI Category | Train Images | Validation Images | | ----- | --------------------- | ------------ | ----------------- | | n0 | glioma_tumor | 826 | 100 | | n1 | meningioma_tumor | 822 | 115 | | n2 | pituitary_tumor | 827 | 74 | | n3 | no_tumor | 395 | 105 |
@misc{kaggle-brain-tumor-classification, title={Kaggle: Brain Tumor Classification (MRI)}, howpublished={\\url{https://www.kaggle.com/datasets/sartajbhuvaji/brain-tumor-classification-mri?resource=download}}, note = {Accessed: 2022-06-30}, }
null
1
4
--- license: pddl ---
dgrnd4/stanford_dog_dataset
2022-07-01T11:27:56.000Z
[ "license:afl-3.0", "region:us" ]
dgrnd4
null
null
null
2
4
--- license: afl-3.0 ---
shahidul034/text_generation_model_data16
2022-07-04T16:57:40.000Z
[ "region:us" ]
shahidul034
null
null
null
0
4
Entry not found
s3prl/iemocap_split
2022-07-10T02:26:18.000Z
[ "region:us" ]
s3prl
null
null
null
0
4
Entry not found
Paul/hatecheck-spanish
2022-07-05T10:27:07.000Z
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:es", "license:cc-by-4.0", "arxiv:2206.09917", "regi...
Paul
null
null
null
3
4
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - es license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Spanish HateCheck size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** paul@rewire.online ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. 
**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
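As a small illustration (ours, not from the MHC repository), the `label_gold` and `label_annotated_maj` columns documented above can be compared to flag cases with annotator disagreement; the column names follow the card, but the rows here are made up:

```python
def has_disagreement(row):
    """True when the annotator majority vote differs from the gold label."""
    return row["label_annotated_maj"] != row["label_gold"]

rows = [
    {"mhc_case_id": "spanish-1", "label_gold": "hateful",
     "label_annotated_maj": "hateful"},
    {"mhc_case_id": "spanish-2", "label_gold": "hateful",
     "label_annotated_maj": "non-hateful"},
]
print([r["mhc_case_id"] for r in rows if has_disagreement(r)])  # ['spanish-2']
```

This reproduces, row by row, what the precomputed `disagreement_in_case` column encodes.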
embedding-data/flickr30k_captions_quintets
2022-08-02T01:59:48.000Z
[ "language:en", "license:mit", "region:us" ]
embedding-data
null
null
null
0
4
--- license: mit language: - en paperswithcode_id: embedding-data/flickr30k-captions pretty_name: flickr30k-captions --- # Dataset Card for "flickr30k-captions" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Usage Example](#usage-example) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://shannon.cs.illinois.edu/DenotationGraph/](https://shannon.cs.illinois.edu/DenotationGraph/) - **Repository:** [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) - **Paper:** [https://transacl.org/ojs/index.php/tacl/article/view/229/33](https://transacl.org/ojs/index.php/tacl/article/view/229/33) - **Point of Contact:** [Peter Young](pyoung2@illinois.edu), [Alice Lai](aylai2@illinois.edu), [Micah Hodosh](mhodosh2@illinois.edu), [Julia Hockenmaier](juliahmr@illinois.edu) ### Dataset Summary We propose to use the visual denotations of linguistic expressions (i.e. 
the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions. Disclaimer: The team releasing Flickr30k did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains a quintet of similar sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value": ``` {"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]} {"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]} ... {"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. 
### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/flickr30k-captions") ``` The dataset is loaded as a `DatasetDict` with the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 31783 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) #### Who are the source language producers? [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Annotations #### Annotation process [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) #### Who are the annotators? [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Personal and Sensitive Information [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Discussion of Biases [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Other Known Limitations [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ## Additional Information ### Dataset Curators [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Licensing Information [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Citation Information [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Contributions Thanks to [Peter Young](pyoung2@illinois.edu), [Alice Lai](aylai2@illinois.edu), [Micah Hodosh](mhodosh2@illinois.edu), [Julia Hockenmaier](juliahmr@illinois.edu) for adding 
this dataset.
pnr-svc/Turkish-Multiclass-Dataset
2022-07-20T21:40:17.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:tr", ...
pnr-svc
The dataset, prepared in Turkish, includes 10,000 test, 10,000 validation and 33,000 train examples. The data is composed of customer comments collected from e-commerce sites.
----Turkish Multiclass Dataset----
null
2
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - tr license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: 'Turkish-Multiclass-Dataset' train-eval-index: - config: TurkishMulticlassDataset task: text-classification task_id: multi_class_classification splits: eval_split: test col_mapping: text: text label: target --- # Dataset Card for "Turkish-Multiclass-Dataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/PnrSvc/Turkish-Multiclass-Dataset] - **Repository:**[https://github.com/PnrSvc/Turkish-Multiclass-Dataset] - **Size of downloaded dataset files:** - **Size of the generated dataset:** ### Dataset Summary The dataset was compiled from user comments 
from e-commerce sites. It consists of 53,000 validation, 53,000 test and 160,600 train examples. Data were classified into 3 classes (positive (pos), negative (neg) and natural (nor)). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### turkish-dataset-v1 - **Size of downloaded dataset files:** - **Size of the generated dataset:** ### Data Fields The data fields are the same among all splits. #### turkish-dataset-v-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 15000 | 5000| 5000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to for adding this dataset.
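For reference, a tiny sketch of the integer-to-label mapping given in the `label` field description above; the `ID2LABEL`/`LABEL2ID` names are illustrative, not part of the dataset:

```python
# Mapping taken from the card: positive (2), natural (1), negative (0).
ID2LABEL = {0: "negative", 1: "natural", 2: "positive"}
LABEL2ID = {label: idx for idx, label in ID2LABEL.items()}

print(ID2LABEL[2])           # positive
print(LABEL2ID["negative"])  # 0
```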
gorkaartola/SC-train-valid-test_AURORA-Gold-SDG-True-Positives
2023-03-21T18:23:12.000Z
[ "region:us" ]
gorkaartola
null
null
null
0
4
Entry not found
biglam/contentious_contexts
2022-08-01T17:02:11.000Z
[ "task_categories:text-classification", "task_ids:sentiment-scoring", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets...
biglam
This dataset contains extracts from historical Dutch newspapers that contain potentially contentious keywords (according to present-day sensibilities). The dataset contains multiple annotations per instance, giving the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time
@misc{ContentiousContextsCorpus2021, author = {Cultural AI}, title = {Contentious Contexts Corpus}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/cultural-ai/ConConCor}}, }
null
2
4
--- annotations_creators: - expert-generated - crowdsourced language: - nl language_creators: - machine-generated license: - cc-by-2.0 multilinguality: - monolingual pretty_name: Contentious Contexts Corpus size_categories: - 1K<n<10K source_datasets: - original tags: - newspapers - historic - dutch - problematic - ConConCor task_categories: - text-classification task_ids: - sentiment-scoring - multi-label-classification --- # Dataset Card for Contentious Contexts Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [ConConCor](https://github.com/cultural-ai/ConConCor) - **Repository:** [ConConCor](https://github.com/cultural-ai/ConConCor) - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** [Jacco van Ossenbruggen](https://github.com/jrvosse) **Note** One can also find a Datasheet produced by the creators of this dataset as a [PDF document](https://github.com/cultural-ai/ConConCor/blob/main/Dataset/DataSheet.pdf) ### Dataset Summary This dataset contains extracts from historical Dutch newspapers 
containing keywords of potentially contentious words (according to present-day sensibilities). The dataset contains multiple annotations per instance, given the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time ### Supported Tasks and Leaderboards - `text-classification`: This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time ### Languages The text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is `nl` ## Dataset Structure ### Data Instances ``` { 'extract_id': 'H97', 'text': 'en waardoor het eerste doel wordt voorbijgestreefd om voor den 5D5c5Y 5d-5@5j5g5d5e5Z5V5V5c een speciale eigen werkingssfeer te scheppen.Intusschen is het', 'target': '5D 5c5Y5d-5@5j5g5d5e5Z5V5V5c', 'annotator_responses_english': [ {'id': 'unknown_2a', 'response': 'Not contentious'}, {'id': 'unknown_2b', 'response': 'Contentious according to current standards'}, {'id': 'unknown_2c', 'response': "I don't know"}, {'id': 'unknown_2d', 'response': 'Contentious according to current standards'}, {'id': 'unknown_2e', 'response': 'Not contentious'}, {'id': 'unknown_2f', 'response': "I don't know"}, {'id': 'unknown_2g', 'response': 'Not contentious'}], 'annotator_responses_dutch': [ {'id': 'unknown_2a', 'response': 'Niet omstreden'}, {'id': 'unknown_2b', 'response': 'Omstreden naar huidige maatstaven'}, {'id': 'unknown_2c', 'response': 'Weet ik niet'}, {'id': 'unknown_2d', 'response': 'Omstreden naar huidige maatstaven'}, {'id': 'unknown_2e', 'response': 'Niet omstreden'}, {'id': 'unknown_2f', 'response': 'Weet ik niet'}, {'id': 'unknown_2g', 'response': 'Niet omstreden'}], 'annotator_suggestions': [ {'id': 'unknown_2a', 'suggestion': ''}, {'id': 'unknown_2b', 'suggestion': 'ander ras nodig'}, {'id': 'unknown_2c', 'suggestion': 
'personen van ander ras'},
   {'id': 'unknown_2d', 'suggestion': ''},
   {'id': 'unknown_2e', 'suggestion': ''},
   {'id': 'unknown_2f', 'suggestion': ''},
   {'id': 'unknown_2g', 'suggestion': 'ras'}]
}
```

### Data Fields

|extract_id|text|target|annotator_responses_english|annotator_responses_dutch|annotator_suggestions|
|---|---|---|---|---|---|
|Unique identifier|Text|Target phrase or word|Response (translated to English)|Response in Dutch|Suggestions, if present|

### Data Splits

- Train: 2,720

## Dataset Creation

### Curation Rationale

> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term "contentious" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something.
>
> The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution website where people can report "offensive language".

### Source Data

#### Initial Data Collection and Normalization

> The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as "article", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus.
The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly. #### Who are the source language producers? [N/A] ### Annotations #### Annotation process > The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample > - 'Omstreden'(Contentious) > - 'Niet omstreden'(Not contentious) > - 'Weet ik niet'(I don't know) > - 'Onleesbare OCR'(Illegible OCR)</br> 2 open fields > - 'Andere omstreden termen in de context'(Other contentious terms in the context) > - 'Notities'(Notes)</br> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. 
Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage: > - The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase; > - The context window of 5 sentences per sample was found optimal; > - The number of samples per annotator was increased to 50; > - The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective; > - The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards); > - The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments; > - Another open question was added at the end of the annotation asking how much time it took to complete the annotation. #### Who are the annotators? Volunteers and Expert annotators ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ## Accessing the annotations Each example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script. 
An example of how one could generate an "OCR quality rating" based on the number of times annotators labelled an example with `Illegible OCR`:

```python
from collections import Counter

def calculate_ocr_score(example):
    """Fraction of annotators who did *not* flag the extract as illegible, rounded to 3 places."""
    annotator_responses = [response['response'] for response in example['annotator_responses_english']]
    counts = Counter(annotator_responses)
    bad_ocr_ratings = counts.get("Illegible OCR", 0)
    return round(1 - bad_ocr_ratings / len(annotator_responses), 3)

dataset = dataset.map(lambda example: {"ocr_score": calculate_ocr_score(example)})
```

To take the majority vote (or report a tie) on whether an example is labelled contentious or not:

```python
def most_common_vote(example):
    """Majority vote over contentious/not-contentious responses; equal counts are reported as a tie."""
    annotator_responses = [response['response'] for response in example['annotator_responses_english']]
    counts = Counter(annotator_responses)
    contentious_count = counts.get("Contentious according to current standards", 0)
    not_contentious_count = counts.get("Not contentious", 0)
    if contentious_count > not_contentious_count:
        return "contentious"
    if contentious_count < not_contentious_count:
        return "not_contentious"
    return "tied"

dataset = dataset.map(lambda example: {"majority_vote": most_common_vote(example)})
```

### Social Impact of Dataset

This dataset can be used to see how words change in meaning over time.

### Discussion of Biases

> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.

Since this project was explicitly created to help assess bias, it should be used primarily in the context of assessing bias and of methods for detecting bias.
### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Cultural AI](https://github.com/cultural-ai)

### Licensing Information

CC-BY

### Citation Information

```
@misc{ContentiousContextsCorpus2021,
  author = {Cultural AI},
  title = {Contentious Contexts Corpus},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cultural-ai/ConConCor}},
}
```
wmt/wmt21
2022-07-31T18:12:55.000Z
[ "region:us" ]
wmt
null
null
null
0
4
Entry not found
KaranChand/atcosim_input
2022-08-01T15:33:47.000Z
[ "region:us" ]
KaranChand
null
null
null
0
4
Entry not found
rungalileo/emotion
2022-08-04T04:58:18.000Z
[ "region:us" ]
rungalileo
null
null
null
0
4
Entry not found
rungalileo/sst2
2022-10-05T22:48:35.000Z
[ "region:us" ]
rungalileo
null
null
null
0
4
Entry not found
NbAiLab/norwegian-paws-x
2023-08-18T11:26:40.000Z
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machi...
NbAiLab
Norwegian PAWS-X, Bokmaal and Nynorsk machine-translated versions of PAWS-X. PAWS-X, a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. English language is available by default. All translated pairs are sourced from examples in PAWS-Wiki. For further details, see the accompanying paper: PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification (https://arxiv.org/abs/1908.11828) NOTE: There might be some missing or wrong labels in the dataset and we have replaced them with -1.
@InProceedings{pawsx2019emnlp, title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}}, author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason}, booktitle = {Proc. of EMNLP}, year = {2019} }
null
0
4
--- annotations_creators: - expert-generated - machine-generated language_creators: - machine-generated language: - nb - nn license: - cc-by-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - extended|other-paws task_categories: - text-classification task_ids: - semantic-similarity-classification - semantic-similarity-scoring - text-scoring - multi-input-text-classification pretty_name: 'NbAiLab/norwegian-paws-x' --- # Dataset Card for Norwegian PAWS-X ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [NB AiLab](https://ai.nb.no/) - **Repository:** [Norwegian PAWS-X Repository](#) - **Point of Contact:** [ai-lab@nb.no](mailto:ai-lab@nb.no) ### Dataset Summary Norwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. The Norwegian PAWS-X dataset has machine-translated versions of the original PAWS-X dataset into Norwegian Bokmål and Nynorsk. 
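Each record pairs two Norwegian sentences with a binary paraphrase label, following the PAWS-X schema. A minimal illustration — the field names match the card, but the sentence values are invented for this sketch:

```python
# Hypothetical record following the PAWS-X schema; the sentences are invented.
record = {
    "id": 1,
    "sentence1": "Hun flyttet til Oslo i 2010.",
    "sentence2": "I 2010 flyttet hun til Oslo.",
    "label": 1,  # 1 = paraphrase, 0 = not a paraphrase
}

verdict = "paraphrase" if record["label"] == 1 else "not a paraphrase"
print(verdict)
```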
### Languages

- Norwegian Bokmål (`nb`)
- Norwegian Nynorsk (`nn`)

## Dataset Structure

### Data Instances

Each instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other.

### Data Fields

- `id`: An identifier for each example (int32)
- `sentence1`: The first sentence in Norwegian (string)
- `sentence2`: The second sentence in Norwegian (string)
- `label`: Binary label, where '1' indicates the sentences are paraphrases and '0' indicates they are not (class_label: '0', '1')

### Data Splits

The dataset is divided into training, validation, and test sets. The number of instances in each split matches the original PAWS-X dataset.

## Dataset Creation

### Curation Rationale

Norwegian PAWS-X was created to extend the PAWS paraphrase identification task to the Norwegian language, including both Bokmål and Nynorsk standards. This promotes multilingual and cross-lingual research in paraphrase identification.

### Source Data

The source data consists of human-translated PAWS pairs in six languages. For the Norwegian PAWS-X dataset, these pairs were translated into Norwegian Bokmål and Nynorsk using FAIR's No Language Left Behind 3.3B-parameter model.

### Annotations

The dataset retains the original PAWS labels, which were created through a combination of expert and machine-generated annotations.

### Personal and Sensitive Information

There is no known personal or sensitive information in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset helps promote the development of NLP technologies in Norwegian.

### Other Known Limitations

There may be issues related to translation quality, as the translations were generated by a machine translation model.

## Additional Information

### Dataset Curators

The dataset was curated by researcher Javier de la Rosa.
### Licensing Information Original PAWS-X License: - The dataset may be freely used for any purpose, with acknowledgment of Google LLC as the data source being appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset. Norwegian PAWS-X License: - CC BY 4.0
intfloat/simlm-msmarco
2022-08-11T09:25:24.000Z
[ "region:us" ]
intfloat
null
null
null
0
4
Entry not found
skorkmaz88/iris
2022-08-10T21:52:52.000Z
[ "region:us" ]
skorkmaz88
null
null
null
0
4
Entry not found
pustozerov/crema_d_diarization
2022-08-16T08:09:57.000Z
[ "region:us" ]
pustozerov
null
null
null
0
4
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: Crema D Diarization
size_categories:
- 10M<n<100M
source_datasets: []
tags: []
task_categories:
- audio-classification
- automatic-speech-recognition
- voice-activity-detection
task_ids:
- audio-emotion-recognition
- speaker-identification
---

# Dataset Card for Crema D Diarization

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Contributions](#contributions)

### Contributions

Thanks to [@EvgeniiPustozerov](https://github.com/EvgeniiPustozerov) for adding this dataset.
nsarker/plantspecies-demo
2022-08-12T15:07:38.000Z
[ "region:us" ]
nsarker
null
null
null
1
4
Entry not found
galatolo/TeTIm-Eval
2022-12-15T14:58:24.000Z
[ "task_categories:text-to-image", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "curated", "high-quality", "text-to-image", "evaluation", "valid...
galatolo
Text To Image Evaluation (TeTIm-Eval)
TODO
null
1
4
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - cc multilinguality: - monolingual pretty_name: TeTIm-Eval size_categories: - 1K<n<10K source_datasets: - original tags: - curated - high-quality - text-to-image - evaluation - validation task_categories: - text-to-image task_ids: [] --- # TeTIm-Eval
cjvt/komet
2022-11-27T16:34:59.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:sl", "license:cc-by-nc-sa-4.0", "metaphor-classification", "metaphor-frame-classification", "multiword-expression-detec...
cjvt
KOMET 1.0 is a hand-annotated corpus of metaphorical expressions which contains about 200,000 words from Slovene journalistic, fiction and on-line texts. To annotate metaphors in the corpus, an adapted and modified procedure of the MIPVU protocol (Steen et al., 2010: A method for linguistic metaphor identification: From MIP to MIPVU, https://www.benjamins.com/catalog/celcr.14) was used. The lexical units (words) whose contextual meanings are opposed to their basic meanings are considered metaphor-related words. The basic and contextual meaning for each word in the corpus was identified using the Dictionary of the standard Slovene Language. The corpus was annotated for the following metaphoric relations: indirect metaphor (MRWi), direct metaphor (MRWd), borderline case (WIDLI) and metaphor signal (MFlag). In addition, the corpus introduces a new 'frame' tag, which gives information about the concept to which it refers.
@InProceedings{antloga2020komet, title = {Korpus metafor KOMET 1.0}, author={Antloga, \v{S}pela}, booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student abstracts)}, year={2020}, pages={167-170} }
null
0
4
--- annotations_creators: - expert-generated language_creators: - found language: - sl license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: [] task_categories: - token-classification task_ids: [] pretty_name: KOMET tags: - metaphor-classification - metaphor-frame-classification - multiword-expression-detection --- # Dataset Card for KOMET ### Dataset Summary KOMET 1.0 is a hand-annotated Slovenian corpus of metaphorical expressions which contains about 200 000 words (across 13 963 sentences) from Slovene journalistic, fiction and online texts. ### Supported Tasks and Leaderboards Metaphor detection, metaphor type classification, metaphor frame classification. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'document_name': 'komet49.div.xml', 'idx': 60, 'idx_paragraph': 24, 'idx_sentence': 1, 'sentence_words': ['Morda', 'zato', ',', 'ker', 'resnice', 'nočete', 'sprejeti', ',', 'in', 'nadaljujete', 'po', 'svoje', '.'], 'met_type': [{'type': 'MRWi', 'word_indices': [10]}], 'met_frame': [{'type': 'spatial_orientation', 'word_indices': [10]}, {'type': 'adverbial_phrase', 'word_indices': [10, 11]}]} ``` The sentence comes from the document `komet49.div.xml`, is the 60th sentence in the document and is the 1st sentence inside the 24th paragraph in the document. The word "po" is annotated as an indirect metaphor-related word (`MRWi`). The phrase "po svoje" is annotated with the frame "adverbial phrase" and the word "po" is additionally annotated with the frame "spatial_orientation". 
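For token-classification training, the span-based `met_type` annotations can be flattened into per-token BIO tags. A minimal sketch using the sample instance above — the helper function is ours, not part of the dataset:

```python
def met_type_to_bio(sentence_words, met_type):
    """Flatten span annotations into one BIO tag per token.

    Later annotations overwrite earlier ones on overlapping indices.
    """
    tags = ["O"] * len(sentence_words)
    for ann in met_type:
        for pos, idx in enumerate(sorted(ann["word_indices"])):
            tags[idx] = ("B-" if pos == 0 else "I-") + ann["type"]
    return tags

words = ['Morda', 'zato', ',', 'ker', 'resnice', 'nočete', 'sprejeti',
         ',', 'in', 'nadaljujete', 'po', 'svoje', '.']
tags = met_type_to_bio(words, [{'type': 'MRWi', 'word_indices': [10]}])
# "po" (index 10) is tagged 'B-MRWi'; every other token stays 'O'.
```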
### Data Fields

- `document_name`: a string containing the name of the document in which the sentence appears;
- `idx`: a uint32 containing the index of the sentence inside its document;
- `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears;
- `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph;
- `sentence_words`: words in the sentence;
- `met_type`: metaphors in the sentence, marked by their type and word indices;
- `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices.

## Dataset Creation

The texts were sampled from the Corpus of Slovene youth literature MAKS (journalistic, fiction and online texts). Initially, words whose meaning deviates from their primary meaning in the Dictionary of the standard Slovene Language were marked as metaphors. Then, their type was determined, i.e. whether they are an indirect (MRWi), direct (MRWd), borderline (WIDLI) metaphor or a metaphor flag (signal, marker; MFlag). For more information, please check out the paper (in Slovenian) or contact the dataset author.

## Additional Information

### Dataset Curators

Špela Antloga.

### Licensing Information

CC BY-NC-SA 4.0

### Citation Information

```
@InProceedings{antloga2020komet,
title = {Korpus metafor KOMET 1.0},
author={Antloga, \v{S}pela},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student abstracts)},
year={2020},
pages={167-170}
}
```

### Contributions

Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
yhavinga/cnn_dailymail_dutch
2022-08-20T12:39:20.000Z
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:nl", "license:apache-2.0", "region:us" ]
yhavinga
CNN/DailyMail non-anonymized summarization dataset, translated to Dutch with ccmatrix. There are two features: - article: text of news article, used as the document to be summarized - highlights: joined text of highlights with <s> and </s> around each highlight, which is the target summary
@article{DBLP:journals/corr/SeeLM17, author = {Abigail See and Peter J. Liu and Christopher D. Manning}, title = {Get To The Point: Summarization with Pointer-Generator Networks}, journal = {CoRR}, volume = {abs/1704.04368}, year = {2017}, url = {http://arxiv.org/abs/1704.04368}, archivePrefix = {arXiv}, eprint = {1704.04368}, timestamp = {Mon, 13 Aug 2018 16:46:08 +0200}, biburl = {https://dblp.org/rec/bib/journals/corr/SeeLM17}, bibsource = {dblp computer science bibliography, https://dblp.org} } @inproceedings{hermann2015teaching, title={Teaching machines to read and comprehend}, author={Hermann, Karl Moritz and Kocisky, Tomas and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil}, booktitle={Advances in neural information processing systems}, pages={1693--1701}, year={2015} }
null
1
4
--- annotations_creators: - no-annotation language_creators: - found language: - nl license: - apache-2.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization task_ids: - news-articles-summarization paperswithcode_id: cnn-daily-mail-1 pretty_name: CNN / Daily Mail train-eval-index: - config: 3.0.0 task: summarization task_id: summarization splits: eval_split: test col_mapping: article: text highlights: target --- # Dataset Card for CNN Dailymail Dutch 🇳🇱🇧🇪 Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description Note: the data below is from the English version at [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail). 
- **Homepage:**
- **Repository:** [CNN / DailyMail Dataset repository](https://github.com/abisee/cnn-dailymail)
- **Paper:** [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/K16-1028.pdf)
- **Leaderboard:** [Papers with Code leaderboard for CNN / Dailymail Dataset](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail)
- **Point of Contact:** [Abigail See](mailto:abisee@stanford.edu)

### Dataset Summary

The CNN / DailyMail Dutch 🇳🇱🇧🇪 Dataset is a Dutch translation of the English-language CNN / DailyMail dataset, containing just over 300k unique news articles written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.

*This dataset currently (Aug '22) has a single config, which is config `3.0.0` of [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) translated to Dutch with [yhavinga/t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi).*

### Supported Tasks and Leaderboards

- 'summarization': [Version 3.0.0 of the CNN / DailyMail Dataset](https://www.aclweb.org/anthology/K16-1028.pdf) can be used to train a model for abstractive and extractive summarization ([Version 1.0.0](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf) was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's [ROUGE](https://huggingface.co/metrics/rouge) score for a given article is when compared to the highlight as written by the original article author.
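At its core, ROUGE-1 F1 is unigram-overlap F1 between a candidate summary and the reference highlights. A minimal whitespace-tokenized sketch — real ROUGE implementations add stemming and more careful tokenization, so their scores will differ:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference (naive tokenization)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("de vrouw overleed aan boord",
                "de vrouw overleed aan boord van het schip"))
```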
[Zhong et al (2020)](https://www.aclweb.org/anthology/2020.acl-main.552.pdf) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the [Papers With Code leaderboard](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) for more models. ### Languages The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data. ## Dataset Structure ### Data Instances For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples. ``` {'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62', 'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship's doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship's doctors said. The Veendam left New York 36 days ago for a South America tour.' 
'highlights': 'The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .'} ``` The average token count for the articles and the highlights are provided below: | Feature | Mean Token Count | | ---------- | ---------------- | | Article | 781 | | Highlights | 56 | ### Data Fields - `id`: a string containing the heximal formated SHA1 hash of the url where the story was retrieved from - `article`: a string containing the body of the news article - `highlights`: a string containing the highlight of the article as written by the article author ### Data Splits The CNN/DailyMail dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for Version 3.0.0 of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 287,113 | | Validation | 13,368 | | Test | 11,490 | ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015. The code for the original data collection is available at <https://github.com/deepmind/rc-data>. The articles were downloaded using archives of <www.cnn.com> and <www.dailymail.co.uk> on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. 
Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <https://cs.nyu.edu/~kcho/DMQA/>. An updated version of the code that does not anonymize the data is available at <https://github.com/abisee/cnn-dailymail>. Hermann et al provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them. #### Who are the source language producers? The text was written by journalists at CNN and the Daily Mail. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Version 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences. This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated. ### Discussion of Biases [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'. 
Because the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published.

### Other Known Limitations

News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article [(Kryściński et al, 2019)](https://www.aclweb.org/anthology/D19-1051.pdf). [Chen et al (2016)](https://www.aclweb.org/anthology/P16-1223.pdf) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.

It should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles.

## Additional Information

### Dataset Curators

The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gu̇lçehre of Université de Montréal modified Hermann et al's collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions.

The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040.
### Licensing Information The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @inproceedings{see-etal-2017-get, title = "Get To The Point: Summarization with Pointer-Generator Networks", author = "See, Abigail and Liu, Peter J. and Manning, Christopher D.", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1099", doi = "10.18653/v1/P17-1099", pages = "1073--1083", abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. 
We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.", } ``` ``` @inproceedings{DBLP:conf/nips/HermannKGEKSB15, author={Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom}, title={Teaching Machines to Read and Comprehend}, year={2015}, cdate={1420070400000}, pages={1693-1701}, url={http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend}, booktitle={NIPS}, crossref={conf/nips/2015} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding the English version of this dataset. The dataset was translated on Cloud TPU compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/).
SLPL/naab-raw
2022-11-03T06:34:28.000Z
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "multilinguality:monolingual", "language:fa", "license:mit", "arxiv:2208.13486", "region:us" ]
SLPL
Huge corpora of textual data are a crucial need for training deep models such as transformer-based ones, and this need is even more pressing in lower-resource languages like Farsi. We propose naab, the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب, which means pure and high-grade. This repository contains the raw (uncleaned) version of the corpus.
@misc{https://doi.org/10.48550/arxiv.2208.13486, doi = {10.48550/ARXIV.2208.13486}, url = {https://arxiv.org/abs/2208.13486}, author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {naab: A ready-to-use plug-and-play corpus for Farsi}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} }
null
5
4
--- language: - fa license: - mit multilinguality: - monolingual task_categories: - fill-mask - text-generation task_ids: - language-modeling - masked-language-modeling pretty_name: naab-raw (raw version of the naab corpus) --- # naab-raw (raw version of the naab corpus) _[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_ ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Changelog](#changelog) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Contribution Guideline](#contribution-guideline) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL) - **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486) - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com) ### Dataset Summary This is the raw (uncleaned) version of the [naab](https://huggingface.co/datasets/SLPL/naab) corpus. You can also use or customize our [preprocess script](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess) to make your own cleaned corpus. This repository is a hub for all Farsi corpora.
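As a rough illustration of the kind of normalization such a preprocessing script applies, here is a minimal sketch; the `clean_paragraph` helper is invented for illustration and is not part of the project's actual pipeline:

```python
import re

def clean_paragraph(text: str) -> str:
    # Hypothetical cleaning step: replace control characters with spaces
    # and collapse runs of whitespace, leaving the Farsi text intact.
    text = re.sub(r"[\u0000-\u001f]", " ", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip()

print(clean_paragraph("این  یک\tتست \n است."))  # -> "این یک تست است."
```

A real pipeline would likely add further steps (deduplication, encoding fixes, character normalization), but the shape is the same: a pure function mapped over raw paragraphs.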
Feel free to add your corpus following the [contribution guidelines](#contribution-guideline). You can load the dataset with the command below: ```python from datasets import load_dataset dataset = load_dataset("SLPL/naab-raw") ``` If you want to load a specific part of the corpus, you can set the config name to that corpus name: ```python from datasets import load_dataset dataset = load_dataset("SLPL/naab-raw", "CC-fa") ``` ### Supported Tasks and Leaderboards This corpus can be used to train any language model with Masked Language Modeling (MLM) or any other self-supervised objective. - `language-modeling` - `masked-language-modeling` ### Changelog It is crucial to log changes on projects that change periodically. Please refer to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md) for more details. ## Dataset Structure Each row of the dataset looks something like this: ```json { 'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.", } ``` + `text` : the textual paragraph. ### Data Splits This corpus contains only a single split (the `train` split). ## Dataset Creation ### Curation Rationale Here are some details about each part of this corpus. #### CC-fa The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata, and text extractions. We use its Farsi part here. #### W2C W2C stands for Web to Corpus and it contains several corpora. We include its Farsi part in this corpus. ### Contribution Guideline To add your dataset, follow the steps below and make a pull request so it can be merged into _naab-raw_: 1. Add your dataset to `_CORPUS_URLS` in `naab-raw.py` like: ```python ... "DATASET_NAME": "LINK_TO_A_PUBLIC_DOWNLOADABLE_FILE.txt" ... ``` 2. Add a log of your changes to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md). 3.
Add a short description to the [Curation Rationale](#curation-rationale) under a subsection with your dataset name. ### Personal and Sensitive Information Since this corpus is essentially a compilation of existing corpora, we take no responsibility for personal information included in it. If you detect any such violations, please let us know and we will do our best to remove them from the corpus as soon as possible. We tried our best to provide anonymity while keeping the crucial information, and we shuffled some parts of the corpus so that information passed through possible conversations would not be harmful. ## Additional Information ### Dataset Curators + Sadra Sabouri (Sharif University of Technology) + Elnaz Rahmati (Sharif University of Technology) ### Licensing Information MIT ### Citation Information ``` @article{sabouri2022naab, title={naab: A ready-to-use plug-and-play corpus for Farsi}, author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein}, journal={arXiv preprint arXiv:2208.13486}, year={2022} } ``` DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486). ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset. ### Keywords + Farsi + Persian + raw text + پیکره فارسی + پیکره متنی + آموزش مدل زبانی
projecte-aina/WikiCAT_ca
2023-09-13T12:39:56.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:automatically-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-sa-3.0", "region:us" ]
projecte-aina
WikiCAT: Text Classification Catalan dataset from the Viquipedia
null
1
4
--- annotations_creators: - automatically-generated language_creators: - found language: - ca license: - cc-by-sa-3.0 multilinguality: - monolingual pretty_name: wikicat_ca size_categories: - unknown source_datasets: [] task_categories: - text-classification task_ids: - multi-class-classification --- # WikiCAT_ca: Catalan Text Classification dataset ## Dataset Description - **Paper:** - **Point of Contact:** carlos.rodriguez1@bsc.es **Repository** https://github.com/TeMU-BSC/WikiCAT ### Dataset Summary WikiCAT_ca is a Catalan corpus for thematic text classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 13201 articles from Viquipèdia classified under 13 different categories. This dataset was developed by BSC TeMU as part of the AINA project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora. ### Supported Tasks and Leaderboards Text classification, Language Model ### Languages `ca-ES` ## Dataset Structure ### Data Instances Two json files, one for each split. ### Data Fields We used a simple model with the article text and associated labels, without further metadata. #### Example: <pre> {"version": "1.1.0", "data": [ { 'sentence': ' Celsius és conegut com l\'inventor de l\'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com "fred" col·locant-lo (...)', 'label': 'Ciència' }, . . . ] } </pre> #### Labels 'Ciència_i_Tecnologia', 'Dret', 'Economia', 'Enginyeria', 'Entreteniment', 'Esport', 'Filosofia', 'Història', 'Humanitats', 'Matemàtiques', 'Música', 'Política', 'Religió' ### Data Splits * dev_ca.json: 2484 label-document pairs * train_ca.json: 9907 label-document pairs ## Dataset Creation ### Methodology “Category” starting pages are chosen to represent the topics in each language.
We extract, for each category, the main pages, as well as the subcategory pages and the individual pages under this first level. For each page, the "summary" provided by Wikipedia is also extracted as the representative text. ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The source data are thematic categories in the different Wikipedias. #### Who are the source language producers? ### Annotations #### Annotation process Automatic annotation #### Who are the annotators? [N/A] ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset We hope this corpus contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases We are aware that this data might contain biases. We have not applied any steps to reduce their impact. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a> license. ### Contributions [N/A]
nbtpj/bionlp2021MAS
2022-08-27T15:37:33.000Z
[ "license:afl-3.0", "region:us" ]
nbtpj
MEDIQA @ NAACL-BioNLP 2021 -- Task 2: Multi-answer summarization https://sites.google.com/view/mediqa2021 Training Data The MEDIQA-AnS Dataset could be used for training. Participants can use available external resources such as existing medical QA datasets. Validation and Test Sets The original answers are generated by the consumer health question answering system CHiQA, which searches for answers from only trustworthy medical information sources. The summaries are manually created by medical experts. The validation set contains 192 answers associated with 50 questions. Each question has at least two answers and their summaries. For each question, we provide two types of summaries: extractive and abstractive. We encourage the use of all types of summarization approaches (extractive, abstractive, and hybrid). We will also provide the questions in the official test set as they can be used as additional inputs for the summarization models. The test set contains 303 answers associated with 80 questions. For each test question, we provide two reference summaries: extractive and abstractive. ** In the official competition, we used the abstractive reference summaries to evaluate the abstractive systems and the extractive summaries to evaluate the extractive systems (only extractive summaries were used on AIcrowd-Task2).
null
null
0
4
--- license: afl-3.0 --- ## MEDIQA2021-MAS task source data is available [here](https://github.com/abachaa/MEDIQA2021/tree/main/Task2) Description: 1. Data features Multi-answer summarization with: * key: key of each question * question: question * text: a merge of the texts of all answers (for the train split, a merge of the article and section parts) * sum\_abs: abstractive multi-answer summarization * sum\_ext: extractive multi-answer summarization 2. train\_article / train\_sec Same structure as train, but: * train: text is a merge of the texts of all answers (a merge of the article and section parts) * train\_article: text is a merge of all sub-answers' articles * train\_sec: text is a merge of all sub-answers' sections
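To make the feature layout concrete, a single record roughly has the shape below; the values are invented for illustration and are not drawn from the actual dataset:

```python
# A hypothetical record with the features described above.
record = {
    "key": "Q0001",                                    # question key
    "question": "What are the treatments for anemia?",  # consumer question
    "text": "answer 1 text ... answer 2 text ...",      # merged answer texts
    "sum_abs": "An abstractive multi-answer summary.",  # abstractive reference
    "sum_ext": "An extractive multi-answer summary.",   # extractive reference
}

# train, train_article, and train_sec share this schema; they differ
# only in how `text` is assembled from the sub-answers.
print(sorted(record))  # -> ['key', 'question', 'sum_abs', 'sum_ext', 'text']
```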
BDas/EnglishNLPDataset
2022-08-27T11:13:01.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", ...
BDas
The dataset, prepared in English, includes 10,000 test, 10,000 validation, and 80,000 training samples. The data is composed of customer comments collected from e-commerce sites.
----EnglishNLPDataset----
null
0
4
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: 'EnglishNLPDataset' --- # Dataset Card for "EnglishNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/BihterDass/EnglishTextClassificationDataset] - **Repository:** [https://github.com/BihterDass/EnglishTextClassificationDataset] - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Dataset Summary The dataset was compiled from user comments on e-commerce sites. It consists of 10,000 validation, 10,000 test, and 80,000 training samples.
Data were classified into 3 classes (positive (pos), negative (neg), and natural (nor)). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### english-dataset-v1 - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Data Fields The data fields are the same among all splits. #### english-dataset-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
mrm8488/tass-2019
2022-09-02T15:55:56.000Z
[ "region:us" ]
mrm8488
null
null
null
0
4
Entry not found
mrm8488/go_emotions-es-mt
2022-10-20T19:23:36.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:go_emotions"...
mrm8488
null
null
null
4
4
--- annotations_creators: - crowdsourced language_creators: - found language: - es license: - apache-2.0 multilinguality: - monolingual size_categories: - 100K<n<1M - 10K<n<100K source_datasets: - go_emotions task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: GoEmotions tags: - emotion --- # GoEmotions Spanish ## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [GoEmotions](https://huggingface.co/datasets/go_emotions) dataset. #### For more information check the official [dataset card](https://huggingface.co/datasets/go_emotions)
victor/autotrain-data-satellite-image-classification
2022-09-05T09:30:13.000Z
[ "task_categories:image-classification", "region:us" ]
victor
null
null
null
1
4
--- task_categories: - image-classification --- # AutoTrain Dataset for project: satellite-image-classification ## Dataset Description This dataset has been automatically processed by AutoTrain for project satellite-image-classification. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<256x256 CMYK PIL image>", "target": 0 }, { "image": "<256x256 CMYK PIL image>", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 1200 | | valid | 300 |
dwisaji/indonesia-telecomunication-sentiment-dataset
2022-09-16T11:36:02.000Z
[ "license:mit", "region:us" ]
dwisaji
null
null
null
1
4
--- license: mit --- This dataset contains sentiment data for the Indonesian telecommunications industry. It was sourced from Twitter and manually annotated with Prodigy (spaCy).
TheGreatRambler/mm2_world
2022-11-11T08:08:15.000Z
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:multilingual", "license:cc-by...
TheGreatRambler
null
null
null
1
4
--- language: - multilingual license: - cc-by-nc-sa-4.0 multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - other - object-detection - text-retrieval - token-classification - text-generation task_ids: [] pretty_name: Mario Maker 2 super worlds tags: - text-mining --- # Mario Maker 2 super worlds Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 super worlds dataset consists of 289 thousand super worlds from Nintendo's online service totaling around 13.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 super worlds dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'pid': '14510618610706594411', 'world_id': 'c96012bef256ba6b_20200513204805563301', 'worlds': 1, 'levels': 5, 'planet_type': 0, 'created': 1589420886, 'unk1': [some binary data], 'unk5': 3, 'unk6': 1, 'unk7': 1, 'thumbnail': [some binary data] } ``` Each row is a unique super world denoted by the `world_id` created by the player denoted by the `pid`. Thumbnails are binary PNGs. `unk1` describes the super world itself, including the world map, but its format is unknown as of now. You can also download the full dataset. 
Note that this will download ~13.5GB: ```python ds = load_dataset("TheGreatRambler/mm2_world", split="train") ``` ## Data Structure ### Data Instances ```python { 'pid': '14510618610706594411', 'world_id': 'c96012bef256ba6b_20200513204805563301', 'worlds': 1, 'levels': 5, 'planet_type': 0, 'created': 1589420886, 'unk1': [some binary data], 'unk5': 3, 'unk6': 1, 'unk7': 1, 'thumbnail': [some binary data] } ``` ### Data Fields |Field|Type|Description| |---|---|---| |pid|string|The player ID of the user who created this super world| |world_id|string|World ID| |worlds|int|Number of worlds| |levels|int|Number of levels| |planet_type|int|Planet type, enum below| |created|int|UTC timestamp of when this super world was created| |unk1|bytes|Unknown| |unk5|int|Unknown| |unk6|int|Unknown| |unk7|int|Unknown| |thumbnail|bytes|The thumbnail, as a JPEG binary| |thumbnail_url|string|The old URL of this thumbnail| |thumbnail_size|int|The filesize of this thumbnail| |thumbnail_filename|string|The filename of this thumbnail| ### Data Splits The dataset only contains a train split. ## Enums The dataset contains some enum integer fields. These can be used to convert the integers back to their string equivalents: ```python SuperWorldPlanetType = { 0: "Earth", 1: "Moon", 2: "Sand", 3: "Green", 4: "Ice", 5: "Ringed", 6: "Red", 7: "Spiral" } ``` <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
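For example, the `SuperWorldPlanetType` enum above can be applied to the `planet_type` field of a decoded row (a sketch reusing values from the sample row shown earlier):

```python
SuperWorldPlanetType = {
    0: "Earth", 1: "Moon", 2: "Sand", 3: "Green",
    4: "Ice", 5: "Ringed", 6: "Red", 7: "Spiral",
}

# A decoded row, abridged to the fields needed here.
row = {"world_id": "c96012bef256ba6b_20200513204805563301", "planet_type": 0}
print(SuperWorldPlanetType[row["planet_type"]])  # -> Earth
```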
## Considerations for Using the Data The dataset consists of super worlds from many different Mario Maker 2 players globally, and as such harmful depictions could be present in their super world thumbnails.
cjvt/rsdo4_en_sl
2022-09-20T17:38:33.000Z
[ "task_categories:translation", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:translation", "size_categories:100K<n<1M", "language:en", "language:sl...
cjvt
The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order which can be used for machine translation training.
@misc{rsdo4_en_sl, title = {Parallel corpus {EN}-{SL} {RSDO4} 1.0}, author = {Repar, Andra{\v z} and Lebar Bajec, Iztok}, url = {http://hdl.handle.net/11356/1457}, year = {2021} }
null
1
4
--- annotations_creators: - expert-generated - found language: - en - sl language_creators: - crowdsourced license: - cc-by-sa-4.0 multilinguality: - translation pretty_name: RSDO4 en-sl parallel corpus size_categories: - 100K<n<1M source_datasets: [] tags: - parallel data - rsdo task_categories: - translation - text2text-generation - text-generation task_ids: [] --- # Dataset Card for RSDO4 en-sl parallel corpus ### Dataset Summary The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order which can be used for machine translation training. ### Supported Tasks and Leaderboards Machine translation. ### Languages English, Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'en_seq': 'the total value of its assets exceeds EUR 30000000000;', 'sl_seq': 'skupna vrednost njenih sredstev presega 30000000000 EUR' } ``` ### Data Fields - `en_seq`: a string containing the English sequence; - `sl_seq`: a string containing the Slovene sequence. ## Additional Information ### Dataset Curators Andraž Repar and Iztok Lebar Bajec. ### Licensing Information CC BY-SA 4.0. ### Citation Information ``` @misc{rsdo4_en_sl, title = {Parallel corpus {EN}-{SL} {RSDO4} 1.0}, author = {Repar, Andra{\v z} and Lebar Bajec, Iztok}, url = {http://hdl.handle.net/11356/1457}, year = {2021} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
Adapting/chinese_biomedical_NER_dataset
2022-09-21T18:21:15.000Z
[ "license:mit", "region:us" ]
Adapting
null
null
null
2
4
--- license: mit --- # 1 Source Source: https://github.com/alibaba-research/ChineseBLUE # 2 Definition of the tagset ```python tag_set = [ 'B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物', 'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术', 'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验' ] tag2id = lambda tag: tag_set.index(tag) id2tag = lambda id: tag_set[id] ``` # 3 Citation To use this dataset in your work please cite: Ningyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining ``` @article{zhang2020conceptualized, title={Conceptualized Representation Learning for Chinese Biomedical Text Mining}, author={Zhang, Ningyu and Jia, Qianghuai and Yin, Kangping and Dong, Liang and Gao, Feng and Hua, Nengwei}, journal={arXiv preprint arXiv:2008.10813}, year={2020} } ```
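# 4 Tagset usage A quick, self-contained sanity check of the `tag2id`/`id2tag` helpers defined in section 2 (this snippet is illustrative and not part of the original dataset release; indices follow the `tag_set` ordering above):

```python
tag_set = [
    'B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物',
    'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术',
    'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验',
]
tag2id = lambda tag: tag_set.index(tag)
id2tag = lambda id: tag_set[id]

# Round-trip: every tag maps to a unique id and back.
assert all(id2tag(tag2id(t)) == t for t in tag_set)
print(tag2id('O'))  # -> 11
```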
StonyBrookNLP/tellmewhy
2022-09-29T13:05:59.000Z
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
StonyBrookNLP
null
null
null
0
4
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: [] paperswithcode_id: null pretty_name: TellMeWhy --- # Dataset Card for TellMeWhy ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://stonybrooknlp.github.io/tellmewhy/ - **Repository:** https://github.com/StonyBrookNLP/tellmewhy - **Paper:** https://aclanthology.org/2021.findings-acl.53/ - **Leaderboard:** None - **Point of Contact:** [Yash Kumar Lal](mailto:ylal@cs.stonybrook.edu) ### Dataset Summary TellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described. ### Supported Tasks and Leaderboards The dataset is designed to test why-question answering abilities of models when bound by local context.
### Languages English ## Dataset Structure ### Data Instances A typical data point consists of a story, a question and a crowdsourced answer to that question. Additionally, the instance also indicates whether the question's answer would be implicit or if it is explicitly stated in text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context. ``` { "narrative":"Cam ordered a pizza and took it home. He opened the box to take out a slice. Cam discovered that the store did not cut the pizza for him. He looked for his pizza cutter but did not find it. He had to use his chef knife to cut a slice.", "question":"Why did Cam order a pizza?", "original_sentence_for_question":"Cam ordered a pizza and took it home.", "narrative_lexical_overlap":0.3333333333, "is_ques_answerable":"Not Answerable", "answer":"Cam was hungry.", "is_ques_answerable_annotator":"Not Answerable", "original_narrative_form":[ "Cam ordered a pizza and took it home.", "He opened the box to take out a slice.", "Cam discovered that the store did not cut the pizza for him.", "He looked for his pizza cutter but did not find it.", "He had to use his chef knife to cut a slice." ], "question_meta":"rocstories_narrative_41270_sentence_0_question_0", "helpful_sentences":[ ], "human_eval":false, "val_ann":[ ], "gram_ann":[ ] } ``` ### Data Fields - `question_meta` - Unique meta for each question in the corpus - `narrative` - Full narrative from ROCStories. Used as the context with which the question and answer are associated - `question` - Why question about an action or event in the narrative - `answer` - Crowdsourced answer to the question - `original_sentence_for_question` - Sentence in narrative from which question was generated - `narrative_lexical_overlap` - Unigram overlap of answer with the narrative - `is_ques_answerable` - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. 
If "Not Answerable", it is part of the Implicit-Answer questions subset, which is harder for models. - `is_ques_answerable_annotator` - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative. - `original_narrative_form` - ROCStories narrative as an array of its sentences - `human_eval` - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset. - `val_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human_eval flag is False. - `gram_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human_eval flag is False. ### Data Splits The data is split into training, validation, and test sets. | Train | Valid | Test | | ------ | ----- | ----- | | 23964 | 2992 | 3563 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data ROCStories corpus (Mostafazadeh et al., 2016) #### Initial Data Collection and Normalization ROCStories was used to create why-questions related to actions and events in the stories. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process Amazon Mechanical Turk workers were provided a story and an associated why-question, and asked to answer. Three answers were collected for each question. For a small subset of questions, the quality of answers was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset. #### Who are the annotators? 
Amazon Mechanical Turk workers ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Evaluation To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings [here](https://github.com/StonyBrookNLP/tellmewhy). Once inference on the test set has been completed, please filter out the answers on which human evaluation needs to be performed by selecting the questions (one answer per question, deduplication might be needed) in the test set where the `human_eval` flag is set to `True`. This subset can then be used to complete the requisite evaluation on TellMeWhy. ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{lal-etal-2021-tellmewhy, title = "{T}ell{M}e{W}hy: A Dataset for Answering Why-Questions in Narratives", author = "Lal, Yash Kumar and Chambers, Nathanael and Mooney, Raymond and Balasubramanian, Niranjan", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.53", doi = "10.18653/v1/2021.findings-acl.53", pages = "596--610", } ``` ### Contributions Thanks to [@yklal95](https://github.com/ykl7) for adding this dataset.
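The filtering described in the Evaluation section (keep only test questions with `human_eval` set to `True`, one answer per question) can be sketched in a few lines of plain Python. The field names come from the card; the rows below are hypothetical:

```python
def human_eval_subset(rows):
    """Keep one answer per question among rows flagged for human evaluation.

    `rows` is an iterable of dicts carrying at least the `human_eval` and
    `question_meta` fields described in this card; deduplication keeps the
    first answer seen for each question.
    """
    seen = set()
    subset = []
    for row in rows:
        if not row["human_eval"]:
            continue
        if row["question_meta"] in seen:
            continue  # one answer per question is enough for human evaluation
        seen.add(row["question_meta"])
        subset.append(row)
    return subset

# Hypothetical test-set rows (the full data has several answers per question).
test_rows = [
    {"question_meta": "q1", "human_eval": True, "answer": "a"},
    {"question_meta": "q1", "human_eval": True, "answer": "b"},
    {"question_meta": "q2", "human_eval": False, "answer": "c"},
]
print([r["question_meta"] for r in human_eval_subset(test_rows)])  # → ['q1']
```

This is only a sketch of the selection step; model answers would replace the `answer` field before running the authors' evaluation suite.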
rubrix/mini_imdb
2022-09-28T08:30:46.000Z
[ "region:us" ]
rubrix
null
null
null
0
4
Entry not found
jmercat/risk_biased_dataset
2023-08-01T19:08:31.000Z
[ "license:cc-by-nc-4.0", "region:us" ]
jmercat
Dataset of pre-processed samples from a small portion of the Waymo Open Motion Data for our risk-biased prediction task.
@InProceedings{NiMe:2022, author = {Haruki Nishimura, Jean Mercat, Blake Wulfe, Rowan McAllister}, title = {RAP: Risk-Aware Prediction for Robust Planning}, booktitle = {Proceedings of the 2022 IEEE International Conference on Robot Learning (CoRL)}, month = {December}, year = {2022}, address = {Grafton Road, Auckland CBD, Auckland 1010}, url = {}, }
null
0
4
--- license: cc-by-nc-4.0 --- The code is provided under an Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Under the license, the code is provided royalty-free for non-commercial purposes only. The code may be covered by patents, and if you want to use the code for commercial purposes, please contact us for a different license. This dataset is a pre-processed small sample of the Waymo Open Motion Dataset intended for illustration purposes only.
merkalo-ziri/vsosh2022
2022-09-29T11:02:34.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:ru", "license:other", "region:us" ]
merkalo-ziri
null
null
null
0
4
--- annotations_creators: - found language: - ru language_creators: - found license: - other multilinguality: - monolingual pretty_name: vsosh_dataset size_categories: - 1K<n<10K source_datasets: [] tags: [] task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
Divyanshu/IE_SemParse
2023-07-13T18:35:10.000Z
[ "task_categories:text2text-generation", "task_ids:parsing", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:as", "language:bn", "language:gu", "language:hi", "lang...
Divyanshu
IE-SemParse is an Inter-bilingual Seq2seq Semantic parsing dataset for 11 distinct Indian languages
@misc{aggarwal2023evaluating, title={Evaluating Inter-Bilingual Semantic Parsing for Indian Languages}, author={Divyanshu Aggarwal and Vivek Gupta and Anoop Kunchukuttan}, year={2023}, eprint={2304.13005}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
0
4
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te license: - cc0-1.0 multilinguality: - multilingual pretty_name: IE-SemParse size_categories: - 1M<n<10M source_datasets: - original task_categories: - text2text-generation task_ids: - parsing --- # Dataset Card for "IE-SemParse" ## Table of Contents - [Dataset Card for "IE-SemParse"](#dataset-card-for-ie-semparse) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset usage](#dataset-usage) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Human Verification Process](#human-verification-process) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** <https://github.com/divyanshuaggarwal/IE-SemParse> - **Paper:** [Evaluating Inter-Bilingual Semantic Parsing for Indian Languages](https://arxiv.org/abs/2304.13005) - **Point of Contact:** [Divyanshu Aggarwal](mailto:divyanshuggrwl@gmail.com) ### Dataset Summary IE-SemParse is an Inter-Bilingual Semantic Parsing dataset for eleven major Indic languages: Assamese (‘as’), Gujarati (‘gu’), Kannada (‘kn’), Malayalam (‘ml’), Marathi (‘mr’), Odia (‘or’), Punjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi 
(‘hi’), and Bengali (‘bn’). ### Supported Tasks and Leaderboards **Tasks:** Inter-Bilingual Semantic Parsing **Leaderboards:** There is currently no leaderboard for this dataset. ### Languages - `Assamese (as)` - `Bengali (bn)` - `Gujarati (gu)` - `Kannada (kn)` - `Hindi (hi)` - `Malayalam (ml)` - `Marathi (mr)` - `Oriya (or)` - `Punjabi (pa)` - `Tamil (ta)` - `Telugu (te)` ... <!-- Below is the dataset split given for `hi` dataset. ```python DatasetDict({ train: Dataset({ features: ['utterance', 'logical form', 'intent'], num_rows: 36000 }) test: Dataset({ features: ['utterance', 'logical form', 'intent'], num_rows: 3000 }) validation: Dataset({ features: ['utterance', 'logical form', 'intent'], num_rows: 1500 }) }) ``` --> ## Dataset usage Code snippet for loading the dataset with the `datasets` library. ```python from datasets import load_dataset dataset = load_dataset("Divyanshu/IE_SemParse") ``` ## Dataset Creation The English portions of three multilingual semantic parsing datasets were machine-translated into the 11 listed Indic languages. ### Curation Rationale [More information needed] ### Source Data [mTOP dataset](https://aclanthology.org/2021.eacl-main.257/) [multilingualTOP dataset](https://github.com/awslabs/multilingual-top) [multi-ATIS++ dataset](https://paperswithcode.com/paper/end-to-end-slot-alignment-and-recognition-for) #### Initial Data Collection and Normalization [Detailed in the paper](https://arxiv.org/abs/2304.13005) #### Who are the source language producers? 
[Detailed in the paper](https://arxiv.org/abs/2304.13005) #### Human Verification Process [Detailed in the paper](https://arxiv.org/abs/2304.13005) ## Considerations for Using the Data ### Social Impact of Dataset [Detailed in the paper](https://arxiv.org/abs/2304.13005) ### Discussion of Biases [Detailed in the paper](https://arxiv.org/abs/2304.13005) ### Other Known Limitations [Detailed in the paper](https://arxiv.org/abs/2304.13005) ### Dataset Curators Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use any of the datasets, models or code modules, please cite the following paper: ``` @misc{aggarwal2023evaluating, title={Evaluating Inter-Bilingual Semantic Parsing for Indian Languages}, author={Divyanshu Aggarwal and Vivek Gupta and Anoop Kunchukuttan}, year={2023}, eprint={2304.13005}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ### Contributions -->
heegyu/kowikitext
2022-10-02T05:07:59.000Z
[ "license:cc-by-sa-3.0", "region:us" ]
heegyu
한국어 위키피디아 article
@InProceedings{huggingface:dataset, title = {kowikitext}, author={Wikipedia}, year={2022} }
null
0
4
--- license: cc-by-sa-3.0 --- Korean Wikipedia article dump (20221001) - 1334694 rows - download size: 474MB ```python from datasets import load_dataset ds = load_dataset("heegyu/kowikitext", "20221001") ds["train"][0] ``` ``` {'id': '5', 'revid': '595831', 'url': 'https://ko.wikipedia.org/wiki?curid=5', 'title': '지미 카터', 'text': '제임스 얼 카터 주니어(, 1924년 10월 1일 ~ )는 민주당 출신 미국 39대 대통령 (1977년 ~ 1981년)이다.\n생애.\n어린 시절.\n지미 카터는 조지아주 섬터 카운티 플레인스 마을에서 태어났다.\n조지아 공과대학교를 졸업하였다. 그 후 해군에 들어가 전함·원자력·잠수함의 승무원으로 일하였다. 1953년 미국 해군 대위로 예편하였고 이후 땅콩·면화 등을 가꿔 많은 돈을 벌었다. 그의 별명이 "땅콩 농부" (Peanut Farmer)로 알려졌다.\n정계 입문.\n1962년 조지아주 상원 의원 선거에서 낙선하나 그 선거가 부정선거 였음을 ... " } ```
arbml/MPOLD
2022-11-03T13:14:22.000Z
[ "region:us" ]
arbml
null
null
null
0
4
Entry not found
arbml/SaudiIrony
2022-11-03T14:48:05.000Z
[ "region:us" ]
arbml
null
null
null
0
4
Entry not found
argilla/go_emotions_multi-label
2022-10-07T13:22:38.000Z
[ "region:us" ]
argilla
null
null
null
0
4
Entry not found
rdp-studio/paimon-voice
2022-10-10T02:58:45.000Z
[ "license:cc-by-nc-sa-4.0", "doi:10.57967/hf/0034", "region:us" ]
rdp-studio
null
null
null
0
4
--- license: cc-by-nc-sa-4.0 --- This dataset is uploading.
AndyChiang/cloth
2022-10-14T14:10:37.000Z
[ "task_categories:fill-mask", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:mit", "cloze", "mid-school", "high-school", "exams", "region:us" ]
AndyChiang
null
null
null
2
4
--- pretty_name: cloth multilinguality: - monolingual language: - en license: - mit size_categories: - 10K<n<100K tags: - cloze - mid-school - high-school - exams task_categories: - fill-mask --- # cloth **CLOTH** is a collection of nearly 100,000 cloze questions drawn from middle school and high school English exams. The composition of the CLOTH dataset is shown below. | Number of questions | Train | Valid | Test | | ------------------- | ----- | ----- | ----- | | **Middle school** | 22056 | 3273 | 3198 | | **High school** | 54794 | 7794 | 8318 | | **Total** | 76850 | 11067 | 11516 | Source: https://www.cs.cmu.edu/~glai1/data/cloth/
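The table above is internally consistent; a quick tally in Python confirms the "Total" row and the "nearly 100,000" figure:

```python
# Question counts per split, copied from the CLOTH table above.
counts = {
    "middle_school": {"train": 22056, "valid": 3273, "test": 3198},
    "high_school": {"train": 54794, "valid": 7794, "test": 8318},
}

# Per-split totals should match the "Total" row of the table.
totals = {split: sum(level[split] for level in counts.values())
          for split in ("train", "valid", "test")}
print(totals)                # {'train': 76850, 'valid': 11067, 'test': 11516}
print(sum(totals.values()))  # 99433 questions overall
```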
AndyChiang/dgen
2022-10-14T14:19:16.000Z
[ "task_categories:fill-mask", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:mit", "cloze", "sciq", "mcql", "ai2 science questions", "region:us" ]
AndyChiang
null
null
null
0
4
--- pretty_name: dgen multilinguality: - monolingual language: - en license: - mit size_categories: - 1K<n<10K tags: - cloze - sciq - mcql - ai2 science questions task_categories: - fill-mask --- # dgen **DGen** is a cloze question dataset covering multiple domains, including science, vocabulary, common sense and trivia. It is compiled from a wide variety of datasets including SciQ, MCQL, AI2 Science Questions, etc. The composition of the DGen dataset is shown below. | DGen dataset | Train | Valid | Test | Total | | ----------------------- | ----- | ----- | ---- | ----- | | **Number of questions** | 2321 | 300 | 259 | 2880 | Source: https://github.com/DRSY/DGen
arbml/Quran_Hadith
2022-10-14T17:45:37.000Z
[ "region:us" ]
arbml
null
null
null
1
4
--- dataset_info: features: - name: SS dtype: string - name: SV dtype: string - name: Verse1 dtype: string - name: TS dtype: string - name: TV dtype: string - name: Verse2 dtype: string - name: Label dtype: string splits: - name: train num_bytes: 7351452 num_examples: 8144 download_size: 2850963 dataset_size: 7351452 --- # Dataset Card for "Quran_Hadith" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arbml/arastance
2022-10-14T22:14:25.000Z
[ "region:us" ]
arbml
null
null
null
0
4
--- dataset_info: features: - name: filename dtype: string - name: claim dtype: string - name: claim_url dtype: string - name: article dtype: string - name: stance dtype: class_label: names: 0: Discuss 1: Disagree 2: Unrelated 3: Agree - name: article_title dtype: string - name: article_url dtype: string splits: - name: test num_bytes: 5611165 num_examples: 646 - name: train num_bytes: 29682402 num_examples: 2848 - name: validation num_bytes: 7080226 num_examples: 569 download_size: 18033579 dataset_size: 42373793 --- # Dataset Card for "arastance" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
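Since `stance` is a `class_label` feature, examples store it as an integer index. A minimal decoding sketch using the index-to-name mapping from the schema above (with the `datasets` library, the feature's `int2str` method does the same job):

```python
# Index-to-name mapping copied from the `stance` class_label above.
STANCE_NAMES = ["Discuss", "Disagree", "Unrelated", "Agree"]

def decode_stance(label_id: int) -> str:
    """Map an integer stance label back to its human-readable name."""
    return STANCE_NAMES[label_id]

print(decode_stance(3))  # → Agree
```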
mohamedabdullah/Arabic-unique-words
2022-10-20T07:10:18.000Z
[ "region:us" ]
mohamedabdullah
null
null
null
1
4
Entry not found
nousr/laion-1m-vit-h-14
2022-10-23T00:43:25.000Z
[ "region:us" ]
nousr
null
null
null
0
4
Entry not found
darrow-ai/USClassActions
2022-12-09T12:18:13.000Z
[ "license:gpl-3.0", "arxiv:2211.00582", "region:us" ]
darrow-ai
null
null
null
0
4
--- license: gpl-3.0 --- ## Dataset Description - **Homepage:** https://www.darrow.ai/ - **Repository:** https://github.com/darrow-labs/ClassActionPrediction - **Paper:** https://arxiv.org/abs/2211.00582 - **Leaderboard:** N/A - **Point of Contact:** [Gila Hayat](mailto:gila@darrow.ai), [Gil Semo](mailto:gil.semo@darrow.ai) #### More Details & Collaborations Feel free to contact us in order to get a larger dataset. We would be happy to collaborate on future works. ### Dataset Summary USClassActions is an English dataset of 3K complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool. ### Data Instances ```python from datasets import load_dataset dataset = load_dataset('darrow-ai/USClassActions') ``` ### Data Fields `id`: (**int**) a unique identifier of the document \ `target_text`: (**str**) the complaint text \ `verdict`: (**str**) the outcome of the case \ ### Curation Rationale The dataset was curated by Darrow.ai (2022). ### Citation Information *Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus* *ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US* *Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022* ``` @InProceedings{Darrow-Niklaus-2022, author = {Semo, Gil and Bernsohn, Dor and Hagag, Ben and Hayat, Gila and Niklaus, Joel}, title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US}, booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop}, year = {2022}, location = {Abu Dhabi, EMNLP2022}, } ```
juliensimon/food102
2022-10-26T19:43:21.000Z
[ "region:us" ]
juliensimon
null
null
null
2
4
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: 0: apple_pie 1: baby_back_ribs 2: baklava 3: beef_carpaccio 4: beef_tartare 5: beet_salad 6: beignets 7: bibimbap 8: boeuf_bourguignon 9: bread_pudding 10: breakfast_burrito 11: bruschetta 12: caesar_salad 13: cannoli 14: caprese_salad 15: carrot_cake 16: ceviche 17: cheese_plate 18: cheesecake 19: chicken_curry 20: chicken_quesadilla 21: chicken_wings 22: chocolate_cake 23: chocolate_mousse 24: churros 25: clam_chowder 26: club_sandwich 27: crab_cakes 28: creme_brulee 29: croque_madame 30: cup_cakes 31: deviled_eggs 32: donuts 33: dumplings 34: edamame 35: eggs_benedict 36: escargots 37: falafel 38: filet_mignon 39: fish_and_chips 40: foie_gras 41: french_fries 42: french_onion_soup 43: french_toast 44: fried_calamari 45: fried_rice 46: frozen_yogurt 47: garlic_bread 48: gnocchi 49: greek_salad 50: grilled_cheese_sandwich 51: grilled_salmon 52: guacamole 53: gyoza 54: hamburger 55: hot_and_sour_soup 56: hot_dog 57: huevos_rancheros 58: hummus 59: ice_cream 60: lasagna 61: lobster_bisque 62: lobster_roll_sandwich 63: macaroni_and_cheese 64: macarons 65: miso_soup 66: mussels 67: nachos 68: omelette 69: onion_rings 70: oysters 71: pad_thai 72: paella 73: pancakes 74: panna_cotta 75: peking_duck 76: pho 77: pizza 78: pork_chop 79: poutine 80: prime_rib 81: pulled_pork_sandwich 82: ramen 83: ravioli 84: red_velvet_cake 85: risotto 86: samosa 87: sashimi 88: scallops 89: seaweed_salad 90: shrimp_and_grits 91: spaghetti_bolognese 92: spaghetti_carbonara 93: spring_rolls 94: steak 95: strawberry_shortcake 96: sushi 97: tacos 98: takoyaki 99: tiramisu 100: tuna_tartare 101: waffles splits: - name: test num_bytes: 1461368965.25 num_examples: 25500 - name: train num_bytes: 4285789478.25 num_examples: 76500 download_size: 5534173074 dataset_size: 5747158443.5 --- # Dataset Card for "food102" This is based on the [food101](https://huggingface.co/datasets/food101) dataset with an 
extra class generated with a Stable Diffusion model. A detailed walk-through is available on [YouTube](https://youtu.be/sIe0eo3fYQ4).
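The split sizes above work out exactly if, as in the upstream food101 layout, every class contributes 750 training and 250 test images. A quick check (the per-class counts are an assumption carried over from food101, not stated in this card):

```python
NUM_CLASSES = 102       # food101's 101 classes plus the one generated class
TRAIN_PER_CLASS = 750   # assumed from food101's standard per-class layout
TEST_PER_CLASS = 250

print(NUM_CLASSES * TRAIN_PER_CLASS)  # 76500, matching the train split above
print(NUM_CLASSES * TEST_PER_CLASS)   # 25500, matching the test split above
```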
lmqg/qg_annotation
2022-10-30T15:08:30.000Z
[ "multilinguality:monolingual", "size_categories:<1K", "language:en", "license:cc-by-4.0", "arxiv:2210.03992", "region:us" ]
lmqg
Human-annotated question generated by models.
@inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", }
null
0
4
--- license: cc-by-4.0 pretty_name: QG Annotation language: en multilinguality: monolingual size_categories: <1K --- # Dataset Card for "lmqg/qg_annotation" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This dataset contains questions generated by different models and annotated by humans, used to measure the correlation of automatic metrics with human judgments in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992). ### Languages English (en) ## Dataset Structure An example of 'train' looks as follows. ```python { "correctness": 1.8, "grammaticality": 3.0, "understandability": 2.4, "prediction": "What trade did the Ming dynasty have a shortage of?", "Bleu_4": 0.4961682999359617, "METEOR": 0.3572683356086923, "ROUGE_L": 0.7272727272727273, "BERTScore": 0.9142221808433532, "MoverScore": 0.6782580808848975, "reference_raw": "What important trade did the Ming Dynasty have with Tibet?", "answer_raw": "horse trade", "paragraph_raw": "Some scholars note that Tibetan leaders during the Ming frequently engaged in civil war and conducted their own foreign diplomacy with neighboring states such as Nepal. Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet. Others argue that the significant religious nature of the relationship of the Ming court with Tibetan lamas is underrepresented in modern scholarship. In hopes of reviving the unique relationship of the earlier Mongol leader Kublai Khan (r. 
1260\u20131294) and his spiritual superior Drog\u00f6n Ch\u00f6gyal Phagpa (1235\u20131280) of the Sakya school of Tibetan Buddhism, the Yongle Emperor (r. 1402\u20131424) made a concerted effort to build a secular and religious alliance with Deshin Shekpa (1384\u20131415), the Karmapa of the Karma Kagyu school. However, the Yongle Emperor's attempts were unsuccessful.", "sentence_raw": "Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet.", "reference_norm": "what important trade did the ming dynasty have with tibet ?", "model": "T5 Large" } ``` ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
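The card states the dataset is used to measure how well automatic metrics track the human ratings; a minimal plain-Python correlation sketch over hypothetical annotated rows (the paper's actual analysis may use different statistics and groupings):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical annotated rows: one human rating and one automatic metric each.
rows = [
    {"correctness": 1.8, "BERTScore": 0.91},
    {"correctness": 2.6, "BERTScore": 0.95},
    {"correctness": 1.2, "BERTScore": 0.84},
]
r = pearson([row["correctness"] for row in rows],
            [row["BERTScore"] for row in rows])
print(round(r, 3))
```

In practice this would be computed per metric (BLEU-4, METEOR, ROUGE-L, BERTScore, MoverScore) against each human rating dimension.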
sileod/probability_words_nli
2023-09-06T14:56:43.000Z
[ "task_categories:text-classification", "task_categories:multiple-choice", "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:multiple-choice-qa", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "langu...
sileod
Probing neural language models for understanding of words of estimative probability
@inproceedings{sileo-moens-2023-probing, title = "Probing neural language models for understanding of words of estimative probability", author = "Sileo, Damien and Moens, Marie-francine", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.41", doi = "10.18653/v1/2023.starsem-1.41", pages = "469--476", }
null
3
4
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: 'probability_words_nli' paperswithcode_id: probability-words-nli size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification - multiple-choice - question-answering task_ids: - open-domain-qa - multiple-choice-qa - natural-language-inference - multi-input-text-classification tags: - wep - words of estimative probability - probability - logical reasoning - soft logic - nli - verbal probabilities - natural-language-inference - reasoning - logic train-eval-index: - config: usnli task: text-classification task_id: multi-class-classification splits: train_split: train eval_split: validation col_mapping: sentence1: context sentence2: hypothesis label: label metrics: - type: accuracy name: Accuracy - type: f1 name: F1 binary - config: reasoning-1hop task: text-classification task_id: multi-class-classification splits: train_split: train eval_split: validation col_mapping: sentence1: context sentence2: hypothesis label: label metrics: - type: accuracy name: Accuracy - type: f1 name: F1 binary - config: reasoning-2hop task: text-classification task_id: multi-class-classification splits: train_split: train eval_split: validation col_mapping: sentence1: context sentence2: hypothesis label: label metrics: - type: accuracy name: Accuracy - type: f1 name: F1 binary --- # Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible". 
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities according to [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words). The dataset can be used as natural language inference data (context, premise, label) or multiple-choice question answering (context, valid_hypothesis, invalid_hypothesis). Code: [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing) # Citation https://arxiv.org/abs/2211.03358 ```bib @inproceedings{sileo-moens-2023-probing, title = "Probing neural language models for understanding of words of estimative probability", author = "Sileo, Damien and Moens, Marie-francine", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.41", doi = "10.18653/v1/2023.starsem-1.41", pages = "469--476", } ```
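The two framings above are interconvertible; a toy sketch of turning a multiple-choice row into a shuffled two-choice question (field names follow the card's description, the example values are hypothetical):

```python
import random

def to_multiple_choice(example, rng):
    """Turn a (context, valid_hypothesis, invalid_hypothesis) row into a
    shuffled two-choice question, returning the choices and the gold index."""
    choices = [example["valid_hypothesis"], example["invalid_hypothesis"]]
    order = [0, 1]
    rng.shuffle(order)
    shuffled = [choices[i] for i in order]
    gold = order.index(0)  # position of the valid hypothesis after shuffling
    return {"context": example["context"], "choices": shuffled, "label": gold}

example = {
    "context": "It is likely that the match was postponed.",
    "valid_hypothesis": "The match was probably postponed.",
    "invalid_hypothesis": "The match was certainly not postponed.",
}
mc = to_multiple_choice(example, random.Random(0))
print(mc["choices"][mc["label"]])  # always the valid hypothesis
```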
fkdosilovic/docee-event-classification
2022-11-03T21:39:31.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "wiki", "news", "event-detection", "region:us" ]
fkdosilovic
null
null
null
0
4
--- language: - en license: - mit multilinguality: - monolingual pretty_name: DocEE size_categories: - 10K<n<100K source_datasets: - original tags: - wiki - news - event-detection task_categories: - text-classification task_ids: - multi-class-classification --- # Dataset Card for DocEE Dataset ## Dataset Description - **Homepage:** - **Repository:** [DocEE Dataset repository](https://github.com/tongmeihan1995/docee) - **Paper:** [DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction](https://aclanthology.org/2022.naacl-main.291/) ### Dataset Summary DocEE is an English-language dataset containing more than 27k news and Wikipedia articles. The dataset was primarily annotated and collected for large-scale document-level event extraction. ### Data Fields - `title`: TODO - `text`: TODO - `event_type`: TODO - `date`: TODO - `metadata`: TODO **Note: this repo contains only the event detection portion of the dataset.** ### Data Splits The dataset has 2 splits: _train_ and _test_. The train split contains 21,949 documents, while the test split contains 5,536 documents. In total, the dataset contains 27,485 documents classified into 59 event types. #### Differences from the original split(s) Originally, the dataset was split into three splits: train, validation and test. For the purposes of this repository, the original splits were joined back together and divided into train and test splits while making sure that the splits were stratified across document sources (news and wiki) and event types. Originally, the `title` column additionally contained information from the `date` and `metadata` columns. This information is now separated into three columns: `date`, `metadata` and `title`.
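The stratified re-split described above can be sketched as a per-stratum shuffle-and-cut. This is an illustration, not the exact procedure used for this repo; the `source` field name and the 80/20 ratio are assumptions:

```python
import random
from collections import defaultdict

def stratified_split(docs, test_fraction=0.2, seed=0):
    """Split documents into train/test, stratified by (source, event_type)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for doc in docs:
        strata[(doc["source"], doc["event_type"])].append(doc)
    train, test = [], []
    for group in strata.values():
        rng.shuffle(group)
        cut = int(len(group) * test_fraction)  # size of the test slice for this stratum
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

# Toy corpus: 10 documents in each of three (source, event_type) strata.
docs = [{"source": s, "event_type": e, "id": i}
        for i, (s, e) in enumerate([("news", "Earthquakes"),
                                    ("wiki", "Earthquakes"),
                                    ("news", "Floods")] * 10)]
train, test = stratified_split(docs)
print(len(train), len(test))  # 24 6
```

Because each stratum is cut separately, both splits preserve the joint source/event-type distribution, which is what the re-split above aims for.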
pixta-ai/e-commerce-apparel-dataset-for-ai-ml
2023-02-22T14:21:46.000Z
[ "license:other", "region:us" ]
pixta-ai
null
null
null
2
4
--- license: other --- # 1. Overview This dataset is a collection of 5,000+ images of clothing & apparel that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region, offering fully-managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects. # 2. Use case The e-commerce apparel dataset could be used for various AI & computer vision models: Product Visual Search, Similar Product Recommendation, Product Catalog,... Each dataset is supported by both an AI and a human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets. # 3. About PIXTA PIXTASTOCK is the largest Asian-featured stock platform providing data, content, tools and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology in managing, curating and processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at https://www.pixta.ai/ or contact via our email contact@pixta.ai.
armanc/ScienceQA
2022-11-11T08:34:35.000Z
[ "region:us" ]
armanc
null
null
null
2
4
This is the ScienceQA dataset by Saikh et al. (2022). ``` @article{10.1007/s00799-022-00329-y, author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak}, title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles}, year = {2022}, journal = {Int. J. Digit. Libr.}, month = {sep} } ```
bigbio/evidence_inference
2022-12-22T15:44:37.000Z
[ "multilinguality:monolingual", "language:en", "license:mit", "region:us" ]
bigbio
The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts', associated with it. These prompts will ask about the relationship between an intervention and a comparator with respect to an outcome, as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. For the sake of this task, we assume that a particular article will report that the intervention of interest either significantly increased, significantly decreased or had no significant effect on the outcome, relative to the comparator.
@inproceedings{deyoung-etal-2020-evidence, title = "Evidence Inference 2.0: More Data, Better Models", author = "DeYoung, Jay and Lehman, Eric and Nye, Benjamin and Marshall, Iain and Wallace, Byron C.", booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.bionlp-1.13", pages = "123--132", }
null
1
4
--- language: - en bigbio_language: - English license: mit multilinguality: monolingual bigbio_license_shortname: MIT pretty_name: Evidence Inference 2.0 homepage: https://github.com/jayded/evidence-inference bigbio_pubmed: True bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for Evidence Inference 2.0 ## Dataset Description - **Homepage:** https://github.com/jayded/evidence-inference - **Pubmed:** True - **Public:** True - **Tasks:** QA The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts', associated with it. These prompts will ask about the relationship between an intervention and a comparator with respect to an outcome, as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. For the sake of this task, we assume that a particular article will report that the intervention of interest either significantly increased, significantly decreased or had no significant effect on the outcome, relative to the comparator. ## Citation Information ``` @inproceedings{deyoung-etal-2020-evidence, title = "Evidence Inference 2.0: More Data, Better Models", author = "DeYoung, Jay and Lehman, Eric and Nye, Benjamin and Marshall, Iain and Wallace, Byron C.", booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.bionlp-1.13", pages = "123--132", } ```
bigbio/pubhealth
2022-12-22T15:46:21.000Z
[ "multilinguality:monolingual", "language:en", "license:mit", "region:us" ]
bigbio
A dataset of 11,832 claims for fact-checking, which are related to a range of health topics including biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy (e.g., abortion, mental health, women’s health), and other public health-related stories
@article{kotonya2020explainable, title={Explainable automated fact-checking for public health claims}, author={Kotonya, Neema and Toni, Francesca}, journal={arXiv preprint arXiv:2010.09926}, year={2020} }
null
0
4
--- language: - en bigbio_language: - English license: mit multilinguality: monolingual bigbio_license_shortname: MIT pretty_name: PUBHEALTH homepage: https://github.com/neemakot/Health-Fact-Checking/tree/master/data bigbio_pubmed: False bigbio_public: True bigbio_tasks: - TEXT_CLASSIFICATION --- # Dataset Card for PUBHEALTH ## Dataset Description - **Homepage:** https://github.com/neemakot/Health-Fact-Checking/tree/master/data - **Pubmed:** False - **Public:** True - **Tasks:** TXTCLASS A dataset of 11,832 claims for fact-checking, which are related to a range of health topics including biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy (e.g., abortion, mental health, women’s health), and other public health-related stories. ## Citation Information ``` @article{kotonya2020explainable, title={Explainable automated fact-checking for public health claims}, author={Kotonya, Neema and Toni, Francesca}, journal={arXiv preprint arXiv:2010.09926}, year={2020} } ```
bigbio/spl_adr_200db
2022-12-22T15:46:56.000Z
[ "multilinguality:monolingual", "language:en", "license:cc0-1.0", "region:us" ]
bigbio
The United States Food and Drug Administration (FDA) partnered with the National Library of Medicine to create a pilot dataset containing standardised information about known adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs), the documents FDA uses to exchange information about drugs and other products, were manually annotated for adverse reactions at the mention level to facilitate development and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were then normalised to the Unified Medical Language System (UMLS) and to the Medical Dictionary for Regulatory Activities (MedDRA).
@article{demner2018dataset, author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson, Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph}, title = {A dataset of 200 structured product labels annotated for adverse drug reactions}, journal = {Scientific Data}, volume = {5}, year = {2018}, month = {01}, pages = {180001}, url = { https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions }, doi = {10.1038/sdata.2018.1} }
null
2
4
--- language: - en bigbio_language: - English license: cc0-1.0 multilinguality: monolingual bigbio_license_shortname: CC0_1p0 pretty_name: SPL ADR homepage: https://bionlp.nlm.nih.gov/tac2017adversereactions/ bigbio_pubmed: False bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - NAMED_ENTITY_DISAMBIGUATION - RELATION_EXTRACTION --- # Dataset Card for SPL ADR ## Dataset Description - **Homepage:** https://bionlp.nlm.nih.gov/tac2017adversereactions/ - **Pubmed:** False - **Public:** True - **Tasks:** NER,NED,RE The United States Food and Drug Administration (FDA) partnered with the National Library of Medicine to create a pilot dataset containing standardised information about known adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs), the documents FDA uses to exchange information about drugs and other products, were manually annotated for adverse reactions at the mention level to facilitate development and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were then normalised to the Unified Medical Language System (UMLS) and to the Medical Dictionary for Regulatory Activities (MedDRA). ## Citation Information ``` @article{demner2018dataset, author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson, Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph}, title = {A dataset of 200 structured product labels annotated for adverse drug reactions}, journal = {Scientific Data}, volume = {5}, year = {2018}, month = {01}, pages = {180001}, url = { https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions }, doi = {10.1038/sdata.2018.1} } ```
Murple/mmcrsc
2022-11-14T02:37:54.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zh", "license:cc-by-nc-nd-4.0", "region:us" ]
Murple
The corpus, by Magic Data Technology Co., Ltd., contains 755 hours of scripted read speech data from 1,080 native speakers of Mandarin Chinese as spoken in mainland China. The sentence transcription accuracy is higher than 98%.
@misc{magicdata_2019, title={MAGICDATA Mandarin Chinese Read Speech Corpus}, url={https://openslr.org/68/}, publisher={Magic Data Technology Co., Ltd.}, year={2019}, month={May}}
null
2
4
--- annotations_creators: - expert-generated language: - zh language_creators: - crowdsourced license: - cc-by-nc-nd-4.0 multilinguality: - monolingual pretty_name: MAGICDATA_Mandarin_Chinese_Read_Speech_Corpus size_categories: - 10K<n<100K source_datasets: - original tags: [] task_categories: - automatic-speech-recognition task_ids: [] --- # Dataset Card for MMCRSC ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [MAGICDATA Mandarin Chinese Read Speech Corpus](https://openslr.org/68/) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary MAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and freely published for non-commercial use. The contents and the corresponding descriptions of the corpus include: The corpus contains 755 hours of speech data, which is mostly mobile recorded data. 
1,080 speakers from different accent areas in China were invited to participate in the recording. The sentence transcription accuracy is higher than 98%. Recordings were conducted in a quiet indoor environment. The database is divided into training, validation, and testing sets in a ratio of 51:1:2. Detailed information such as speech data coding and speaker information is preserved in the metadata file. The domain of the recording texts is diversified, including interactive Q&A, music search, SNS messages, home command and control, etc. Segmented transcripts are also provided. The corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields; it is therefore totally free for academic use. The corpus is a subset of a much bigger dataset (the 10,566.9-hour Chinese Mandarin Speech Corpus) which was recorded in the same environment. Please feel free to contact us via business@magicdatatech.com for more details. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages zh-CN ## Dataset Structure ### Data Instances ```json { 'file': '14_3466_20170826171404.wav', 'audio': { 'path': '14_3466_20170826171404.wav', 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 16000 }, 'text': '请搜索我附近的超市', 'speaker_id': 143466, 'id': '14_3466_20170826171404.wav' } ``` ### Data Fields - file: A path to the downloaded audio file in .wav format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time.
Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - text: the transcription of the audio file. - id: unique id of the data sample. - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information Please cite the corpus as "Magic Data Technology Co., Ltd., "http://www.imagicdatatech.com/index.php/home/dataopensource/data_info/id/101", 05/2019".
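The indexing advice above (`dataset[0]["audio"]` over `dataset["audio"][0]`) follows from lazy decoding: a row access decodes one file, while a column access decodes all of them. A toy model of that behaviour (an illustration only, not the real `datasets` internals):

```python
class ToyAudioDataset:
    """Minimal stand-in for a datasets split with a lazily decoded audio column."""

    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # counts how many files were "decoded"

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def __getitem__(self, key):
        if isinstance(key, int):   # row access: decode a single file
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":         # column access: decode every file
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)

ds = ToyAudioDataset([f"{i}.wav" for i in range(100)])
ds[0]["audio"]       # decodes 1 file
assert ds.decodes == 1
ds["audio"][0]       # decodes all 100 files just to read one
assert ds.decodes == 101
```

The same asymmetry is why querying the sample index first is so much cheaper on a 755-hour corpus.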
SerhiiBond/automotive_churn_prediction
2022-11-15T20:06:04.000Z
[ "region:us" ]
SerhiiBond
null
null
null
0
4
Entry not found
sagnikrayc/snli-cf-kaushik
2022-11-21T22:34:23.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|snli", "language:en", ...
sagnikrayc
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE). In the ICLR 2020 paper [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr), Kaushik et al. provided a dataset with counterfactual perturbations on the SNLI and IMDB data. This repository contains the original and counterfactual perturbations for the SNLI data, which were generated after processing the original data from [here](https://github.com/acmi-lab/counterfactually-augmented-data).
@inproceedings{DBLP:conf/iclr/KaushikHL20, author = {Divyansh Kaushik and Eduard H. Hovy and Zachary Chase Lipton}, title = {Learning The Difference That Makes {A} Difference With Counterfactually-Augmented Data}, booktitle = {8th International Conference on Learning Representations, {ICLR} 2020, Addis Ababa, Ethiopia, April 26-30, 2020}, publisher = {OpenReview.net}, year = {2020}, url = {https://openreview.net/forum?id=Sklgs0NFvr}, timestamp = {Thu, 07 May 2020 17:11:48 +0200}, biburl = {https://dblp.org/rec/conf/iclr/KaushikHL20.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
0
4
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|snli task_categories: - text-classification task_ids: - natural-language-inference - multi-input-text-classification pretty_name: Counterfactual Instances for Stanford Natural Language Inference dataset_info: features: - name: premise dtype: string - name: hypothesis dtype: string - name: label dtype: string splits: - name: train num_bytes: 1771712 num_examples: 8300 - name: validation num_bytes: 217479 num_examples: 1000 - name: test num_bytes: 437468 num_examples: 2000 --- # Dataset Card for Counterfactually Augmented SNLI ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description - **Repository:** [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://github.com/acmi-lab/counterfactually-augmented-data) - **Paper:** [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr) - **Point of Contact:** [Sagnik Ray Choudhury](mailto:sagnikrayc@gmail.com) ### Dataset Summary The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE). In the ICLR 2020 paper [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr), Kaushik et al.
provided a dataset with counterfactual perturbations on the SNLI and IMDB data. This repository contains the original and counterfactual perturbations for the SNLI data, which were generated after processing the original data from [here](https://github.com/acmi-lab/counterfactually-augmented-data). ### Languages The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en. ## Dataset Structure ### Data Instances For each instance, there is: - a string for the premise, - a string for the hypothesis, - a label: (entailment, contradiction, neutral) - a type: this tells whether the data point is the original SNLI data point or a counterfactual perturbation. - an idx. The ids correspond to the original id in the SNLI data. For example, if the original SNLI instance was `4626192243.jpg#3r1e`, there will be 5 data points as follows: ```json lines { "idx": "4626192243.jpg#3r1e-orig", "premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.", "hypothesis": "A man is prone on the street while another man stands next to him.", "label": "entailment", "type": "original" } { "idx": "4626192243.jpg#3r1e-cf-0", "premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.", "hypothesis": "A man is talking to his wife on the cellphone.", "label": "neutral", "type": "cf" } { "idx": "4626192243.jpg#3r1e-cf-1", "premise": "A man with a beard is talking on the cellphone and standing next to someone who is on the street.", "hypothesis": "A man is prone on the street while another man stands next to him.", "label": "neutral", "type": "cf" } { "idx": "4626192243.jpg#3r1e-cf-2", "premise": "A man with a beard is talking on the cellphone and standing next to someone who is sitting on the street.", "hypothesis": "A man is prone on the street while another man
stands next to him.", "label": "contradiction", "type": "cf" } { "idx": "4626192243.jpg#3r1e-cf-3", "premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.", "hypothesis": "A man is alone on the street.", "label": "contradiction", "type": "cf" } ``` ### Data Splits Following SNLI, this dataset also has 3 splits: _train_, _validation_, and _test_. The original paper says this: ``` RP and RH, each comprised of 3332 pairs in train, 400 in validation, and 800 in test, leading to a total of 6664 pairs in train, 800 in validation, and 1600 in test in the revised dataset. ``` This means for _train_, there are 1666 original SNLI instances, and each has 4 counterfactual perturbations (from premise and hypothesis edits), leading to a total of 1666*5 = 8330 _train_ data points in this dataset. Similarly, _validation_ and _test_ have 200 and 400 original SNLI instances respectively, and consequently 1000 and 2000 instances in total. | Dataset Split | Number of Instances in Split | |---------------|------------------------------| | Train | 8,330 | | Validation | 1,000 | | Test | 2,000 |
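Given the `idx` convention above (`-orig` for the original instance, `-cf-N` for its perturbations), each original instance can be reassembled with its counterfactuals; a sketch (the field subset shown is illustrative):

```python
from collections import defaultdict

def group_by_original(instances):
    """Group each original SNLI instance with its counterfactual perturbations."""
    groups = defaultdict(lambda: {"original": None, "counterfactuals": []})
    for inst in instances:
        base, _, suffix = inst["idx"].rpartition("-")
        if suffix == "orig":
            groups[base]["original"] = inst
        else:  # suffix is the cf number; strip the trailing "-cf" from the base
            groups[base[: -len("-cf")]]["counterfactuals"].append(inst)
    return dict(groups)

instances = [
    {"idx": "4626192243.jpg#3r1e-orig", "label": "entailment"},
    {"idx": "4626192243.jpg#3r1e-cf-0", "label": "neutral"},
    {"idx": "4626192243.jpg#3r1e-cf-3", "label": "contradiction"},
]
groups = group_by_original(instances)
# groups["4626192243.jpg#3r1e"] holds the original plus two counterfactuals
```

Grouping this way keeps an original and its perturbations together, which matters when comparing model behaviour on minimally different pairs.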
declare-lab/HyperRED
2022-11-23T10:55:14.000Z
[ "license:cc-by-sa-3.0", "arxiv:2211.10018", "region:us" ]
declare-lab
null
null
null
2
4
--- license: cc-by-sa-3.0 --- # Dataset Card for HyperRED ## Description - **Repository:** https://github.com/declare-lab/HyperRED - **Paper (EMNLP 2022):** https://arxiv.org/abs/2211.10018 ### Summary HyperRED is a dataset for the new task of hyper-relational extraction, which extracts relation triplets together with qualifier information such as time, quantity or location. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967). HyperRED contains 44k sentences with 62 relation types and 44 qualifier types. ### Languages English. ## Dataset Structure ### Data Fields - **tokens:** Sentence text tokens. - **entities:** List of each entity span. The span indices correspond to each token in the space-separated text (inclusive-start and exclusive-end index) - **relations:** List of each relationship label between the head and tail entity spans. Each relation contains a list of qualifiers where each qualifier has the value entity span and qualifier label. ### Data Instances An example instance of the dataset is shown below: ``` { "tokens": ['Acadia', 'University', 'is', 'a', 'predominantly', 'undergraduate', 'university', 'located', 'in', 'Wolfville', ',', 'Nova', 'Scotia', ',', 'Canada', 'with', 'some', 'graduate', 'programs', 'at', 'the', 'master', "'", 's', 'level', 'and', 'one', 'at', 'the', 'doctoral', 'level', '.'], "entities": [ {'span': (0, 2), 'label': 'Entity'}, {'span': (9, 13), 'label': 'Entity'}, {'span': (14, 15), 'label': 'Entity'}, ], "relations": [ { "head": [0, 2], "tail": [9, 13], "label": "headquarters location", "qualifiers": [ {"span": [14, 15], "label": "country"} ] } ], } ``` ### Data Splits The dataset contains 39,840 instances for training, 1,000 instances for validation and 4,000 instances for testing. 
### Dataset Creation The dataset is constructed from distant supervision between Wikipedia and Wikidata, and the human annotation process is detailed in the paper. ## Citation Information ``` @inproceedings{chia2022hyperred, title={A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach}, author={Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si and Soujanya Poria}, booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing}, year={2022} } ```
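The span convention above (inclusive start, exclusive end into the space-separated tokens) can be checked against the example instance; a small sketch:

```python
def span_text(tokens, span):
    """Render a span; spans are inclusive-start, exclusive-end token indices."""
    start, end = span
    return " ".join(tokens[start:end])

# Token prefix from the example instance in the card.
tokens = ['Acadia', 'University', 'is', 'a', 'predominantly', 'undergraduate',
          'university', 'located', 'in', 'Wolfville', ',', 'Nova', 'Scotia',
          ',', 'Canada']
relation = {"head": (0, 2), "tail": (9, 13), "label": "headquarters location",
            "qualifiers": [{"span": (14, 15), "label": "country"}]}

triplet = (span_text(tokens, relation["head"]), relation["label"],
           span_text(tokens, relation["tail"]))
# triplet == ('Acadia University', 'headquarters location', 'Wolfville , Nova Scotia')
qualifier = (relation["qualifiers"][0]["label"],
             span_text(tokens, relation["qualifiers"][0]["span"]))
# qualifier == ('country', 'Canada')
```

The triplet plus its qualifier list is exactly the hyper-relational fact the task asks models to extract.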
DTU54DL/demo-common-whisper
2022-11-22T08:43:39.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
DTU54DL
null
null
null
0
4
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information 
Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
VishwanathanR/flowers-dataset
2022-11-23T14:08:09.000Z
[ "region:us" ]
VishwanathanR
null
null
null
0
4
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 347100141.78 num_examples: 8189 download_size: 346573740 dataset_size: 347100141.78 --- # Dataset Card for "flowers-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cjvt/slo_collocations
2022-11-27T11:09:16.000Z
[ "task_categories:other", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "language:sl", "license:cc-by-sa-4.0", "kolokacije", "gigafida"...
cjvt
The database of the Collocations Dictionary of Modern Slovene 1.0 contains collocations that were automatically extracted from the Gigafida 1.0 corpus and then postprocessed.
@inproceedings{kosem2018collocations, title={Collocations dictionary of modern Slovene}, author={Kosem, Iztok and Krek, Simon and Gantar, Polona and Arhar Holdt, {\v{S}}pela and {\v{C}}ibej, Jaka and Laskowski, Cyprian}, booktitle={Proceedings of the XVIII EURALEX International Congress: Lexicography in Global Contexts}, pages={989--997}, year={2018}, organization={Znanstvena zalo{\v{z}}ba Filozofske fakultete Univerze v Ljubljani} }
null
0
4
--- annotations_creators: - expert-generated - machine-generated language: - sl language_creators: - found - machine-generated license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: Collocations Dictionary of Modern Slovene 1.0 size_categories: - 1M<n<10M source_datasets: [] tags: - kolokacije - gigafida task_categories: - other task_ids: [] --- # Dataset Card for Collocations Dictionary of Modern Slovene KSSS 1.0 Also known as "Kolokacije 1.0". Available in application form online: https://viri.cjvt.si/kolokacije/eng/. ### Dataset Summary The database of the Collocations Dictionary of Modern Slovene 1.0 contains entries for 35,862 headwords (18,043 nouns, 5,148 verbs, 10,259 adjectives and 2,412 adverbs) and 7,310,983 collocations that were automatically extracted from the Gigafida 1.0 corpus. For the automatic extraction via the Sketch Engine API the authors used a specially adapted Sketch grammar for Slovene, and, based on manual evaluation, a set of parameters that determined: maximum number of collocates per grammatical relation, minimum frequency of a collocate, minimum frequency of a grammatical relation, minimum salience (logDice) score of a collocate, and minimum salience of a grammatical relation. The procedure of automatic extraction, which produced a list of collocates (lemmas) in a particular relation, was followed by a set of post-processing steps: - removal of collocations that were represented by repetitions of the same sentence - preparation of full collocations by the addition of the headword, and, if needed, the third element in the grammatical relation (such as preposition). The headwords/collocates were also put in the correct case, depending on the grammatical relation. - addition of IDs from the Slovenian morphological lexicon [Sloleks](http://hdl.handle.net/11356/1230) to every element in the collocation. For a detailed description of the data, please see the paper Kosem et al. (2018). 
### Supported Tasks and Leaderboards Other (the data is a knowledge base). ### Languages Slovenian. ## Dataset Structure ### Data Instances The structure of the original data is flattened, meaning that each collocation is its own instance. The following example shows the entry for collocation `"idealizirati preteklost"` (*to idealize the past*), which is a collocation of the lexical unit `"idealizirati"` (*to idealize*). ``` { 'collocation': 'idealizirati preteklost', 'cluster': 1, 'words': ['idealizirati', 'preteklost'], 'sloleks_ids': ['LE_08e2de61d9f23f949a21f37639afdff2', 'LE_92b3b802fe9baeff25bdd6deafde10ca'], 'gramrel': 'GBZ sbz4', 'sense': 0, 'id_lex_unit': '1372', 'lex_unit': 'idealizirati', 'lex_unit_category': 'verb' } ``` ### Data Fields - `collocation`: the string form of the collocation; - `cluster`: cluster of the collocation - sometimes, but not always, corresponds to the sense; - `words`: tokenized collocation; - `sloleks_ids`: [Sloleks](http://hdl.handle.net/11356/1230) IDs of collocation words; - `gramrel`: grammatical relation; - `sense`: sense of the collocation - currently constant (see `cluster` for a slightly better approximate division); - `id_lex_unit`: ID of the lexical unit that the collocation belongs to; - `lex_unit`: lexical unit; - `lex_unit_category`: category of the lexical unit. ## Additional Information ### Dataset Curators Iztok Kosem et al. (please see http://hdl.handle.net/11356/1250 for the full list).
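To make the field layout concrete, here is a minimal sketch (illustrative, not official code) of consuming one record, using the instance shown above; the `summarize` helper is a hypothetical name introduced for this example:

```python
# A record copied from the "Data Instances" example above; field names
# follow the "Data Fields" section of the card.
record = {
    "collocation": "idealizirati preteklost",
    "cluster": 1,
    "words": ["idealizirati", "preteklost"],
    "sloleks_ids": ["LE_08e2de61d9f23f949a21f37639afdff2",
                    "LE_92b3b802fe9baeff25bdd6deafde10ca"],
    "gramrel": "GBZ sbz4",
    "sense": 0,
    "id_lex_unit": "1372",
    "lex_unit": "idealizirati",
    "lex_unit_category": "verb",
}

def summarize(rec: dict) -> str:
    """Render one collocation entry as a single readable line."""
    return (f"{rec['collocation']} [{rec['gramrel']}] "
            f"<- {rec['lex_unit']} ({rec['lex_unit_category']})")

print(summarize(record))
# -> idealizirati preteklost [GBZ sbz4] <- idealizirati (verb)
```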
### Licensing Information CC BY-SA 4.0 ### Citation Information ``` @inproceedings{kosem2018collocations, title={Collocations dictionary of modern Slovene}, author={Kosem, Iztok and Krek, Simon and Gantar, Polona and Arhar Holdt, {\v{S}}pela and {\v{C}}ibej, Jaka and Laskowski, Cyprian}, booktitle={Proceedings of the XVIII EURALEX International Congress: Lexicography in Global Contexts}, pages={989--997}, year={2018}, organization={Znanstvena zalo{\v{z}}ba Filozofske fakultete Univerze v Ljubljani} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
DTU54DL/common-proc-whisper
2022-11-26T23:32:29.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
DTU54DL
null
null
null
0
4
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information 
Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
ashraf-ali/quran-data
2022-12-10T17:35:33.000Z
[ "task_categories:automatic-speech-recognition", "language_creators:Tarteel.io", "license:cc0-1.0", "region:us" ]
ashraf-ali
null
null
null
5
4
--- language_creators: - Tarteel.io license: - cc0-1.0 size_categories: ar: - 43652 task_categories: - automatic-speech-recognition task_ids: [] paperswithcode_id: quran-data pretty_name: Quran Audio language_bcp47: - ar --- # Dataset Card for Quran Audio ## Content * Full Quran recitations by 7 Imams: 7 × 6,236 = 43,652 wav files; a CSV provides the text info for an 11k subset of short wav files. * Tarteel.io user dataset: ~25k wav files; a CSV provides the text info for an 18k subset of accepted-quality user recordings.
DTU54DL/common-native-proc
2022-11-30T20:46:05.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
DTU54DL
null
null
null
0
4
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction dataset_info: features: - name: sentence dtype: string - name: accent dtype: string - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 9605830041 num_examples: 10000 - name: test num_bytes: 954798551 num_examples: 994 download_size: 2010871786 dataset_size: 10560628592 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/common-accent-augmented
2022-12-07T14:00:54.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
DTU54DL
null
null
null
0
4
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction dataset_info: features: - name: sentence dtype: string - name: accent dtype: string - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: test num_bytes: 433226048 num_examples: 451 - name: train num_bytes: 9606026408 num_examples: 10000 download_size: 2307300737 dataset_size: 10039252456 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
erkanxyzalaca/turkishKuran
2022-12-02T14:01:58.000Z
[ "region:us" ]
erkanxyzalaca
null
null
null
0
4
--- dataset_info: features: - name: Ayet dtype: string - name: review_length dtype: int64 splits: - name: train num_bytes: 255726.9 num_examples: 738 - name: validation num_bytes: 28414.1 num_examples: 82 download_size: 0 dataset_size: 284141.0 --- # Dataset Card for "turkishKuran" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DTU54DL/common-accent-augmented-proc
2022-12-03T12:56:02.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
DTU54DL
null
null
null
0
4
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction dataset_info: features: - name: sentence dtype: string - name: accent dtype: string - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: test num_bytes: 433226048 num_examples: 451 - name: train num_bytes: 9606026408 num_examples: 10000 download_size: 2307292790 dataset_size: 10039252456 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
nbtpj/multi-context-long-answer-dataset
2022-12-05T02:44:15.000Z
[ "region:us" ]
nbtpj
null
null
null
4
4
Entry not found
MFreidank/glenda
2022-12-29T12:19:47.000Z
[ "task_categories:image-classification", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-4.0", "region:us" ]
MFreidank
null
null
null
1
4
--- pretty_name: GLENDA - The ITEC Gynecologic Laparoscopy Endometriosis Dataset language: - en annotations_creators: - expert-generated language_creators: - machine-generated license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - image-classification task_ids: [] dataset_info: - config_name: binary_classification features: - name: image dtype: image - name: metadata struct: - name: id dtype: int32 - name: width dtype: int32 - name: height dtype: int32 - name: file_name dtype: string - name: path dtype: string - name: fickr_url dtype: string - name: coco_url dtype: string - name: date_captured dtype: string - name: case_id dtype: int32 - name: video_id dtype: int32 - name: frame_id dtype: int32 - name: from_seconds dtype: int32 - name: to_seconds dtype: int32 - name: labels dtype: class_label: names: '0': no_pathology '1': endometriosis splits: - name: train num_bytes: 4524957 num_examples: 13811 download_size: 895554144 dataset_size: 4524957 - config_name: multiclass_classification features: - name: image dtype: image - name: metadata struct: - name: id dtype: int32 - name: width dtype: int32 - name: height dtype: int32 - name: file_name dtype: string - name: path dtype: string - name: fickr_url dtype: string - name: coco_url dtype: string - name: date_captured dtype: string - name: case_id dtype: int32 - name: video_id dtype: int32 - name: frame_id dtype: int32 - name: from_seconds dtype: int32 - name: to_seconds dtype: int32 - name: labels dtype: class_label: names: '0': No-Pathology '1': 6.1.1.1_Endo-Peritoneum '2': 6.1.1.2_Endo-Ovar '3': 6.1.1.3_Endo-TIE '4': 6.1.1.4_Endo-Uterus splits: - name: train num_bytes: 4524957 num_examples: 13811 download_size: 895554144 dataset_size: 4524957 --- # Dataset Card for GLENDA - The ITEC Gynecologic Laparoscopy Endometriosis Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset 
Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://ftp.itec.aau.at/datasets/GLENDA/index.html - **Repository:** - **Paper:** [GLENDA: Gynecologic Laparoscopy Endometriosis Dataset](https://link.springer.com/chapter/10.1007/978-3-030-37734-2_36) - **Leaderboard:** - **Point of Contact:** freidankm@gmail.com ### Dataset Summary GLENDA (Gynecologic Laparoscopy ENdometriosis DAtaset) comprises over 350 annotated endometriosis lesion images taken from 100+ gynecologic laparoscopy surgeries as well as over 13K unannotated non pathological images of 20+ surgeries. The dataset is purposefully created to be utilized for a variety of automatic content analysis problems in the context of Endometriosis recognition. **Usage Information (Disclaimer)** The dataset is exclusively provided for scientific research purposes and as such cannot be used commercially or for any other purpose. If any other purpose is intended, you may directly contact the originator of the videos. 
For additional information (including contact details), please visit [the official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html). **Description** Endometriosis is a benign but potentially painful condition among women of child-bearing age involving the growth of uterine-like tissue in locations outside of the uterus. Corresponding lesions can be found in various positions and severities, often in multiple instances per patient, requiring a physician to determine their extent. This is most frequently accomplished by calculating their magnitude via the combination of two popular classification systems, the revised American Society for Reproductive Medicine (rASRM) and the European Enzian scores. Endometriosis cannot be reliably identified by laymen; therefore, the dataset has been created with the help of medical experts in the field of endometriosis treatment. **Purposes** * binary (endometriosis) classification * detection/localization **Overview** The dataset includes region-based annotations of 4 pathological endometriosis categories as well as non-pathological counter-example images. Annotations are created for single video frames that may be part of larger sequences comprising several consecutive frames (all showing the annotated condition). Frames can contain multiple annotations, potentially of different categories. Each single annotation is exported as a binary image (similar to the below examples, albeit one image per annotation). # TODO: FIXME: A bit more useful info on dataset case distribution, class distribution and link to original + preview link ### Supported Tasks and Leaderboards - `image_classification`: The dataset can be used for binary (no pathology / endometriosis) or multiclass image classification (No-Pathology, 6.1.1.1\_Endo-Peritoneum, 6.1.1.2\_Endo-Ovar, 6.1.1.3\_Endo-DIE, 6.1.1.4\_Endo-Uterus.
These classes respectively correspond to: no visible pathology in relation to endometriosis, peritoneal endometriosis, endometriosis on ovaries, deep infiltrating endometriosis (DIE) and uterine endometriosis.). ## Dataset Structure ### Data Instances #### binary\_classification TODO DESCRIBE #### multiclass\_classification TODO DESCRIBE ## Dataset Creation ### Curation Rationale From the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html) > The dataset is purposefully created to be utilized for a variety of automatic content analysis problems in the context of Endometriosis recognition ### Source Data #### Initial Data Collection and Normalization From the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html) > The dataset includes region-based annotations of 4 pathological endometriosis categories as well as non pathological counter example images. Annotations are created for single video frames that may be part of larger sequences comprising several consecutive frames (all showing the annotated condition). Frames can contain multiple annotations, potentially of different categories. Each single annotation is exported as a binary image (similar to below examples, albeit one image per annotation). ### Annotations #### Annotation process From the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html) > Corresponding lesions can be found in various positions and severities, often in multiple instances per patient requiring a physician to determine its extent. This most frequently is accomplished by calculating its magnitude via utilizing the combination of two popular classification systems, the revised American Society for Reproductive Medicine (rASRM) and the European Enzian scores. Endometriosis can not reliably identified by laymen, therefore, the dataset has been created with the help of medical experts in the field of endometriosis treatment. #### Who are the annotators? 
Medical experts in the field of endometriosis treatment. ### Personal and Sensitive Information [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is exclusively provided for scientific research purposes and as such cannot be used commercially or for any other purpose. If any other purpose is intended, you may directly contact the originator of the videos, Prof. Dr. Jörg Keckstein. GLENDA is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0, Creative Commons License) and is created as well as maintained by Distributed Multimedia Systems Group of the Institute of Information Technology (ITEC) at Alpen-Adria Universität in Klagenfurt, Austria. This license allows users of this dataset to copy, distribute and transmit the work under the following conditions: * Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. * Non-Commercial: You may not use the material for commercial purposes. For further legal details, please read the [complete license terms](https://creativecommons.org/licenses/by-nc/4.0/legalcode). For additional information, please visit the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html). ### Citation Information ``` @inproceedings{10.1007/978-3-030-37734-2_36, abstract = {Gynecologic laparoscopy as a type of minimally invasive surgery (MIS) is performed via a live feed of a patient's abdomen surveying the insertion and handling of various instruments for conducting treatment. 
Adopting this kind of surgical intervention not only facilitates a great variety of treatments, the possibility of recording said video streams is as well essential for numerous post-surgical activities, such as treatment planning, case documentation and education. Nonetheless, the process of manually analyzing surgical recordings, as it is carried out in current practice, usually proves tediously time-consuming. In order to improve upon this situation, more sophisticated computer vision as well as machine learning approaches are actively developed. Since most of such approaches heavily rely on sample data, which especially in the medical field is only sparsely available, with this work we publish the Gynecologic Laparoscopy ENdometriosis DAtaset (GLENDA) -- an image dataset containing region-based annotations of a common medical condition named endometriosis, i.e. the dislocation of uterine-like tissue. The dataset is the first of its kind and it has been created in collaboration with leading medical experts in the field.}, address = {Cham}, author = {Leibetseder, Andreas and Kletz, Sabrina and Schoeffmann, Klaus and Keckstein, Simon and Keckstein, J{\"o}rg}, booktitle = {MultiMedia Modeling}, editor = {Ro, Yong Man and Cheng, Wen-Huang and Kim, Junmo and Chu, Wei-Ta and Cui, Peng and Choi, Jung-Woo and Hu, Min-Chun and De Neve, Wesley}, isbn = {978-3-030-37734-2}, pages = {439--450}, publisher = {Springer International Publishing}, title = {GLENDA: Gynecologic Laparoscopy Endometriosis Dataset}, year = {2020} } ```
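To illustrate the two label configurations documented in the GLENDA card's `dataset_info` metadata, a hedged sketch (assumed, not official code) of decoding label ids back to class names; `decode_label` is a hypothetical helper introduced here:

```python
# Class-name lists copied from the binary_classification and
# multiclass_classification configs in the card's metadata.
BINARY_NAMES = ["no_pathology", "endometriosis"]
MULTICLASS_NAMES = [
    "No-Pathology",
    "6.1.1.1_Endo-Peritoneum",
    "6.1.1.2_Endo-Ovar",
    "6.1.1.3_Endo-TIE",
    "6.1.1.4_Endo-Uterus",
]

def decode_label(label_id: int, binary: bool = True) -> str:
    """Map an integer class label to its documented name."""
    names = BINARY_NAMES if binary else MULTICLASS_NAMES
    return names[label_id]

print(decode_label(1))                # -> endometriosis
print(decode_label(3, binary=False))  # -> 6.1.1.3_Endo-TIE
```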
gigant/tib_transcripts
2023-01-21T13:54:23.000Z
[ "region:us" ]
gigant
null
null
null
0
4
--- dataset_info: features: - name: doi dtype: string - name: transcript dtype: string - name: abstract dtype: string splits: - name: train num_bytes: 251058543 num_examples: 8481 download_size: 130991914 dataset_size: 251058543 --- # Dataset Card for "tib_transcripts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Shunian/kaggle-mbti-cleaned-augmented
2022-12-16T09:46:26.000Z
[ "region:us" ]
Shunian
null
null
null
0
4
--- dataset_info: features: - name: label dtype: int64 - name: text dtype: string splits: - name: train num_bytes: 74489242 num_examples: 478389 - name: test num_bytes: 12922409 num_examples: 81957 download_size: 56815784 dataset_size: 87411651 --- # Dataset Card for "kaggle-mbti-cleaned-augmented" This dataset is built upon [Shunian/kaggle-mbti-cleaned](https://huggingface.co/datasets/Shunian/kaggle-mbti-cleaned) to address the sample imbalance problem. Thanks to the [Parrot Paraphraser](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) and [NLP AUG](https://github.com/makcedward/nlpaug), some of the skewness issues in the training data are addressed, growing it from 328,660 samples to 478,389 samples in total. See [GitHub](https://github.com/nogibjj/MBTI-Personality-Test) for more information.
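The balancing idea described above — augmenting minority classes until they approach the majority class — can be sketched in a few lines. This is an illustrative sketch, not the authors' actual pipeline, and `augmentation_targets` is a hypothetical helper:

```python
from collections import Counter

def augmentation_targets(labels):
    """For each class, return how many augmented (e.g. paraphrased)
    samples would be needed to match the majority-class count."""
    counts = Counter(labels)
    target = max(counts.values())
    return {label: target - n for label, n in counts.items()}

# Toy example with MBTI-style labels:
print(augmentation_targets(["INTJ", "INTJ", "INTJ", "ENFP"]))
# -> {'INTJ': 0, 'ENFP': 2}
```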
fewshot-goes-multilingual/cs_csfd-movie-reviews
2022-12-18T21:30:56.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:cs", "license:cc-by-sa-4.0", "movie reviews", "rat...
fewshot-goes-multilingual
null
null
null
0
4
--- annotations_creators: - crowdsourced language: - cs language_creators: - crowdsourced license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: CSFD movie reviews (Czech) size_categories: - 10K<n<100K source_datasets: - original tags: - movie reviews - rating prediction task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for CSFD movie reviews (Czech) ## Dataset Description The dataset contains user reviews from the Czech/Slovak movie database website <https://csfd.cz>. Each review contains text, rating, date, and basic information about the movie (or TV series). The dataset has in total (train+validation+test) 30,000 reviews. The data is balanced - each rating has approximately the same frequency. ## Dataset Features Each sample contains: - `review_id`: unique string identifier of the review. - `rating_str`: string representation of the rating (from "0/5" to "5/5") - `rating_int`: integer representation of the rating (from 0 to 5) - `date`: date of publishing the review (just date, no time nor timezone) - `comment_language`: language of the review (always "cs") - `comment`: the string of the review - `item_title`: title of the reviewed item - `item_year`: publishing year of the item (string, can also be a range) - `item_kind`: kind of the item - either "film" or "seriál" - `item_genres`: list of genres of the item - `item_directors`: list of director names of the item - `item_screenwriters`: list of screenwriter names of the item - `item_cast`: list of actors and actresses in the item ## Dataset Source The data was mined and sampled from the <https://csfd.cz> website. Make sure to comply with the terms and conditions of the website operator when using the data.
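The relationship between `rating_str` and `rating_int` described above can be sketched as follows (an illustrative sketch under the card's documented 0-5 scale; `rating_to_int` is a hypothetical helper, not part of the dataset):

```python
def rating_to_int(rating_str: str) -> int:
    """Parse a rating like '4/5' into its integer numerator (0..5)."""
    numerator, denominator = rating_str.split("/")
    if denominator != "5":
        raise ValueError(f"unexpected denominator in: {rating_str}")
    value = int(numerator)
    if not 0 <= value <= 5:
        raise ValueError(f"rating out of range: {rating_str}")
    return value

print(rating_to_int("4/5"))  # -> 4
```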
razhan/imdb_ckb
2023-01-13T17:41:39.000Z
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:ckb", "langu...
razhan
null
null
null
1
4
---
annotations_creators:
- expert-generated
language:
- ckb
- ku
language_creators:
- crowdsourced
license:
- other
multilinguality:
- monolingual
pretty_name: IMDB_CKB
size_categories:
- 10K<n<100K
source_datasets:
- extended|imdb
tags:
- central kurdish
- kurdish
- sorani
- kurdi
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-classification
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': neg
          '1': pos
  config_name: plain_text
  splits:
  - name: train
    num_examples: 24903
  - name: test
    num_examples: 24692
---

# Dataset Card for IMDB Kurdish

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [https://github.com/Hrazhan/IMDB_Kurdish/](https://github.com/Hrazhan/IMDB_Kurdish/)
- **Point of Contact:** [Razhan Hameed](https://twitter.com/RazhanHameed)
- **Paper:**
- **Leaderboard:**

### Dataset Summary

Central Kurdish translation of the famous IMDB movie reviews dataset. The dataset contains 50K highly polar movie reviews, divided into two equal classes of positive and negative reviews, and supports binary sentiment classification.

The availability of datasets in Kurdish, such as the IMDB movie reviews dataset, can help researchers and developers train and evaluate machine learning models for Kurdish language processing. However, a machine learning model can only be as accurate as the data it is trained on (in this case, the quality of the translation), so the quality and relevance of the dataset will affect the performance of the resulting model.

For more information about the dataset, please go through the following link: http://ai.stanford.edu/~amaas/data/sentiment/

P.S. This dataset was translated with Google Translate.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Central Kurdish

## Dataset Structure

### Data Instances

An example of 'train' looks as follows.

```
{
  "label": 0,
  "text": "فیلمێکی زۆر باش، کە سەرنج دەخاتە سەر پرسێکی زۆر گرنگ. نەخۆشی کحولی کۆرپەلە کەموکوڕییەکی زۆر جددی لە لەدایکبوونە کە بە تەواوی دەتوانرێت ڕێگری لێبکرێت. ئەگەر خێزانە زیاترەکان ئەم فیلمە ببینن، ڕەنگە منداڵی زیاتر وەک ئادەم کۆتاییان نەهاتبێت. جیمی سمیس لە یەکێک لە باشترین ڕۆڵەکانیدا نمایش دەکات تا ئێستا. ئەمە فیلمێکی نایاب و باشە کە خێزانێکی زۆر تایبەت لەبەرچاو دەگرێت و پێویستییەکی زۆر گرنگی هەیە. ئەمەش جیاواز نییە لە هەزاران خێزان کە ئەمڕۆ لە ئەمریکا هەن. منداڵان هەن کە لەگەڵ ئەم جیهانەدا خەبات دەکەن. بەڕاستی خاڵە گرنگەکە لێرەدا ئەوەیە کە دەکرا ڕێگری لە هەموو شتێک بکرێت. خەڵکی زیاتر دەبێ ئەم فیلمە ببینن و ئەوەی کە هەیەتی بە جددی وەریبگرێت. بە باشی ئەنجام دراوە، بە پەیامی گرنگ، بە شێوەیەکی بەڕێزانە مامەڵەی لەگەڵ دەکرێت."
}
```

### Data Fields

plain_text

- `text`: a string feature.
- `label`: a classification label, with possible values including neg (0), pos (1).

### Data Splits

| name       | train | test  |
|------------|------:|------:|
| plain_text | 24903 | 24692 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
  author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
  title     = {Learning Word Vectors for Sentiment Analysis},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
  month     = {June},
  year      = {2011},
  address   = {Portland, Oregon, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {142--150},
  url       = {http://www.aclweb.org/anthology/P11-1015}
}
```

### Contributions

Thanks to [Razhan Hameed](https://twitter.com/RazhanHameed) for adding this dataset.
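The `label` field above is an integer class id as declared in the card's `dataset_info` (`'0': neg`, `'1': pos`). A minimal sketch of decoding it back to a class name; this is plain Python for illustration and deliberately avoids depending on the `datasets` library:

```python
# Class names in id order, mirroring the card's class_label declaration.
LABEL_NAMES = ["neg", "pos"]

def decode_label(label_id: int) -> str:
    """Map an integer label id back to its sentiment class name."""
    return LABEL_NAMES[label_id]

# The example instance shown above has "label": 0, i.e. a negative review.
assert decode_label(0) == "neg"
```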
laion/laion5b-h14-index
2023-01-20T00:19:29.000Z
[ "region:us" ]
laion
null
null
null
6
4
Entry not found
aashay96/indic-gpt
2023-04-21T20:45:09.000Z
[ "region:us" ]
aashay96
null
null
null
1
4
Sampled Data from AIforBharat corpora
keremberke/football-object-detection
2023-01-04T20:39:21.000Z
[ "task_categories:object-detection", "roboflow", "region:us" ]
keremberke
null
@misc{ football-player-detection-kucab_dataset, title = { Football-Player-Detection Dataset }, type = { Open Source Dataset }, author = { Augmented Startups }, howpublished = { \\url{ https://universe.roboflow.com/augmented-startups/football-player-detection-kucab } }, url = { https://universe.roboflow.com/augmented-startups/football-player-detection-kucab }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2022-12-29 }, }
null
5
4
---
task_categories:
- object-detection
tags:
- roboflow
---

### Roboflow Dataset Page

[https://universe.roboflow.com/augmented-startups/football-player-detection-kucab](https://universe.roboflow.com/augmented-startups/football-player-detection-kucab?ref=roboflow2huggingface)

### Citation

```
@misc{ football-player-detection-kucab_dataset,
    title = { Football-Player-Detection Dataset },
    type = { Open Source Dataset },
    author = { Augmented Startups },
    howpublished = { \url{ https://universe.roboflow.com/augmented-startups/football-player-detection-kucab } },
    url = { https://universe.roboflow.com/augmented-startups/football-player-detection-kucab },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2022 },
    month = { nov },
    note = { visited on 2022-12-29 },
}
```

### License

CC BY 4.0

### Dataset Summary

This dataset was exported via roboflow.com on November 21, 2022 at 6:50 PM GMT

Roboflow is an end-to-end computer vision platform that helps you

* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

It includes 1232 images. Track-players-and-football are annotated in COCO format.

The following pre-processing was applied to each image:

* Auto-orientation of pixel data (with EXIF-orientation stripping)

No image augmentation techniques were applied.
beardaintweird/quran-embeddings
2023-01-02T16:34:41.000Z
[ "region:us" ]
beardaintweird
null
null
null
0
4
Entry not found
keremberke/construction-safety-object-detection
2023-01-27T13:36:19.000Z
[ "task_categories:object-detection", "roboflow", "roboflow2huggingface", "Construction", "Logistics", "Utilities", "Damage Risk", "Ppe", "Manufacturing", "Assembly Line", "Warehouse", "Factory", "region:us" ]
keremberke
null
@misc{ construction-site-safety_dataset, title = { Construction Site Safety Dataset }, type = { Open Source Dataset }, author = { Roboflow Universe Projects }, howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/construction-site-safety } }, url = { https://universe.roboflow.com/roboflow-universe-projects/construction-site-safety }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2023 }, month = { jan }, note = { visited on 2023-01-26 }, }
null
4
4
---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Construction
- Logistics
- Utilities
- Damage Risk
- Ppe
- Construction
- Utilities
- Manufacturing
- Logistics
- Ppe
- Assembly Line
- Warehouse
- Factory
---

<div align="center">
  <img width="640" alt="keremberke/construction-safety-object-detection" src="https://huggingface.co/datasets/keremberke/construction-safety-object-detection/resolve/main/thumbnail.jpg">
</div>

### Dataset Labels

```
['barricade', 'dumpster', 'excavators', 'gloves', 'hardhat', 'mask', 'no-hardhat', 'no-mask', 'no-safety vest', 'person', 'safety net', 'safety shoes', 'safety vest', 'dump truck', 'mini-van', 'truck', 'wheel loader']
```

### Number of Images

```json
{"train": 307, "valid": 57, "test": 34}
```

### How to Use

- Install [datasets](https://pypi.org/project/datasets/):

```bash
pip install datasets
```

- Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/construction-safety-object-detection", name="full")
example = ds['train'][0]
```

### Roboflow Dataset Page

[https://universe.roboflow.com/roboflow-universe-projects/construction-site-safety/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/construction-site-safety/dataset/1?ref=roboflow2huggingface)

### Citation

```
@misc{ construction-site-safety_dataset,
    title = { Construction Site Safety Dataset },
    type = { Open Source Dataset },
    author = { Roboflow Universe Projects },
    howpublished = { \url{ https://universe.roboflow.com/roboflow-universe-projects/construction-site-safety } },
    url = { https://universe.roboflow.com/roboflow-universe-projects/construction-site-safety },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2023 },
    month = { jan },
    note = { visited on 2023-01-26 },
}
```

### License

CC BY 4.0

### Dataset Summary

This dataset was exported via roboflow.com on December 29, 2022 at 11:22 AM GMT

Roboflow is an end-to-end computer vision platform that helps you

* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

It includes 398 images. Construction are annotated in COCO format.

The following pre-processing was applied to each image:

* Auto-orientation of pixel data (with EXIF-orientation stripping)

No image augmentation techniques were applied.
W4nkel/turkish-sentiment-dataset
2023-01-01T18:07:08.000Z
[ "license:cc-by-sa-4.0", "region:us" ]
W4nkel
null
null
null
0
4
---
license: cc-by-sa-4.0
---

This dataset is based on this source: [winvoker/turkish-sentiment-analysis-dataset](https://huggingface.co/datasets/winvoker/turkish-sentiment-analysis-dataset)