author
stringlengths
2
29
cardData
null
citation
stringlengths
0
9.58k
description
stringlengths
0
5.93k
disabled
bool
1 class
downloads
float64
1
1M
gated
bool
2 classes
id
stringlengths
2
108
lastModified
stringlengths
24
24
paperswithcode_id
stringlengths
2
45
private
bool
2 classes
sha
stringlengths
40
40
siblings
list
tags
list
readme_url
stringlengths
57
163
readme
stringlengths
0
977k
florentgbelidji
null
null
null
false
1
false
florentgbelidji/autotrain-data-qa-team-car-review-project
2022-10-25T10:29:30.000Z
null
false
7d0c06fa172853f1eb41358c1c9ec081c478d24a
[]
[ "language:en", "task_categories:text-classification" ]
https://huggingface.co/datasets/florentgbelidji/autotrain-data-qa-team-car-review-project/resolve/main/README.md
--- language: - en task_categories: - text-classification --- # AutoTrain Dataset for project: qa-team-car-review-project ## Dataset Description This dataset has been automatically processed by AutoTrain for project qa-team-car-review-project. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": " ", "target": 1 }, { "text": " Mazda truck costs less than the sister look-a-like Ford; Mazda is a \"A\" series of the Ford Ranger, [...]", "target": 2 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=3, names=['great', 'ok', 'poor'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 19731 | | valid | 4935 |
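The card above stores `target` as an integer `ClassLabel` over `['great', 'ok', 'poor']`. A minimal pure-Python sketch of decoding those ids; the `sample` dict is hand-made from the example above, not pulled from the dataset:

```python
# Class names from the card above, in id order (0='great', 1='ok', 2='poor').
names = ["great", "ok", "poor"]

# A hand-made sample shaped like the card's Data Instances example.
sample = {"text": "Mazda truck costs less than the sister look-a-like Ford ...", "target": 2}

# Decode the stored integer back to its label name.
label = names[sample["target"]]
print(label)  # -> poor
```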
otheng03
null
null
null
false
1
false
otheng03/test1
2022-06-09T12:20:57.000Z
null
false
3a2f92dc83d67d89f1eb1885d1c75961b32722ec
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/otheng03/test1/resolve/main/README.md
--- license: apache-2.0 --- # Title 1 hahahoho
qualitydatalab
null
null
null
false
2
false
qualitydatalab/autotrain-data-car-review-project
2022-10-25T10:29:37.000Z
null
false
9699ef019676b4ae1504e9c156bdb4cfda059bb5
[]
[ "language:en", "task_categories:text-classification" ]
https://huggingface.co/datasets/qualitydatalab/autotrain-data-car-review-project/resolve/main/README.md
--- language: - en task_categories: - text-classification --- # AutoTrain Dataset for project: car-review-project ## Dataset Description This dataset has been automatically processed by AutoTrain for project car-review-project. It contains consumer car ratings and reviews from the [Edmunds website](https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews). ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": " ", "target": 1 }, { "text": " Mazda truck costs less than the sister look-a-like Ford; Mazda is a \"A\" series of the Ford Ranger, [...]", "target": 2 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=3, names=['great', 'ok', 'poor'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 19731 | | valid | 4935 |
alexfabbri
null
null
null
false
70
false
alexfabbri/answersumm
2022-07-01T15:43:54.000Z
null
false
ccaa9971a01947e8e4771b274d97c693e4e5ad52
[]
[ "arxiv:2111.06474", "annotations_creators:found", "language_creators:found", "language:en", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:summarization", "task_ids:summarization", "task_ids:query-based-summarizatio...
https://huggingface.co/datasets/alexfabbri/answersumm/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - summarization task_ids: - summarization - query-based-summarization --- # Dataset Card for answersumm ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/Alex-Fabbri/AnswerSumm - **Paper:** [AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization](https://arxiv.org/abs/2111.06474) - **Point of Contact:** [Alex Fabbri](mailto:afabbri@salesforce.com) ### Dataset Summary The AnswerSumm dataset is an English-language dataset of questions and answers collected from a [StackExchange data dump](https://archive.org/details/stackexchange). The dataset was created to support the task of query-focused answer summarization with an emphasis on multi-perspective answers. 
The dataset consists of over 4200 such question-answer threads annotated by professional linguists and includes over 8700 summaries. We decompose the task into several annotation stages, including sentence selection, sentence clustering, cluster summarization, and overall summarization. For each thread, the annotator writes two summaries; for the first, the annotator is asked to mark sentences to include in the final summary and instructed to stay close to the wording of those sentences rather than abstracting. We have multiple annotators for a subset of the examples in the test set. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances A data point comprises a question with a `title` field containing the overview of the question and a `question` that elaborates on the title. The answers are sentence tokenized and contain relevance labels, labels for inclusion in the final summary, and cluster labels. We include cluster summaries, overall summaries, and additional metadata. An example from the AnswerSumm test set looks as follows: ```json { "example_id": "9_24", "annotator_id": [1], "question": { "author": "gaming.stackexchange.com/users/11/Jeffrey", "forum": "gaming.stackexchange.com", "link": "gaming.stackexchange.com/questions/1", "question": "Now that the Engineer update has come, there will be lots of Engineers building up everywhere. How should this best be handled?", "question_tags": "<team-fortress-2>", "title": "What is a good strategy to deal with lots of engineers turtling on the other team?" }, "answers": [ { "answer_details": { "author": "gaming.stackexchange.com/users/44/Corv1nus", "score": 49 }, "sents": [ { "text": "Lots of medics with lots of ubers on high-damage-dealing classes.", "label": [0], "label_summ": [0], "cluster_id": [[-1]] }, ... ] }, ... ], "summaries": [ [ "Demomen usually work best against a sentry farm. Heavies or pyros can also be effective. Medics should be in the frontline to absorb the shock. 
Build a teleporter to help your team through.", "Demomen are best against a sentry farm. Heavies or pyros can also be effective. The medic should lead the uber combo. ..." ] ], "cluster_summaries": [ "Demomen are best against a sentry farm.", "Heavies or pyros can also be effective.", ... ] } ``` ### Data Fields - question: contains metadata about the question and forum - question: the body of the question post - title: the title of the question post - question_tags: user-provided question tags - link: link to the original question - author: link to the author's user page (as requested by StackExchange's attribution policy) - answers: list of sentence-tokenized answers - answer_details: dictionary consisting of link to answer author's user page (author) and community-assigned score (score) - sents: sentences that compose the answer - text: the sentence text - label: a list (to generalize to multi-annotator scenarios) of whether the sentence is labeled as relevant or not for answering the question. - label_summ: a list of whether the sentence was used to write the first annotator-created summary (that is the first summary in `summaries`) - cluster_id: a list of lists (potentially multiple annotators and a sentence can be in potentially multiple clusters) of the clusters a sentence belongs to. -1 implies no cluster. This label can be used to aggregate sentences into clusters across answers. - summaries: list of list of summaries. Each annotator wrote two summaries. The first in the list is the summary for which the annotator was instructed to mark sentences relevant for inclusion in the summary and then closely use the words of those sentences, while for the second summary the annotator was asked to paraphrase and condense the cluster summaries but was not asked to reduce abstraction. - annotator_id: a list of the ids of the annotator(s) who completed all tasks related to that thread. 
- mismatch_info: a dict of any issues in processing the Excel files on which annotations were completed. - rel_sent_not_in_cluster: list of booleans indicating whether there are sentences that are labeled as relevant but were not included in a cluster. - cluster_sents_not_matched: list of sentences that were found in a cluster but which our processing script didn't automatically match to sentences in the source answers. If cluster summarization is of interest, you may want to process these examples separately using `clusters_orig`. ### Data Splits The data is split into training, validation, and test sets using stratified sampling on the source forums. There are 2783, 500, and 1000 train/validation/test threads, respectively. ## Dataset Creation ### Curation Rationale AnswerSumm was built to provide a testbed for query-focused summarization of multi-perspective answers. The data collection was designed to tackle multiple subtasks including sentence selection, clustering, cluster summarization, and overall summarization. ### Source Data #### Initial Data Collection and Normalization The data was obtained by filtering examples based on a whitelist of forums from StackExchange which we believed could be summarized by a lay person. We asked annotators to remove examples which required technical knowledge or additional context beyond what was present in the answers. #### Who are the source language producers? The language producers are the users of the StackExchange forums sampled. ### Annotations #### Annotation process Please see our [paper](https://arxiv.org/pdf/2111.06474.pdf) for additional annotation details. We began with a pre-pilot of 50 examples, followed by a pilot of 500 and a final annotation of 5000 examples. This release contains the results of the final data collection. We will release the instructions used in data collection. #### Who are the annotators? 
The annotators are professional linguists who were obtained through an internal contractor. ### Personal and Sensitive Information We did not anonymize the data. We followed the specifications from StackExchange [here](https://archive.org/details/stackexchange) to include author information. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop systems that automatically summarize multi-perspective answers. A system that succeeds at this task would be able to summarize many perspectives present in an answer and not limit itself to a single perspective. ### Discussion of Biases While StackExchange allows for the exchange of information and ideas, hate and harassment may exist on this site. While our annotators did not flag examples in this process, we encourage users of the dataset to reach out with concerns. We also note that this dataset is limited in its monolingual coverage. ## Additional Information ### Dataset Curators The dataset was collected by Alex Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab during work done at Facebook. ### Licensing Information The data is released under cc-by-sa 4.0 following the original StackExchange [release](https://archive.org/details/stackexchange). ### Citation Information ```bibtex @misc{fabbri-etal-2022-answersumm, title={AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization}, author={Alexander R. Fabbri and Xiaojian Wu and Srini Iyer and Haoran Li and Mona Diab }, year={2022}, eprint={2111.06474}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2111.06474} } ```
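Following the `cluster_id` description in the AnswerSumm card above (a list of lists, with -1 meaning no cluster), here is a hedged sketch of aggregating answer sentences into clusters. The `thread` dict is a hand-made stand-in shaped like the card's `answers`/`sents` fields, not real AnswerSumm data:

```python
from collections import defaultdict

# Hand-made stand-in shaped like the card's `answers`/`sents` fields.
thread = {
    "answers": [
        {"sents": [
            {"text": "Demomen work best against a sentry farm.", "cluster_id": [[0]]},
            {"text": "Heavies or pyros can also be effective.", "cluster_id": [[1]]},
            {"text": "Thanks in advance!", "cluster_id": [[-1]]},
        ]},
    ],
}

clusters = defaultdict(list)
for answer in thread["answers"]:
    for sent in answer["sents"]:
        for annotator_ids in sent["cluster_id"]:  # one inner list per annotator
            for cid in annotator_ids:
                if cid != -1:  # -1 means the sentence belongs to no cluster
                    clusters[cid].append(sent["text"])

print(dict(clusters))
```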
MrClean
null
null
null
false
1
false
MrClean/Dalleproject
2022-06-09T18:33:12.000Z
null
false
4afddd9cc59089a6a59650cd847e1650be1e5399
[]
[ "title:DALL·E mini", "emoji:🥑", "colorFrom:yellow", "colorTo:green", "sdk:static", "license:apache-2.0" ]
https://huggingface.co/datasets/MrClean/Dalleproject/resolve/main/README.md
--- title: DALL·E mini emoji: 🥑 colorFrom: yellow colorTo: green sdk: static pinned: true license: apache-2.0 ---
Impostor
null
null
null
false
1
false
Impostor/Pixel
2022-06-09T21:15:33.000Z
null
false
07eeed48418a6392700eda3bba5d3eb077036864
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/Impostor/Pixel/resolve/main/README.md
--- license: cc-by-4.0 --- https://github.com/eladrich/pixel2style2pixel.git
allenai
null
null
null
false
1
false
allenai/mup-full
2022-10-25T10:29:44.000Z
null
false
1968c2e5f786501e647c46386dac435e5babd32d
[]
[ "license:odc-by" ]
https://huggingface.co/datasets/allenai/mup-full/resolve/main/README.md
--- license: - odc-by --- # MuP - Multi Perspective Scientific Document Summarization Generating summaries of scientific documents is known to be a challenging task. The majority of existing work in summarization assumes a single best gold summary for each given document. Having only one gold summary negatively impacts our ability to evaluate the quality of summarization systems, as writing summaries is a subjective activity. At the same time, annotating multiple gold summaries for scientific documents can be extremely expensive as it requires domain experts to read and understand long scientific documents. This shared task will enable exploring methods for generating multi-perspective summaries. We introduce a novel summarization corpus, leveraging data from scientific peer reviews to capture diverse perspectives from the reader's point of view. For more information about the dataset please refer to: https://github.com/allenai/mup
NbAiLab
null
null
Newspaperimages is a dataset of the top part of images of newspaper pages from selected newspapers
false
9
false
NbAiLab/newspapertop
2022-06-17T18:47:11.000Z
null
false
79ae9ebe9d8398dcb93c5a6d72c191ae3f490335
[]
[]
https://huggingface.co/datasets/NbAiLab/newspapertop/resolve/main/README.md
Nart
null
null
null
false
1
false
Nart/parallel-ab-ru
2022-09-06T10:11:37.000Z
null
false
bf786300c71d275931fcb545976880c962a877bc
[]
[ "language_creators:expert-generated", "language:ab", "language:ru", "license:cc0-1.0", "multilinguality:translation", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:text-generation", "task_ids:translation" ]
https://huggingface.co/datasets/Nart/parallel-ab-ru/resolve/main/README.md
--- language_creators: - expert-generated language: - ab - ru license: - cc0-1.0 multilinguality: - translation - multilingual pretty_name: Abkhazian Russian parallel corpus size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation task_ids: - translation --- ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Other Known Limitations](#other-known-limitations) ## Dataset Description - **Point of Contact:** [Nart Tlisha](mailto:daniel.abzakh@gmail.com) - **Size of the generated dataset:** 33.5 MB ### Dataset Summary The Abkhaz Russian parallel corpus dataset is a collection of 205,665 sentences/words extracted from different sources: e-books and web scraping. ## Dataset Creation ### Source Data Here is a link to the source on [github](https://github.com/danielinux7/Multilingual-Parallel-Corpus/blob/master/references.md) ## Considerations for Using the Data ### Other Known Limitations The accuracy of the dataset is around 95% (grammatical and orthographical errors).
sagantime
null
null
null
false
6
false
sagantime/NBAData
2022-06-10T13:55:59.000Z
null
false
4ac984b4226120f96ea25525647a997cf73f76a6
[]
[]
https://huggingface.co/datasets/sagantime/NBAData/resolve/main/README.md
SetFit
null
null
null
false
381
false
SetFit/wsc_fixed
2022-06-10T13:55:19.000Z
null
false
33313bc359ab206b3dedc32c3017ba5fc2b26a78
[]
[]
https://huggingface.co/datasets/SetFit/wsc_fixed/resolve/main/README.md
# Glue WSC Fixed This dataset is a port of the official [`wsc.fixed` dataset](https://huggingface.co/datasets/super_glue/viewer/wsc.fixed/train) on the Hub. Also, the test split is not labeled; the label column values are always -1.
sudo-s
null
null
null
false
1
false
sudo-s/datasetexample
2022-06-10T13:57:31.000Z
null
false
4c95d91e7feaf48078405d0ba520f3da9de3d23d
[]
[]
https://huggingface.co/datasets/sudo-s/datasetexample/resolve/main/README.md
SetFit
null
null
null
false
6
false
SetFit/wsc
2022-06-10T13:59:09.000Z
null
false
8694ce7ea420cbcce8a7e4316bfebce9ee4a0665
[]
[]
https://huggingface.co/datasets/SetFit/wsc/resolve/main/README.md
# Glue WSC This dataset is a port of the official [`wsc` dataset](https://huggingface.co/datasets/super_glue) on the Hub. Also, the test split is not labeled; the label column values are always -1.
SetFit
null
null
null
false
52
false
SetFit/CR
2022-06-21T09:04:33.000Z
null
false
01e427e689e9d3a9097f85eab7a91ce937cf5f98
[]
[]
https://huggingface.co/datasets/SetFit/CR/resolve/main/README.md
# Customer Reviews This dataset is a port of the official [`CR` dataset](https://github.com/hiyouga/Dual-Contrastive-Learning/tree/main/data) from [this paper](https://www.cs.uic.edu/~liub/FBS/opinion-mining-final-WSDM.pdf). There is no validation split.
pere
null
null
Italian tweets.
false
1
false
pere/italian_tweets_10M
2022-06-12T18:26:39.000Z
null
false
ddd2bbc0e2119770e28033421296e74818981e33
[]
[]
https://huggingface.co/datasets/pere/italian_tweets_10M/resolve/main/README.md
# Italian Tweets Test Dataset This is a dataset with 10M Italian tweets. It still contains errors. Please do not use. ## How to Use ```python from datasets import load_dataset data = load_dataset("pere/italian_tweets_10M") ```
Theivaprakasham
null
@article{Sun2021SpatialDG, title={Spatial Dual-Modality Graph Reasoning for Key Information Extraction}, author={Hongbin Sun and Zhanghui Kuang and Xiaoyu Yue and Chenhao Lin and Wayne Zhang}, journal={ArXiv}, year={2021}, volume={abs/2103.14470} }
WildReceipt is a collection of receipts. It contains, for each photo, a list of OCRs - with the bounding box, text, and class. It contains 1765 photos, with 25 classes, and 50000 text boxes. The goal is to benchmark "key information extraction" - extracting key information from documents https://arxiv.org/abs/2103.14470
false
71
false
Theivaprakasham/wildreceipt
2022-06-10T21:46:37.000Z
null
false
05f68a2dbe784d24da08c4f35fda61a86a21d2e6
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/Theivaprakasham/wildreceipt/resolve/main/README.md
--- license: apache-2.0 ---
ggnm
null
null
null
false
1
false
ggnm/ewww
2022-06-10T20:08:10.000Z
null
false
37a6f25b900d852495080624a3432b5cc231bc9e
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/ggnm/ewww/resolve/main/README.md
--- license: afl-3.0 ---
ggnm
null
null
null
false
2
false
ggnm/grdgytygh
2022-06-10T20:10:23.000Z
null
false
933704c272f00bd518ade032a0b1cac1c80a0938
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/ggnm/grdgytygh/resolve/main/README.md
--- license: afl-3.0 ---
deusprofano
null
null
null
false
1
false
deusprofano/images
2022-06-10T23:23:23.000Z
null
false
f7c998cc409db7ac860b5b84ef6a3ea53ae56954
[]
[ "license:other" ]
https://huggingface.co/datasets/deusprofano/images/resolve/main/README.md
--- license: other ---
ReverseThings
null
null
null
false
1
false
ReverseThings/lol
2022-06-11T03:24:01.000Z
null
false
cd2b1b4ea8112bdae1f80ae74d290715833d4169
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/ReverseThings/lol/resolve/main/README.md
--- license: afl-3.0 ---
Wendigofucker
null
null
null
false
1
false
Wendigofucker/GeneratedHorror
2022-06-11T03:55:07.000Z
null
false
439a75851cb739c75b7179a3d1d9a17e7224b8f1
[]
[ "license:other" ]
https://huggingface.co/datasets/Wendigofucker/GeneratedHorror/resolve/main/README.md
--- license: other ---
khalidalt
null
null
null
false
1
false
khalidalt/ultimate_arabic_news
2022-06-15T14:46:10.000Z
null
false
49db1aafbad19ee8a494342f74c1a640b5a70e75
[]
[]
https://huggingface.co/datasets/khalidalt/ultimate_arabic_news/resolve/main/README.md
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Ultimate Arabic News Dataset is a collection of single-label modern Arabic texts that are used in news websites and press articles. Arabic news data was collected by web scraping techniques from many famous news sites such as Al-Arabiya, Al-Youm Al-Sabea (Youm7), the news published on the Google search engine and other various sources. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information license: cc-by-4.0 ### Citation Information ``` @book{url, author = {Al-Dulaimi, Ahmed Hashim}, year = {2022}, month = {05}, website = {Mendeley Data, V1}, title = {Ultimate Arabic News Dataset}, doi = {10.17632/jz56k5wxz7.1} } ``` ### Contributions [More Information Needed]
noob123
null
null
null
false
1
false
noob123/small_augemented_nlp_dataset
2022-06-11T06:23:35.000Z
null
false
9a6a5918ca709882044f23470120423e0297d986
[]
[ "license:other" ]
https://huggingface.co/datasets/noob123/small_augemented_nlp_dataset/resolve/main/README.md
--- license: other ---
olivierdehaene
null
null
null
false
1
false
olivierdehaene/xkcd
2022-10-25T10:31:55.000Z
null
false
674d842241096b770b86bf5c69ac85d7a68a5fc3
[]
[ "language_creators:other", "language:en", "license:cc-by-sa-3.0", "license:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "task_categories:image-to-text", "task_categories:feature-extraction" ]
https://huggingface.co/datasets/olivierdehaene/xkcd/resolve/main/README.md
--- annotations_creators: [] language_creators: - other language: - en license: - cc-by-sa-3.0 - other multilinguality: - monolingual pretty_name: XKCD size_categories: - 1K<n<10K source_datasets: [] task_categories: - image-to-text - feature-extraction task_ids: [] --- # Dataset Card for "XKCD" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://xkcd.com/](https://xkcd.com/), [https://www.explainxkcd.com](https://www.explainxkcd.com) - **Repository:** [Hugging Face repository](https://huggingface.co/datasets/olivierdehaene/xkcd/tree/main) ### Dataset Summary XKCD is an export of all XKCD comics with their transcript and explanation scraped from [https://explainxkcd.com](https://explainxkcd.com). ## Dataset Structure ### Data Instances - `id`: `1` - `title`: `Barrel - Part 1` - `image_title`: `Barrel - Part 1` - `url`: `https://www.xkcd.com/1` - `image_url`: `https://imgs.xkcd.com/comics/barrel_cropped_(1).jpg` - `explained_url`: `https://www.explainxkcd.com/wiki/index.php/1:_Barrel_-_Part_1` - `transcript`: `[A boy sits in a barrel which is floating in an ocean.] Boy: i wonder where i'll float next? [A smaller frame with a zoom out of the boy in the barrel seen from afar. The barrel drifts into the distance. Nothing else can be seen.]` - `explanation`: `The comic shows a young boy floating in a barrel in an ocean that doesn't have a visible end. It comments on the unlikely optimism and perhaps naïveté people sometimes display. 
The boy is completely lost and seems hopelessly alone, without any plan or control of the situation. Yet, rather than afraid or worried, he is instead quietly curious: "I wonder where I'll float next?" Although not necessarily the situation in this comic, this is a behavior people often exhibit when there is nothing they can do about a problematic situation for a long time; they may have given up hope or developed a cavalier attitude as a coping mechanism. The title text expands on the philosophical content, with the boy representing the average human being: wandering through life with no real plan, quietly optimistic, always opportunistic and clueless as to what the future may hold. The isolation of the boy may also represent the way in which we often feel lost through life, never knowing quite where we are, believing that there is no one to whom to turn. This comic could also reflect on Randall's feelings towards creating xkcd in the first place; unsure of what direction the web comic would turn towards, but hopeful that it would eventually become the popular web comic that we know today. This is the first in a six-part series of comics whose parts were randomly published during the first several dozen strips. The series features a character that is not consistent with what would quickly become the xkcd stick figure style. The character is in a barrel. In 1110: Click and Drag there is a reference to this comic at 1 North, 48 East. After Randall released the full The Boy and his Barrel story on xkcd, it has been clear that the original Ferret story should also be included as part of the barrel series. The full series can be found here. 
They are listed below in the order Randall chose for the short story above: ` ### Data Fields - `id` - `title` - `url`: xkcd.com URL - `image_url` - `explained_url`: explainxkcd.com URL - `transcript`: English text transcript of the comic - `explanation`: English explanation of the comic ## Dataset Creation The dataset was scraped from both explainxkcd.com and xkcd.com. The dataset is therefore licensed under the Creative Commons Attribution-ShareAlike 3.0 license for the `transcript` and `explanation` fields, while the image itself is licensed under the Creative Commons Attribution-NonCommercial 2.5 license. See the [Copyrights](https://www.explainxkcd.com/wiki/index.php/explain_xkcd:Copyrights) page from explainxkcd.com for more explanations. ### Update You can update the dataset by using the `scrapper.py` script. First install the dependencies: ```bash pip install aiolimiter aiohttp beautifulsoup4 pandas ``` Then run the script: ```bash python scrapper.py ``` ## Considerations for Using the Data As the data was scraped, it is entirely possible that some fields are missing part of the original data. ## Additional Information ### Licensing Information The dataset is licensed under the Creative Commons Attribution-ShareAlike 3.0 license for the `transcript` and `explanation` fields, while the images are licensed under the Creative Commons Attribution-NonCommercial 2.5 license. ### Contributions Thanks to [@OlivierDehaene](https://github.com/OlivierDehaene) for adding this dataset.
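Given the flat fields listed in the XKCD card above (`id`, `title`, `transcript`, ...), here is a minimal sketch of selecting comics that actually have a transcript. The `records` list is a hand-made stand-in for the real export, and the second entry is hypothetical:

```python
# Hand-made records using the field names from the card above.
records = [
    {"id": 1, "title": "Barrel - Part 1", "transcript": "[A boy sits in a barrel...]"},
    {"id": 2, "title": "(hypothetical untranscribed comic)", "transcript": ""},
]

# Keep only comics whose transcript is non-empty after trimming whitespace.
with_transcript = [r for r in records if r["transcript"].strip()]
print([r["id"] for r in with_transcript])  # -> [1]
```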
TencentMedicalNet
null
null
null
false
1
false
TencentMedicalNet/MRBrains18
2022-06-12T01:08:51.000Z
null
false
9961dbe05d583ef6a90e0556be01d4d883c4e259
[]
[ "license:mit" ]
https://huggingface.co/datasets/TencentMedicalNet/MRBrains18/resolve/main/README.md
--- license: mit ---
trustwallet
null
null
null
false
1
false
trustwallet/22
2022-06-12T03:19:16.000Z
null
false
fd6d6a3b6083df02c5f814accda8bfff60c6b5e8
[]
[ "license:artistic-2.0" ]
https://huggingface.co/datasets/trustwallet/22/resolve/main/README.md
--- license: artistic-2.0 --- crypto Trust**wallet customer service Support Number +**1-**818-869-**2884
trustwallet
null
null
null
false
1
false
trustwallet/24
2022-06-12T03:35:25.000Z
null
false
3552e9fe7befc0953a0e05dfd23c9b7a43dc6d09
[]
[ "license:artistic-2.0" ]
https://huggingface.co/datasets/trustwallet/24/resolve/main/README.md
--- license: artistic-2.0 --- crypto Trust**wallet customer service Support Number +**1-**818-869-**2884
psyche
null
null
null
false
749
false
psyche/kowiki
2022-07-15T15:32:42.000Z
null
false
15bec1c1f30ccb7c843c0c51540321d2d0d75b4f
[]
[ "language:ko", "license:apache-2.0" ]
https://huggingface.co/datasets/psyche/kowiki/resolve/main/README.md
--- language: - ko license: - apache-2.0 ---
jessedvixen
null
null
null
false
1
false
jessedvixen/obama
2022-06-12T04:54:06.000Z
null
false
2685ddade4212f8589466a9ff9aaad55149400d7
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/jessedvixen/obama/resolve/main/README.md
--- license: afl-3.0 ---
pnrr
null
null
null
false
1
false
pnrr/data-turkish-class
2022-07-01T20:02:52.000Z
turkish-reviews
false
5f33dccd14abe8215eaa36367a4b69a838344c14
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:tr", "license:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-classification", "task_ids:sentiment-classification" ]
https://huggingface.co/datasets/pnrr/data-turkish-class/resolve/main/README.md
--- pretty_name: Turkish_data annotations_creators: - expert-generated language_creators: - expert-generated language: - tr license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: turkish-reviews train-eval-index: - config: plain_text task: text-classification task_id: binary_classification col_mapping: text: text label: target ---
Jerimee
null
null
null
false
1
false
Jerimee/sobriquet
2022-06-13T22:17:48.000Z
null
false
6b9bd3c7b586bb335e0071e37aedd8c036643730
[]
[ "license:cc0-1.0" ]
https://huggingface.co/datasets/Jerimee/sobriquet/resolve/main/README.md
--- license: cc0-1.0 --- This is my first dataset. I intend for it to contain a list of given names. Some of them will be silly ("goblin names") - the type an ogre or a fairy might have in a children's story or fantasy novel. The rest will be more mundane. How do I get the dataviewer to work? https://huggingface.co/datasets/sudo-s/example1 {"Jerimee--sobriquet": {"description": "1200+ names, about a third of them are silly names like a goblin might have", "license": "cc0-1.0", "features": {"Type": {"dtype": "string", "id": null, "_type": "Value"}, "Name": {"dtype": "string", "id": null, "_type": "Value"}, "Bool": {"dtype": "int64", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": null, "config_name": null, "version": null, "download_checksums": null, "download_size": , "post_processing_size": null, "dataset_size": , "size_in_bytes":
sagot
null
@inproceedings{sagot:inria-00521242, TITLE = {{The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French}}, AUTHOR = {Sagot, Beno{\^i}t}, URL = {https://hal.inria.fr/inria-00521242}, BOOKTITLE = {{7th international conference on Language Resources and Evaluation (LREC 2010)}}, ADDRESS = {Valletta, Malta}, YEAR = {2010}, MONTH = May, PDF = {https://hal.inria.fr/inria-00521242/file/lrec10lefff.pdf}, HAL_ID = {inria-00521242}, HAL_VERSION = {v1}, }
The lefff-morpho dataset gives access to the morphological information, in both its original format and the UniMorph format.
false
13
false
sagot/lefff_morpho
2022-07-23T15:52:46.000Z
null
false
9f599f415567235036fe3355b3f96c93f254d043
[]
[ "license:lgpl-lr" ]
https://huggingface.co/datasets/sagot/lefff_morpho/resolve/main/README.md
--- license: lgpl-lr --- # Dataset Card for lefff morpho ## Dataset Description - **Homepage:** [http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html](http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html) - **Repository:** [https://gitlab.inria.fr/almanach/alexina/lefff](https://gitlab.inria.fr/almanach/alexina/lefff) - **Paper:** [http://www.lrec-conf.org/proceedings/lrec2010/pdf/701_Paper.pdf](http://www.lrec-conf.org/proceedings/lrec2010/pdf/701_Paper.pdf) - **Point of Contact:** [Benoît Sagot](benoit.sagot@inria.fr) ### Dataset Summary The Lefff, currently in its 3.5 version, is one of the main morphological and syntactic lexicons for French. This Hugging Face dataset provides easy access to the extensional morphological information in the Lefff, i.e. to the 4-tuples (form, lemma, category, morphosyntactic features) and to the amalgams (e.g. _aux_ = _à_ + _les_) it contains. Category and morphosyntactic features are provided both in the original Lefff format and following the UniMorph guidelines. ### Languages French ## Dataset Creation The main author of the resource is Benoît Sagot (Inria, France). Please refer to the main paper and other Lefff-related papers for details. ## Additional Information ### Licensing Information The dataset, like the whole Lefff, is distributed under the LGPL-LR licence. ### Citation Information The main paper regarding the Lefff can be found [here](https://aclanthology.org/L10-1487/). 
Here is the BibTeX entry for the paper: ``` @inproceedings{sagot:inria-00521242, TITLE = {{The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French}}, AUTHOR = {Sagot, Beno{\^i}t}, URL = {https://hal.inria.fr/inria-00521242}, BOOKTITLE = {{7th international conference on Language Resources and Evaluation (LREC 2010)}}, ADDRESS = {Valletta, Malta}, YEAR = {2010}, MONTH = May, PDF = {https://hal.inria.fr/inria-00521242/file/lrec10lefff.pdf}, HAL_ID = {inria-00521242}, HAL_VERSION = {v1}, } ``` For specific parts of speech or other parts of the lexicon, please cite the corresponding papers whenever relevant.
smilerip
null
null
null
false
1
false
smilerip/smileip
2022-06-12T20:14:32.000Z
null
false
5936dd3e7d7b719add14df3ecb17dd1580d3f3c1
[]
[ "license:other" ]
https://huggingface.co/datasets/smilerip/smileip/resolve/main/README.md
--- license: other ---
arjundd
null
null
null
false
1
false
arjundd/mridata-stanford-knee-3d-fse
2022-06-12T22:09:03.000Z
null
false
386707653616d0f84850b6d8f09efcfa5433e964
[]
[ "license:cc-by-nc-4.0" ]
https://huggingface.co/datasets/arjundd/mridata-stanford-knee-3d-fse/resolve/main/README.md
--- license: cc-by-nc-4.0 ---
espejelomar
null
null
null
false
1
false
espejelomar/my_embeddings
2022-06-12T20:55:50.000Z
null
false
a9f14e68beea403d52c15c998e2c6f301538b760
[]
[ "license:mit" ]
https://huggingface.co/datasets/espejelomar/my_embeddings/resolve/main/README.md
--- license: mit ---
amueller
null
@inproceedings{mueller-etal-2022-coloring, title = "Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models", author = "Mueller, Aaron and Frank, Robert and Linzen, Tal and Wang, Luheng and Schuster, Sebastian", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.106", doi = "10.18653/v1/2022.findings-acl.106", pages = "1352--1368", }
This is the dataset used for Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models.
false
1
false
amueller/syntactic_transformations
2022-10-23T06:11:48.000Z
null
false
9cd5b2f912bc15370f3c951f780654a513da2e10
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "language:de", "license:mit", "multilinguality:2 languages", "size_categories:100K<n<1M", "source_datasets:original", "task_ids:syntactic-transformations" ]
https://huggingface.co/datasets/amueller/syntactic_transformations/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en - de license: - mit multilinguality: - 2 languages size_categories: - 100K<n<1M source_datasets: - original task_categories: - syntactic-evaluation task_ids: - syntactic-transformations --- # Dataset Card for syntactic_transformations ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/sebschu/multilingual-transformations - **Paper:** [Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models](https://aclanthology.org/2022.findings-acl.106/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Aaron Mueller](mailto:amueller@jhu.edu) ### Dataset Summary This contains the syntactic transformations datasets used in [Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models](https://aclanthology.org/2022.findings-acl.106/). 
It consists of English and German question formation and passivization transformations. This dataset also contains zero-shot cross-lingual transfer training and evaluation data. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English and German. ## Dataset Structure ### Data Instances A typical data point consists of a source sequence ("src"), a target sequence ("tgt"), and a task prefix ("prefix"). The prefix indicates whether a given sequence should be kept the same in the target (indicated by the "decl:" prefix) or transformed into a question/passive ("quest:"/"passiv:", respectively). An example follows: {"src": "the yak has entertained the walruses that have amused the newt.", "tgt": "has the yak entertained the walruses that have amused the newt?", "prefix": "quest: " } ### Data Fields - src: the original source sequence. - tgt: the transformed target sequence. - prefix: indicates which transformation to perform to map from the source to target sequences. ### Data Splits The datasets are split into training, dev, test, and gen ("generalization") sets. The training sets are for fine-tuning the model. The dev and test sets are for evaluating model abilities on in-domain transformations. The generalization sets are for evaluating the inductive biases of the model. NOTE: for the zero-shot cross-lingual transfer datasets, the generalization sets are split into in-domain and out-of-domain syntactic structures. For in-domain transformations, use "gen_rc_o" for question formation or "gen_pp_o" for passivization. For out-of-domain transformations, use "gen_rc_s" for question formation or "gen_pp_s" for passivization. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
brikles
null
null
null
false
1
false
brikles/yea
2022-06-13T07:09:38.000Z
null
false
ddcc58d29d82e00d0ecdc3eee5d100d4b05cc49c
[]
[ "license:other" ]
https://huggingface.co/datasets/brikles/yea/resolve/main/README.md
--- license: other ---
Timtel
null
null
null
false
1
false
Timtel/autotrain-data-Botm
2022-06-13T08:53:38.000Z
null
false
36b25d29dcc966610f53f7bb0a9dabcee3844a47
[]
[]
https://huggingface.co/datasets/Timtel/autotrain-data-Botm/resolve/main/README.md
111
ai4bharat
null
\
\
false
145
false
ai4bharat/naamapadam
2022-09-23T05:35:48.000Z
null
false
d7d473f3c3a376c61482d92852ae540144fc6723
[]
[ "annotations_creators:machine-generated", "language_creators:machine-generated", "language:as", "language:bn", "language:gu", "language:hi", "language:kn", "language:ml", "language:mr", "language:or", "language:pa", "language:ta", "language:te", "license:cc0-1.0", "multilinguality:multil...
https://huggingface.co/datasets/ai4bharat/naamapadam/resolve/main/README.md
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te license: - cc0-1.0 multilinguality: - multilingual pretty_name: naamapadam size_categories: - 1M<n<10M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition --- # Dataset Card for naamapadam ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/AI4Bharat/indicner - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** Anoop Kunchukuttan ### Dataset Summary Naamapadam is the largest publicly available Named Entity Annotated dataset for 11 Indic languages. This corpus was created by projecting named entities from the English side to the Indic language side of the English-Indic languages parallel corpus. The dataset additionally contains manually labelled test sets for 5 Indic languages containing 500-1000 sentences. 
### Supported Tasks and Leaderboards **Tasks:** NER on Indian languages. **Leaderboards:** Currently there is no Leaderboard for this dataset. ### Languages - `Assamese (as)` - `Bengali (bn)` - `Gujarati (gu)` - `Kannada (kn)` - `Hindi (hi)` - `Malayalam (ml)` - `Marathi (mr)` - `Oriya (or)` - `Punjabi (pa)` - `Tamil (ta)` - `Telugu (te)` ## Dataset Structure ### Data Instances {'words': ['उन्हेनें', 'शिकांगों','में','बोरोडिन','की','पत्नी','को','तथा','वाशिंगटन','में','रूसी','व्यापार','संघ','को','पैसे','भेजे','।'], 'ner': [0, 3, 0, 1, 0, 0, 0, 0, 3, 0, 5, 6, 6, 0, 0, 0, 0], } ### Data Fields - `words`: Raw tokens in the dataset. - `ner`: the NER tags for this dataset. ### Data Splits | Language | Train | Validation | Test | |---:|---:|---:|---:| | as | 10266 | 52 | 51 | | bn | 961679 | 4859 | 607 | | gu | 472845 | 2389 | 50 | | hi | 985787 | 13460 | 437 | | kn | 471763 | 2381 | 1019 | | ml | 716652 | 3618 | 974 | | mr | 455248 | 2300 | 1080 | | or | 196793 | 993 | 994 | | pa | 463534 | 2340 | 2342 | | ta | 497882 | 2795 | 49 | | te | 507741 | 2700 | 53 | ## Usage You should have the 'datasets' package installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command and install via pip: ```bash pip install datasets ``` To use the dataset, please use:<br/> ```python from datasets import load_dataset naamapadam = load_dataset('ai4bharat/naamapadam') ``` ## Dataset Creation We use the parallel corpus from the Samanantar Dataset between English and the 11 major Indian languages to create the NER dataset. We annotate the English portion of the parallel corpus with an existing state-of-the-art NER model. We use word-level alignments learned from the parallel corpus to project the entity labels from English to the Indian language. ### Curation Rationale naamapadam was built from [Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/). This dataset was built for the task of Named Entity Recognition in Indic languages. 
The dataset was introduced to provide new resources for Indic languages, which are under-served in Natural Language Processing. ### Source Data [Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/) #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process NER annotations were done following the CoNLL-2003 guidelines. #### Who are the annotators? The annotations for the test set have been done by volunteers who are proficient in the respective languages. We would like to thank all the volunteers: - Anil Mhaske - Anoop Kunchukuttan - Archana Mhaske - Arnav Mhaske - Gowtham Ramesh - Harshit Kedia - Nitin Kedia - Rudramurthy V - Sangeeta Rajagopal - Sumanth Doddapaneni - Vindhya DS - Yash Madhani ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to provide a large-scale Named Entity Recognition dataset for Indic languages. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data. 
### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information <!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/"> <img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" /> <img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/> </a> <br/> --> **CC0 License Statement** <a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/"> <img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/> </a> <br> <br> - We do not own any of the text from which this data has been extracted. - We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0). - To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/aksharantar/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Naamapadam</span> manually collected data and existing sources. - This work is published from: India. ### Citation Information If you are using the Naamapadam corpus, please cite the following article: ``` @misc{mhaske2022indicner, title={Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages}, author={Arnav Mhaske, Harshit Kedia, Rudramurthy. V, Anoop Kunchukuttan, Pratyush Kumar, Mitesh Khapra}, year={2022}, eprint={to be published soon}, } ```
grawcse
null
null
null
false
1
false
grawcse/Sinhala_Facebook_posts_sentence_embeddings
2022-06-13T11:27:50.000Z
null
false
7221eb63317c7d65d7f9c3a1a58fbff10451e510
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/grawcse/Sinhala_Facebook_posts_sentence_embeddings/resolve/main/README.md
--- license: apache-2.0 ---
lewtun
null
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
false
1
false
lewtun/raft-test-submission
2022-06-13T12:08:43.000Z
null
false
a4d97d3e9333b1754ff79f4a8f0baf62a9a50a44
[]
[ "benchmark:raft", "type:prediction", "submission_name:Test submission 0" ]
https://huggingface.co/datasets/lewtun/raft-test-submission/resolve/main/README.md
--- benchmark: raft type: prediction submission_name: Test submission 0 --- # RAFT submissions for raft-test-submission ## Submitting to the leaderboard To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps: 1. Generate predictions on the unlabeled test set of each task 2. Validate the predictions are compatible with the evaluation framework 3. Push the predictions to the Hub! See the instructions below for more details. ### Rules 1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week. 2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed. 3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted. 4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches. ### Submission file format For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns: * ID (int) * Label (string) See the dummy predictions in the `data` folder for examples with the expected format. 
Here is a simple example that creates a majority-class baseline: ```python from pathlib import Path import pandas as pd from collections import Counter from datasets import load_dataset, get_dataset_config_names tasks = get_dataset_config_names("ought/raft") for task in tasks: # Load dataset raft_subset = load_dataset("ought/raft", task) # Compute majority class over training set counter = Counter(raft_subset["train"]["Label"]) majority_class = counter.most_common(1)[0][0] # Load predictions file preds = pd.read_csv(f"data/{task}/predictions.csv") # Convert label IDs to label names preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class) # Save predictions preds.to_csv(f"data/{task}/predictions.csv", index=False) ``` As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following: ``` data ├── ade_corpus_v2 │ ├── predictions.csv │ └── task.json ├── banking_77 │ ├── predictions.csv │ └── task.json ├── neurips_impact_statement_risks │ ├── predictions.csv │ └── task.json ├── one_stop_english │ ├── predictions.csv │ └── task.json ├── overruling │ ├── predictions.csv │ └── task.json ├── semiconductor_org_types │ ├── predictions.csv │ └── task.json ├── systematic_review_inclusion │ ├── predictions.csv │ └── task.json ├── tai_safety_research │ ├── predictions.csv │ └── task.json ├── terms_of_service │ ├── predictions.csv │ └── task.json ├── tweet_eval_hate │ ├── predictions.csv │ └── task.json └── twitter_complaints ├── predictions.csv └── task.json ``` ### Validate your submission To ensure that your submission files are correctly formatted, run the following command from the root of the repository: ``` python cli.py validate ``` If everything is correct, you should see the following message: ``` All submission files validated! ✨ 🚀 ✨ Now you can make a submission 🤗 ``` ### Push your submission to the Hugging Face Hub! 
The final step is to commit your files and push them to the Hub: ``` python cli.py submit ``` If there are no errors, you should see the following message: ``` Submission successful! 🎉 🥳 🎉 Your submission will be evaluated on Sunday 05 September 2021 ⏳ ``` where the evaluation is run every Sunday and your results will be visible on the leaderboard.
Rodekool
null
null
Still a WIP; the dataset originally comes from Open Data van de Rechtspraak.
false
1
false
Rodekool/ornl
2022-07-01T21:01:33.000Z
null
false
c47bac1b793a31b0f3bdd912f407a5f64646d674
[]
[ "license:mit" ]
https://huggingface.co/datasets/Rodekool/ornl/resolve/main/README.md
--- license: mit --- Currently, a work in progress to publish a modified subset of the openrechtspraak.nl dataset for NLP
daokang
null
null
null
false
1
false
daokang/bidai
2022-06-14T12:58:39.000Z
null
false
21dd0c2c4f2c1a53ee9c01dff2deb233d8899e17
[]
[ "license:other" ]
https://huggingface.co/datasets/daokang/bidai/resolve/main/README.md
--- license: other ---
null
null
@inproceedings{socher2013recursive, title={Recursive deep models for semantic compositionality over a sentiment treebank}, author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher}, booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing}, pages={1631--1642}, year={2013} }
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (positive/negative) class split, and use only sentence-level labels.
false
14,370
false
sst2
2022-11-03T16:47:12.000Z
sst
false
201742d667ca3bc968dc8cc949d350522d9e77c0
[]
[ "annotations_creators:crowdsourced", "language_creators:found", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-classification", "task_ids:sentiment-classification" ]
https://huggingface.co/datasets/sst2/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: sst pretty_name: Stanford Sentiment Treebank v2 dataset_info: features: - name: idx dtype: int32 - name: sentence dtype: string - name: label dtype: class_label: names: 0: negative 1: positive splits: - name: test num_bytes: 216868 num_examples: 1821 - name: train num_bytes: 4690022 num_examples: 67349 - name: validation num_bytes: 106361 num_examples: 872 download_size: 7439277 dataset_size: 5013251 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nlp.stanford.edu/sentiment/ - **Repository:** - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment 
Treebank](https://www.aclweb.org/anthology/D13-1170/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges. Binary classification experiments on full sentences (negative or somewhat negative vs somewhat positive or positive with neutral sentences discarded) refer to the dataset as SST-2 or SST binary. ### Supported Tasks and Leaderboards - `sentiment-classification` ### Languages The text in the dataset is in English (`en`). ## Dataset Structure ### Data Instances ``` {'idx': 0, 'sentence': 'hide new secretions from the parental units ', 'label': 0} ``` ### Data Fields - `idx`: Monotonically increasing index ID. - `sentence`: Complete sentence expressing an opinion about a film. - `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1). ### Data Splits | | train | validation | test | |--------------------|---------:|-----------:|-----:| | Number of examples | 67349 | 872 | 1821 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Rotten Tomatoes reviewers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown. ### Citation Information ```bibtex @inproceedings{socher-etal-2013-recursive, title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", author = "Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D. and Ng, Andrew and Potts, Christopher", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1170", pages = "1631--1642", } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
daokang
null
null
null
false
1
false
daokang/bs
2022-06-14T12:02:04.000Z
null
false
cc4cc17db9c6b565f63120f17004fa6b31d57d15
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/daokang/bs/resolve/main/README.md
--- license: afl-3.0 ---
jacklin
null
null
null
false
4
false
jacklin/msmarco_passage_ranking_queries
2022-06-13T21:46:15.000Z
null
false
07c4d89846054c20b3cf55b961ba1c2c31896562
[]
[ "arxiv:1611.09268" ]
https://huggingface.co/datasets/jacklin/msmarco_passage_ranking_queries/resolve/main/README.md
These are the preprocessed queries from the MS MARCO passage (v1) ranking corpus. *[MS MARCO: A human generated MAchine Reading COmprehension dataset](https://arxiv.org/pdf/1611.09268.pdf)* Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al.
jacklin
null
null
null
false
84
false
jacklin/msmarco_passage_ranking_corpus
2022-06-13T21:45:41.000Z
null
false
9abcec93c78c145abb4646ac0bd6056f36556e61
[]
[ "arxiv:1611.09268" ]
https://huggingface.co/datasets/jacklin/msmarco_passage_ranking_corpus/resolve/main/README.md
This is the preprocessed data from the MS MARCO passage (v1) ranking corpus. *[MS MARCO: A human generated MAchine Reading COmprehension dataset](https://arxiv.org/pdf/1611.09268.pdf)* Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al.
gsarti
null
@inproceedings{haagsma-etal-2020-magpie, title = "{MAGPIE}: A Large Corpus of Potentially Idiomatic Expressions", author = "Haagsma, Hessel and Bos, Johan and Nissim, Malvina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.35", pages = "279--287", language = "English", ISBN = "979-10-95546-34-4", } @inproceedings{dankers-etal-2022-transformer, title = "Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation", author = "Dankers, Verna and Lucas, Christopher and Titov, Ivan", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.252", doi = "10.18653/v1/2022.acl-long.252", pages = "3608--3626", }
The MAGPIE corpus is a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work at the end of the day.' for the idiom 'at the end of the day'. This version of the dataset reflects the filtered subset used by Dankers et al. (2022) in their investigation on how PIEs are represented by NMT models. The authors use 37k samples annotated as fully figurative or literal, for 1482 idioms that contain nouns, numerals or adjectives that are colours (which they refer to as keywords). Because idioms show syntactic and morphological variability, the focus is mostly put on nouns. PIEs and their context are separated using the original corpus’s word-level annotations.
false
6
false
gsarti/magpie
2022-10-27T08:37:46.000Z
null
false
fa6ae9d93b03e6403e82696496dfbd2cf5c3d3d5
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-classification", "task_categories:text2text-generation", "task_categorie...
https://huggingface.co/datasets/gsarti/magpie/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification - text2text-generation - translation task_ids: [] pretty_name: magpie tags: - idiomaticity-classification --- # Dataset Card for MAGPIE ## Table of Contents - [Dataset Card for MAGPIE](#dataset-card-for-itacola) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Original Repository:** [hslh/magpie-corpus](https://github.com/hslh/magpie-corpus) - **Other Repository:** [vernadankers/mt_idioms](https://github.com/vernadankers/mt_idioms) - **Original Paper:** [ACL Anthology](https://aclanthology.org/2020.lrec-1.35/) - **Other Paper:** [ACL Anthology](https://aclanthology.org/2022.acl-long.252/) - **Point of Contact:** [Hessel Haagsma, Verna Dankers](vernadankers@gmail.com) ### Dataset Summary The MAGPIE corpus ([Haagsma et al. 2020](https://aclanthology.org/2020.lrec-1.35/)) is a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work at the end of the day.' for the idiom 'at the end of the day'. This version of the dataset reflects the filtered subset used by [Dankers et al. 
(2022)](https://aclanthology.org/2022.acl-long.252/) in their investigation on how PIEs are represented by NMT models. The authors use 37k samples annotated as fully figurative or literal, for 1482 idioms that contain nouns, numerals or adjectives that are colors (which they refer to as keywords). Because idioms show syntactic and morphological variability, the focus is mostly put on nouns. PIEs and their context are separated using the original corpus’s word-level annotations. ### Languages The language data in MAGPIE is in English (BCP-47 `en`) ## Dataset Structure ### Data Instances The `magpie` configuration contains sentences with annotations for the presence, usage and type of potentially idiomatic expressions. An example from the `train` split of the `magpie` config (default) is provided below. ```json { 'sentence': 'There seems to be a dearth of good small tools across the board.', 'annotation': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1], 'idiom': 'across the board', 'usage': 'figurative', 'variant': 'identical', 'pos_tags': ['ADV', 'VERB', 'PART', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN'] } ``` The text is provided as-is, without further preprocessing or tokenization. The fields are the following: - `sentence`: The sentence containing a PIE. - `annotation`: List of 0s and 1s of the same length as the whitespace-tokenized sentence, with 1s corresponding to the position of the idiomatic expression. - `idiom`: The idiom contained in the sentence in its base form. - `usage`: Either `figurative` or `literal`, depending on the usage of the PIE. - `variant`: `identical` if the PIE matches the base form of the idiom, otherwise specifies the variation. - `pos_tags`: List of POS tags associated with words in the sentence. 
### Data Splits | config| train| |----------:|-----:| |`magpie` | 44451 | ### Dataset Creation Please refer to the original article [MAGPIE: A Large Corpus of Potentially Idiomatic Expressions](https://aclanthology.org/2020.lrec-1.35) for additional information on dataset creation, and to the article [Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation](https://aclanthology.org/2022.acl-long.252) for further information on the filtering of selected idioms. ## Additional Information ### Dataset Curators The original authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com). ### Licensing Information The dataset is licensed under [Creative Commons 4.0 license (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/) ### Citation Information Please cite the authors if you use this corpus in your work: ```bibtex @inproceedings{haagsma-etal-2020-magpie, title = "{MAGPIE}: A Large Corpus of Potentially Idiomatic Expressions", author = "Haagsma, Hessel and Bos, Johan and Nissim, Malvina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.35", pages = "279--287", language = "English", ISBN = "979-10-95546-34-4", } @inproceedings{dankers-etal-2022-transformer, title = "Can Transformer be Too Compositional? 
Analysing Idiom Processing in Neural Machine Translation", author = "Dankers, Verna and Lucas, Christopher and Titov, Ivan", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.252", doi = "10.18653/v1/2022.acl-long.252", pages = "3608--3626", } ```
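The `annotation` field of a MAGPIE instance is aligned one flag per whitespace token of `sentence`. A minimal sketch of recovering the flagged PIE tokens from the sample instance shown in the card above (the helper name `pie_tokens` is illustrative, not part of any loader; note that in this sample the mask flags 'across' and 'board.' but leaves the article 'the' unflagged):

```python
def pie_tokens(sentence: str, annotation: list) -> list:
    """Return the whitespace tokens flagged (1) as part of the PIE."""
    tokens = sentence.split()
    if len(tokens) != len(annotation):
        raise ValueError("annotation mask length must match the token count")
    return [tok for tok, flag in zip(tokens, annotation) if flag == 1]

# Sample instance from the dataset card
sentence = "There seems to be a dearth of good small tools across the board."
annotation = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1]
print(pie_tokens(sentence, annotation))  # ['across', 'board.']
```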
PiC
null
@article{pham2022PiC, title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search}, author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh}, journal={arXiv preprint arXiv:2207.09068}, year={2022} }
Phrase in Context is a curated benchmark for phrase understanding and semantic search, consisting of three tasks of increasing difficulty: Phrase Similarity (PS), Phrase Retrieval (PR) and Phrase Sense Disambiguation (PSD). The datasets are annotated by 13 linguistic experts on Upwork and verified by two groups: ~1000 AMT crowdworkers and another set of 5 linguistic experts. The PiC benchmark is distributed under CC-BY-NC 4.0.
false
95
false
PiC/phrase_retrieval
2022-08-26T21:23:03.000Z
phrase-in-context
false
c2968b339697badc965b6ae1fb974c38dc094ec5
[]
[ "annotations_creators:expert-generated", "language_creators:found", "language_creators:expert-generated", "language:en", "license:cc-by-nc-4.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-retrieval" ]
https://huggingface.co/datasets/PiC/phrase_retrieval/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - found - expert-generated language: - en license: - cc-by-nc-4.0 multilinguality: - monolingual paperswithcode_id: phrase-in-context pretty_name: 'PiC: Phrase Retrieval' size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-retrieval task_ids: [] --- # Dataset Card for "PiC: Phrase Retrieval" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/) - **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context) - **Paper:** - **Leaderboard:** - **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>) ### Dataset Summary PR is a phrase retrieval task with the goal of finding a phrase **t** in a given document **d** such that **t** is semantically similar to the query phrase, which is the paraphrase **q**<sub>1</sub> provided by annotators. 
We release two versions of PR: **PR-pass** and **PR-page**, i.e., datasets of 3-tuples (query **q**<sub>1</sub>, target phrase **t**, document **d**) where **d** is a random 11-sentence passage that contains **t** or an entire Wikipedia page. While PR-pass contains 28,147 examples, PR-page contains slightly fewer examples (28,098) as we remove those trivial examples whose Wikipedia pages contain exactly the query phrase (in addition to the target phrase). Both datasets are split into 5K/3K/~20K for test/dev/train, respectively. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English. ## Dataset Structure ### Data Instances **PR-pass** * Size of downloaded dataset files: 43.61 MB * Size of the generated dataset: 36.98 MB * Total amount of disk used: 80.59 MB An example of 'train' looks as follows. ``` { "id": "3478-1", "title": "https://en.wikipedia.org/wiki?curid=181261", "context": "The 425t was a 'pizza box' design with a single network expansion slot. The 433s was a desk-side server systems with multiple expansion slots. Compatibility. PC compatibility was possible either through software emulation, using the optional product DPCE, or through a plug-in card carrying an Intel 80286 processor. A third-party plug-in card with a 386 was also available. An Apollo Token Ring network card could also be placed in a standard PC and network drivers allowed it to connect to a server running a PC SMB (Server Message Block) file server. Usage. Although Apollo systems were easy to use and administer, they became less cost-effective because the proprietary operating system made software more expensive than Unix software. The 68K processors were slower than the new RISC chips from Sun and Hewlett-Packard. Apollo addressed both problems by introducing the RISC-based DN10000 and Unix-friendly Domain/OS operating system. 
However, the DN10000, though fast, was extremely expensive, and a reliable version of Domain/OS came too late to make a difference.", "query": "dependable adaptation", "answers": { "text": ["reliable version"], "answer_start": [1006] } } ``` **PR-page** * Size of downloaded dataset files: 421.56 MB * Size of the generated dataset: 412.17 MB * Total amount of disk used: 833.73 MB An example of 'train' looks as follows. ``` { "id": "5961-2", "title": "https://en.wikipedia.org/wiki?curid=354711", "context": "Joseph Locke FRSA (9 August 1805 – 18 September 1860) was a notable English civil engineer of the nineteenth century, particularly associated with railway projects. Locke ranked alongside Robert Stephenson and Isambard Kingdom Brunel as one of the major pioneers of railway development. Early life and career. Locke was born in Attercliffe, Sheffield in Yorkshire, moving to nearby Barnsley when he was five. By the age of 17, Joseph had already served an apprenticeship under William Stobart at Pelaw, on the south bank of the Tyne, and under his own father, William. He was an experienced mining engineer, able to survey, sink shafts, to construct railways, tunnels and stationary engines. Joseph's father had been a manager at Wallbottle colliery on Tyneside when George Stephenson was a fireman there. In 1823, when Joseph was 17, Stephenson was involved with planning the Stockton and Darlington Railway. He and his son Robert Stephenson visited William Locke and his son at Barnsley and it was arranged that Joseph would go to work for the Stephensons. The Stephensons established a locomotive works near Forth Street, Newcastle upon Tyne, to manufacture locomotives for the new railway. Joseph Locke, despite his youth, soon established a position of authority. He and Robert Stephenson became close friends, but their friendship was interrupted, in 1824, by Robert leaving to work in Colombia for three years. Liverpool and Manchester Railway. 
George Stephenson carried out the original survey of the line of the Liverpool and Manchester Railway, but this was found to be flawed, and the line was re-surveyed by a talented young engineer, Charles Vignoles. Joseph Locke was asked by the directors to carry out another survey of the proposed tunnel works and produce a report. The report was highly critical of the work already done, which reflected badly on Stephenson. Stephenson was furious and henceforth relations between the two men were strained, although Locke continued to be employed by Stephenson, probably because the latter recognised his worth. Despite the many criticisms of Stephenson's work, when the bill for the new line was finally passed, in 1826, Stephenson was appointed as engineer and he appointed Joseph Locke as his assistant to work alongside Vignoles, who was the other assistant. However, a clash of personalities between Stephenson and Vignoles led to the latter resigning, leaving Locke as the sole assistant engineer. Locke took over responsibility for the western half of the line. One of the major obstacles to be overcome was Chat Moss, a large bog that had to be crossed. Although, Stephenson usually gets the credit for this feat, it is believed that it was Locke who suggested the correct method for crossing the bog. Whilst the line was being built, the directors were trying to decide whether to use standing engines or locomotives to propel the trains. Robert Stephenson and Joseph Locke were convinced that locomotives were vastly superior, and in March 1829 the two men wrote a report demonstrating the superiority of locomotives when used on a busy railway. The report led to the decision by the directors to hold an open trial to find the best locomotive. This was the Rainhill Trials, which were run in October 1829, and were won by \"Rocket\". When the line was finally opened in 1830, it was planned for a procession of eight trains to travel from Liverpool to Manchester and back. 
George Stephenson drove the leading locomotive \"Northumbrian\" and Joseph Locke drove \"Rocket\". The day was marred by the death of William Huskisson, the Member of Parliament for Liverpool, who was struck and killed by \"Rocket\". Grand Junction Railway. In 1829 Locke was George Stephenson's assistant, given the job of surveying the route for the Grand Junction Railway. This new railway was to join Newton-le-Willows on the Liverpool and Manchester Railway with Warrington and then on to Birmingham via Crewe, Stafford and Wolverhampton, a total of 80 miles. Locke is credited with choosing the location for Crewe and recommending the establishment there of shops required for the building and repairs of carriages and wagons as well as engines. During the construction of the Liverpool and Manchester Railway, Stephenson had shown a lack of ability in organising major civil engineering projects. On the other hand, Locke's ability to manage complex projects was well known. The directors of the new railway decided on a compromise whereby Locke was made responsible for the northern half of the line and Stephenson was made responsible for the southern half. However Stephenson's administrative inefficiency soon became apparent, whereas Locke estimated the costs for his section of the line so meticulously and speedily, that he had all of the contracts signed for his section of the line before a single one had been signed for Stephenson's section. The railway company lost patience with Stephenson, but tried to compromise by making both men joint-engineers. Stephenson's pride would not let him accept this, and so he resigned from the project. By autumn of 1835 Locke had become chief engineer for the whole of the line. This caused a rift between the two men, and strained relations between Locke and Robert Stephenson. Up to this point, Locke had always been under George Stephenson's shadow. From then on, he would be his own man, and stand or fall by his own achievements. 
The line was opened on 4 July 1837. New methods. Locke's route avoided as far as possible major civil engineering works. The main one was the Dutton Viaduct which crosses the River Weaver and the Weaver Navigation between the villages of Dutton and Acton Bridge in Cheshire. The viaduct consists of 20 arches with spans of 20 yards. An important feature of the new railway was the use of double-headed (dumb-bell) wrought-iron rail supported on timber sleepers at 2 ft 6 in intervals. It was intended that when the rails became worn they could be turned over to use the other surface, but in practice it was found that the chairs into which the rails were keyed caused wear to the bottom surface so that it became uneven. However this was still an improvement on the fish-bellied, wrought-iron rails still being used by Robert Stephenson on the London and Birmingham Railway. Locke was more careful than Stephenson to get value for his employers' money. For the Penkridge Viaduct Stephenson had obtained a tender of £26,000. After Locke took over, he gave the potential contractor better information and agreed a price of only £6,000. Locke also tried to avoid tunnels because in those days tunnels often took longer and cost more than planned. The Stephensons regarded 1 in 330 as the maximum slope that an engine could manage and Robert Stephenson achieved this on the London and Birmingham Railway by using seven tunnels which added both cost and delay. Locke avoided tunnels almost completely on the Grand Junction but exceeded the slope limit for six miles south of Crewe. Proof of Locke's ability to estimate costs accurately is given by the fact that the construction of the Grand Junction line cost £18,846 per mile as against Locke's estimate of £17,000. This is amazingly accurate compared with the estimated costs for the London and Birmingham Railway (Robert Stephenson) and the Great Western Railway (Brunel). 
Locke also divided the project into a few large sections rather than many small ones. This allowed him to work closely with his contractors to develop the best methods, overcome problems and personally gain practical experience of the building process and of the contractors themselves. He used the contractors who worked well with him, especially Thomas Brassey and William Mackenzie, on many other projects. Everyone gained from this cooperative approach whereas Brunel's more adversarial approach eventually made it hard for him to get anyone to work for him. Marriage. In 1834 Locke married Phoebe McCreery, with whom he adopted a child. He was elected to the Royal Society in 1838. Lancaster and Carlisle Railway. A significant difference in philosophy between George Stephenson and Joseph Locke and the surveying methods they employed was more than a mere difference of opinion. Stephenson had started his career at a time when locomotives had little power to overcome excessive gradients. Both George and Robert Stephenson were prepared to go to great lengths to avoid steep gradients that would tax the locomotives of the day, even if this meant choosing a circuitous path that added on extra miles to the line of the route. Locke had more confidence in the ability of modern locomotives to climb these gradients. An example of this was the Lancaster and Carlisle Railway, which had to cope with the barrier of the Lake District mountains. In 1839 Stephenson proposed a circuitous route that avoided the Lake District altogether by going all the way round Morecambe Bay and West Cumberland, claiming: 'This is the only practicable line from Liverpool to Carlisle. The making of a railway across Shap Fell is out of the question.' The directors rejected his route and chose the one proposed by Joseph Locke, one that used steep gradients and passed over Shap Fell. The line was completed by Locke and was a success. 
Locke's reasoned that by avoiding long routes and tunnelling, the line could be finished more quickly, with less capital costs, and could start earning revenue sooner. This became known as the 'up and over' school of engineering (referred to by Rolt as 'Up and Down,' or Rollercoaster). Locke took a similar approach in planning the Caledonian Railway, from Carlisle to Glasgow. In both railways he introduced gradients of 1 in 75, which severely taxed fully laden locomotives, for even as more powerful locomotives were introduced, the trains that they pulled became heavier. It may therefore be argued that Locke, although his philosophy carried the day, was not entirely correct in his reasoning. Even today, Shap Fell is a severe test of any locomotive. Manchester and Sheffield Railway. Locke was subsequently appointed to build a railway line from Manchester to Sheffield, replacing Charles Vignoles as chief engineer, after the latter had been beset by misfortunes and financial difficulties. The project included the three-mile Woodhead Tunnel, and the line opened, after many delays, on 23 December 1845. The building of the line required over a thousand navvies and cost the lives of thirty-two of them, seriously injuring 140 others. The Woodhead Tunnel was such a difficult undertaking that George Stephenson claimed that it could not be done, declaring that he would eat the first locomotive that got through the tunnel. Subsequent commissions. In the north, Locke also designed the Lancaster and Preston Junction Railway; the Glasgow, Paisley and Greenock Railway; and the Caledonian Railway from Carlisle to Glasgow and Edinburgh. 
In the south, he worked on the London and Southampton Railway, later called the London and South Western Railway, designing, among other structures, Nine Elms to Waterloo Viaduct, Richmond Railway Bridge (1848, since replaced), and Barnes Bridge (1849), both across the River Thames, tunnels at Micheldever, and the 12-arch Quay Street viaduct and the 16-arch Cams Hill viaduct, both in Fareham (1848). He was actively involved in planning and building many railways in Europe (assisted by John Milroy), including the Le Havre, Rouen, Paris rail link, the Barcelona to Mataró line and the Dutch Rhenish Railway. He was present in Paris when the Versailles train crash occurred in 1842, and produced a statement concerning the facts for General Charles Pasley of the Railway Inspectorate. He also experienced a catastrophic failure of one of his viaducts built on the new Paris-Le Havre link. . The viaduct was of stone and brick at Barentin near Rouen, and was the longest and highest on the line. It was 108 feet high, and consisted of 27 arches, each 50 feet wide, with a total length of over 1600 feet. A boy hauling ballast for the line up an adjoining hillside early that morning (about 6.00 am) saw one arch (the fifth on the Rouen side) collapse, and the rest followed suit. Fortunately, no one was killed, although several workmen were injured in a mill below the structure. Locke attributed the catastrophic failure to frost action on the new lime cement, and premature off-centre loading of the viaduct with ballast. It was rebuilt at Thomas Brassey's cost, and survives to the present. Having pioneered many new lines in France, Locke also helped establish the first locomotive works in the country. Distinctive features of Locke's railway works were economy, the use of masonry bridges wherever possible and the absence of tunnels. An illustration of this is that there is no tunnel between Birmingham and Glasgow. Relationship with Robert Stephenson. 
Locke and Robert Stephenson had been good friends at the beginning of their careers, but their friendship had been marred by Locke's falling out with Robert's father. It seems that Robert felt loyalty to his father required that he should take his side. It is significant that after the death of George Stephenson in August 1848, the friendship of the two men was revived. When Robert Stephenson died in October 1859, Joseph Locke was a pallbearer at his funeral. Locke is reported to have referred to Robert as 'the friend of my youth, the companion of my ripening years, and a competitor in the race of life'. Locke was also on friendly terms with his other engineering rival, Isambard Kingdom Brunel. In 1845, Locke and Stephenson were both called to give evidence before two committees. In April a House of Commons Select Committee was investigating the atmospheric railway system proposed by Brunel. Brunel and Vignoles spoke in support of the system, whilst Locke and Stephenson spoke against it. The latter two were to be proved right in the long run. In August the two gave evidence before the Gauge Commissioners who were trying to arrive at a standard gauge for the whole country. Brunel spoke in favour of the 7 ft gauge he was using on the Great Western Railway. Locke and Stephenson spoke in favour of the 4 ft 8½in gauge that they had used on several lines. The latter two won the day and their gauge was adopted as the standard. Later life and legacy. Locke served as President of the Institution of Civil Engineers in between December 1857 and December 1859. He also served as Member of Parliament for Honiton in Devon from 1847 until his death. Joseph Locke died on 18 September 1860, apparently from appendicitis, whilst on a shooting holiday. He is buried in London's Kensal Green Cemetery. 
He outlived his friends/rivals Robert Stephenson and Isambard Brunel by less than a year; all three engineers died between 53 and 56 years of age, a circumstance attributed by Rolt to sheer overwork, accomplishing more in their brief lives than many achieve in a full three score and ten. Locke Park in Barnsley was dedicated to his memory by his widow Phoebe in 1862. It features a statue of Locke plus a folly, 'Locke Tower'. Locke's greatest legacy is the modern day West Coast Main Line (WCML), which was formed by the joining of the Caledonian, Lancaster &amp; Carlisle, Grand Junction railways to Robert Stephenson's London &amp; Birmingham Railway. As a result, around three-quarters of the WCML's route was planned and engineered by Locke.", "query": "accurate approach", "answers": { "text": ["correct method"], "answer_start": [2727] } } ``` ### Data Fields The data fields are the same among all subsets and splits. * id: a string feature. * title: a string feature. * context: a string feature. * question: a string feature. * answers: a dictionary feature containing: * text: a list of string features. * answer_start: a list of int32 features. ### Data Splits | name |train|validation|test| |--------------------|----:|---------:|---:| |PR-pass |20147| 3000|5000| |PR-page |20098| 3000|5000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The source passages + answers are from Wikipedia and the source of queries were produced by our hired linguistic experts from [Upwork.com](https://upwork.com). #### Who are the source language producers? We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation and more than 1000 human annotators on Mechanical Turk along with another set of 5 Upwork experts for 2-round verification. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
13 linguistic experts from [Upwork.com](https://upwork.com). ### Personal and Sensitive Information No annotator identifying details are provided. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset is a joint work between Adobe Research and Auburn University. Creators: [Thang M. Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me). [@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace. ### Licensing Information This dataset is distributed under [Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) ### Citation Information ``` @article{pham2022PiC, title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search}, author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh}, journal={arXiv preprint arXiv:2207.09068}, year={2022} } ```
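The `answers` field in the PR-pass / PR-page instances follows the SQuAD convention: each `answer_start` is a character offset into `context`, so the gold phrase can be sliced back out directly. A small sketch using a hypothetical miniature context (the helper name and the toy text are illustrative only, not taken from the dataset):

```python
def answer_spans(context: str, answers: dict) -> list:
    """Slice each gold phrase out of the context via its character offset."""
    return [context[start:start + len(text)]
            for text, start in zip(answers["text"], answers["answer_start"])]

# Hypothetical miniature record in the same schema as PR-pass / PR-page
ctx = "Apollo introduced a reliable version of Domain/OS."
ans = {"text": ["reliable version"], "answer_start": [20]}
assert answer_spans(ctx, ans) == ans["text"]  # offsets and texts must agree
```

The same check is a handy sanity test on the real data: if an offset has drifted (e.g. after editing the context), the sliced span no longer equals the annotated text.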
rajistics
null
null
null
false
2
false
rajistics/auditor_review
2022-07-19T21:48:59.000Z
null
false
07aee4679428bb3a0d132f5a3863c0b00b9804fd
[]
[ "annotations_creators:expert-generated", "language_creators:found", "language:en", "license:cc-by-nc-sa-3.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-cla...
https://huggingface.co/datasets/rajistics/auditor_review/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-nc-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - sentiment-classification paperswithcode_id: null pretty_name: Auditor_Review --- # Dataset Card for financial_phrasebank ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description Auditor review data collected by News Department - **Point of Contact:** Talked to COE for Auditing ### Dataset Summary Auditor sentiment dataset of sentences from financial news. The dataset consists of *** sentences from English language financial news categorized by sentiment. The dataset is divided by agreement rate of 5-8 annotators. 
### Supported Tasks and Leaderboards Sentiment Classification ### Languages English ## Dataset Structure ### Data Instances ``` { "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .", "label": "negative" } ``` ### Data Fields - sentence: a tokenized line from the dataset - label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral' ### Data Splits A test train split was created randomly with a 75/25 split ## Dataset Creation ### Curation Rationale The key arguments for the low utilization of statistical techniques in financial sentiment analysis have been the difficulty of implementation for practical applications and the lack of high quality training data for building such models. *** ### Source Data #### Initial Data Collection and Normalization The corpus used in this paper is made out of English news on all listed companies in **** #### Who are the source language producers? The source data was written by various auditors ### Annotations #### Annotation process This release of the financial phrase bank covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge on financial markets. Given the large number of overlapping annotations (5 to 8 annotations per sentence), there are several ways to define a majority vote based gold standard. To provide an objective comparison, we have formed 4 alternative reference datasets based on the strength of majority agreement: ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases All annotators were from the same institution and so interannotator agreement should be understood with this taken into account. 
### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information License: Creative Commons Attribution 4.0 International License (CC-BY) ### Contributions
Jerimee
null
null
null
false
1
false
Jerimee/autotrain-data-dontknowwhatImdoing
2022-10-25T10:32:19.000Z
null
false
d7d9b95354c161647de519d4e8d9a59a801570b3
[]
[ "language:en", "task_categories:text-classification" ]
https://huggingface.co/datasets/Jerimee/autotrain-data-dontknowwhatImdoing/resolve/main/README.md
--- language: - en task_categories: - text-classification --- # AutoTrain Dataset for project: dontknowwhatImdoing ## Dataset Description This dataset has been automatically processed by AutoTrain for project dontknowwhatImdoing. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Gaston", "target": 1 }, { "text": "Churchundyr", "target": 0 } ] ``` Note that, sadly, it flipped the boolean, using 1 for mundane and 0 for goblin. ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=2, names=['Goblin', 'Mundane'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 965 | | valid | 242 |
PiC
null
@article{pham2022PiC, title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search}, author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh}, journal={arXiv preprint arXiv:2207.09068}, year={2022} }
Phrase in Context is a curated benchmark for phrase understanding and semantic search, consisting of three tasks of increasing difficulty: Phrase Similarity (PS), Phrase Retrieval (PR) and Phrase Sense Disambiguation (PSD). The datasets are annotated by 13 linguistic experts on Upwork and verified by two groups: ~1000 AMT crowdworkers and another set of 5 linguistic experts. PiC benchmark is distributed under CC-BY-NC 4.0.
false
520
false
PiC/phrase_sense_disambiguation
2022-10-20T19:42:35.000Z
phrase-in-context
false
ce5f1c5b6afee554a1389e022370af42eb3b7e2e
[]
[ "annotations_creators:expert-generated", "language_creators:found", "language_creators:expert-generated", "language:en", "license:cc-by-nc-4.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-retrieval" ]
https://huggingface.co/datasets/PiC/phrase_sense_disambiguation/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - found - expert-generated language: - en license: - cc-by-nc-4.0 multilinguality: - monolingual paperswithcode_id: phrase-in-context pretty_name: 'PiC: Phrase Sense Disambiguation' size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-retrieval task_ids: [] --- # Dataset Card for "PiC: Phrase Sense Disambiguation" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/) - **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context) - **Paper:** - **Leaderboard:** - **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>) - **Size of downloaded dataset files:** 49.95 MB - **Size of the generated dataset:** 43.26 MB - **Total amount of disk used:** 93.20 MB ### Dataset Summary PSD is a phrase retrieval task like PR-pass and PR-page but more 
challenging since each example contains two short paragraphs (~11 sentences each) which trigger different senses of the same phrase. The goal is to find the instance of the target phrase **t** that is semantically similar to a paraphrase **q**. The dataset is split into 5,150/3,000/20,002 for test/dev/train, respectively. <p align="center"> <img src="https://auburn.edu/~tmp0038/PiC/psd_sample.png" alt="PSD sample" style="width:100%; border:0;"> </p> Given document D, trained Longformer-large model correctly retrieves <span style="background-color: #ef8783">massive figure</span> in the second paragraph for the query Q<sub>2</sub> "giant number" but **fails** to retrieve the answer when the query Q<sub>1</sub> is "huge model". The correct answer for Q<sub>1</sub> should be <span style="background-color: #a1fb8e">massive figure</span> in the first passage since this phrase relates to a model rather than a number. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English. ## Dataset Structure ### Data Instances **PSD** * Size of downloaded dataset files: 49.95 MB * Size of the generated dataset: 43.26 MB * Total amount of disk used: 93.20 MB An example of 'test' looks as follows. ``` { "id": "297-1", "title": "https://en.wikipedia.org/wiki?curid=2226019,https://en.wikipedia.org/wiki?curid=1191780", "context": "In addition, the results from the study did not support the idea of females preferring complexity over simplicity in song sequences. These findings differ from past examinations, like the 2008 Morisake et al. study that suggested evidence of female Bengalese finches preferring complex songs over simple ones. Evolutionary adaptations of specifically complex song production in relation to female preference in Bengalese finches continues to be a topic worth examining. Comparison with zebra finches. 
Bengalese finches and zebra finches are members of the estrildiae family and are age-limited learners when it comes to song learning and the acoustic characteristics of their songs (Peng et al., 2012). Both of these species have been widely used in song learning based animal behavior research and although they share many characteristics researchers have been able to determine stark differences between the two. Previous to research done in 1987, it was thought that song learning in Bengalese finches was similar to zebra finches but there was no research to support this idea. Both species require learning from an adult during a sensitive juvenile phase in order to learn the species specific and sexually dimorphic songs. This tutor can be the father of the young or other adult males that are present around the juvenile. Clayton aimed to directly compare the song learning ability of both of these species to determine if they have separate learning behaviors. Many students find they can not possibly complete all the work assigned them; they learn to neglect some of it. Some student groups maintain files of past examinations which only worsen this situation. The difference between the formal and real requirements produced considerable dissonance among the students and resulted in cynicism, scorn, and hypocrisy among students, and particular difficulty for minority students. No part of the university community, writes Snyder, neither the professors, the administration nor the students, desires the end result created by this process. The \"Saturday Review\" said the book \"will gain recognition as one of the more cogent 'college unrest' books\" and that it presents a \"most provocative thesis.\" The book has been cited many times in studies. References. 
[[Category:Curricula]] [[Category:Philosophy of education]] [[Category:Massachusetts Institute of Technology]] [[Category:Books about social psychology]] [[Category:Student culture]] [[Category:Books about education]] [[Category:1970 non-fiction books]]", "query": "previous exams", "answers": { "text": ["past examinations"], "answer_start": [1621] } } ``` ### Data Fields The data fields are the same among all subsets and splits. * id: a string feature. * title: a string feature. * context: a string feature. * question: a string feature. * answers: a dictionary feature containing: * text: a list of string features. * answer_start: a list of int32 features. ### Data Splits | name |train|validation|test| |--------------------|----:|---------:|---:| |PSD |20002| 3000|5000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The source passages + answers are from Wikipedia and the source of queries were produced by our hired linguistic experts from [Upwork.com](https://upwork.com). #### Who are the source language producers? We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation and more than 1000 human annotators on Mechanical Turk along with another set of 5 Upwork experts for 2-round verification. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 13 linguistic experts from [Upwork.com](https://upwork.com). ### Personal and Sensitive Information No annotator identifying details are provided. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset is a joint work between Adobe Research and Auburn University. Creators: [Thang M. 
Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me). [@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace. ### Licensing Information This dataset is distributed under [Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) ### Citation Information ``` @article{pham2022PiC, title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search}, author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh}, journal={arXiv preprint arXiv:2207.09068}, year={2022} } ```
demo-org
null
null
null
false
435
false
demo-org/auditor_review
2022-08-30T21:42:09.000Z
null
false
22e9451042c750f5dec39e243d34f4efea1f3cda
[]
[ "annotations_creators:expert-generated", "language_creators:found", "language:en", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification" ]
https://huggingface.co/datasets/demo-org/auditor_review/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - found language: - en multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - sentiment-classification paperswithcode_id: null pretty_name: Auditor_Review --- # Dataset Card for Auditor_Review ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) ## Dataset Description Auditor review data collected by News Department - **Point of Contact:** Talked to COE for Auditing, currently sue@demo.org ### Dataset Summary Auditor sentiment dataset of sentences from financial news. The dataset consists of 3500 sentences from English language financial news categorized by sentiment. The dataset is divided by the agreement rate of 5-8 annotators. 
### Supported Tasks and Leaderboards Sentiment Classification ### Languages English ## Dataset Structure ### Data Instances ``` "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .", "label": "negative" ``` ### Data Fields - sentence: a tokenized line from the dataset - label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0) Complete data code is [available here](https://www.datafiles.samhsa.gov/get-help/codebooks/what-codebook) ### Data Splits A train/test split was created randomly with a 75/25 split ## Dataset Creation ### Curation Rationale To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance. ### Source Data #### Initial Data Collection and Normalization The corpus used in this paper is made out of English news reports. #### Who are the source language producers? The source data was written by various auditors. ### Annotations #### Annotation process This release of the auditor reviews covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge of financial markets. The subset here is where inter-annotation agreement was greater than 75%. #### Who are the annotators? They were pulled from the SME list, names are held by sue@demo.org ### Personal and Sensitive Information There is no personal or sensitive information in this dataset. ## Considerations for Using the Data ### Discussion of Biases All annotators were from the same institution and so interannotator agreement should be understood with this taken into account. 
The [Dataset Measurement tool](https://huggingface.co/spaces/huggingface/data-measurements-tool) identified these bias statistics: ![Bias](https://huggingface.co/datasets/demo-org/auditor_review/resolve/main/bias_stats.png) ### Other Known Limitations [More Information Needed] ### Licensing Information License: Demo.Org Proprietary - DO NOT SHARE
tensorcat
null
null
null
false
1
false
tensorcat/Danbooru-2020-Small
2022-06-14T03:41:59.000Z
null
false
8dd56bd02deccc9252f356e164a48c6adafa77d4
[]
[]
https://huggingface.co/datasets/tensorcat/Danbooru-2020-Small/resolve/main/README.md
# Danbooru2020 Small 60GB sample dataset Aggregating the Kaggle dataset here while keeping their hosting for the raw files. Links: > https://www.kaggle.com/datasets/muoncollider/danbooru2020/download See the notebook file for a quick reference on how to extract info
isa93
null
null
null
false
1
false
isa93/mio
2022-06-14T05:33:37.000Z
null
false
d261c35f195f1d397925fffad88cf08ca8f21e9b
[]
[ "license:wtfpl" ]
https://huggingface.co/datasets/isa93/mio/resolve/main/README.md
--- license: wtfpl ---
hydramst
null
null
null
false
2
false
hydramst/face_mask_wearing
2022-06-14T09:15:57.000Z
null
false
c2cc1eb192d1cbba04bfee929b089ad96720455e
[]
[ "license:other" ]
https://huggingface.co/datasets/hydramst/face_mask_wearing/resolve/main/README.md
--- license: other --- # Description The dataset represents a huge number of images of people wearing face masks or not, to be used extensively for train/test splitting. Selected files were double-checked to avoid data collection bias using common sense. # Sources The dataset was obtained and combined from various open data sources, including the following: - https://www.kaggle.com/frabbisw/facial-age - https://www.kaggle.com/nipunarora8/age-gender-and-ethnicity-face-data-csv - https://www.kaggle.com/arashnic/faces-age-detection-dataset - https://www.kaggle.com/andrewmvd/face-mask-detection - manually obtained under-represented observations using the Google search engine # Structure The dataset is curated and structured into three age groups (under 18, 18-65 and 65+) without initial test/train selection, which is achieved programmatically to allow manipulations with the original data. <a href="https://postimages.org/" target="_blank"><img src="https://i.postimg.cc/cCyDskHz/2022-06-14-10-21-39.webp" alt="2022-06-14-10-21-39"/></a> <a href="https://postimages.org/" target="_blank"><img src="https://i.postimg.cc/zvCx3wHG/Screenshot-2022-06-14-101707.png" alt="Screenshot-2022-06-14-101707"/></a>
daokang
null
null
null
false
6
false
daokang/bisai
2022-06-14T13:21:59.000Z
null
false
4b0ce279a6d0b05dfd10aa0f765bbdf77a2fd528
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/daokang/bisai/resolve/main/README.md
--- license: afl-3.0 ---
taskydata
null
null
null
false
1
false
taskydata/tasky_or_not
2022-07-02T14:03:47.000Z
null
false
4c65051b989006eeb5bdf800a24bdfaf61e7a38b
[]
[]
https://huggingface.co/datasets/taskydata/tasky_or_not/resolve/main/README.md
[Needs More Information] # Dataset Card for tasky_or_not ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary [Needs More Information] ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
null
null
@inproceedings{wang2020chinese, title={A Large-Scale Chinese Short-Text Conversation Dataset}, author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie}, booktitle={NLPCC}, year={2020}, url={https://arxiv.org/abs/2008.03946} }
LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations. A rigorous data cleaning pipeline is designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noises such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations are filtered.
false
154
false
lccc
2022-11-03T16:07:43.000Z
lccc
false
663c420f81c529c5ab8bc7d671ab7279941ff44d
[]
[ "arxiv:2008.03946", "annotations_creators:other", "language_creators:other", "language:zh", "license:mit", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "task_categories:conversational", "task_ids:dialogue-generation" ]
https://huggingface.co/datasets/lccc/resolve/main/README.md
--- annotations_creators: - other language_creators: - other language: - zh license: - mit multilinguality: - monolingual paperswithcode_id: lccc pretty_name: 'LCCC: Large-scale Cleaned Chinese Conversation corpus' size_categories: - 10M<n<100M source_datasets: - original task_categories: - conversational task_ids: - dialogue-generation dataset_info: - config_name: large features: - name: dialog list: string splits: - name: train num_bytes: 1530827965 num_examples: 12007759 download_size: 607605643 dataset_size: 1530827965 - config_name: base features: - name: dialog list: string splits: - name: test num_bytes: 1498216 num_examples: 10000 - name: train num_bytes: 932634902 num_examples: 6820506 - name: validation num_bytes: 2922731 num_examples: 20000 download_size: 371475095 dataset_size: 937055849 --- # Dataset Card for LCCC ## Table of Contents - [Dataset Card for LCCC](#dataset-card-for-lccc) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - 
[Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/thu-coai/CDial-GPT - **Paper:** https://arxiv.org/abs/2008.03946 ### Dataset Summary LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large Chinese dialogue corpus originating from Chinese social media. A rigorous data cleaning pipeline is designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noises such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations are filtered. LCCC是一套来自于中文社交媒体的对话数据,我们设计了一套严格的数据过滤流程来确保该数据集中对话数据的质量。 这一数据过滤流程中包括一系列手工规则以及若干基于机器学习算法所构建的分类器。 我们所过滤掉的噪声包括:脏字脏词、特殊字符、颜表情、语法不通的语句、上下文不相关的对话等。 ### Supported Tasks and Leaderboards - dialogue-generation: The dataset can be used to train a model for generating dialogue responses. - response-retrieval: The dataset can be used to train a reranker model that can be used to implement a retrieval-based dialogue model. ### Languages LCCC is in Chinese LCCC中的对话是中文的 ## Dataset Structure ### Data Instances ```json { "dialog": ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"] } ``` ### Data Fields - `dialog` (list of strings): List of utterances making up a dialogue. ### Data Splits We do not provide the official split for LCCC-large. But we provide a split for LCCC-base: |train|valid|test| |---:|---:|---:| |6,820,506 | 20,000 | 10,000| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information MIT License Copyright (c) 2020 lemon234071 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### Citation Information ```bibtex @inproceedings{wang2020chinese, title={A Large-Scale Chinese Short-Text Conversation Dataset}, author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie}, booktitle={NLPCC}, year={2020}, url={https://arxiv.org/abs/2008.03946} } ``` ### Contributions Thanks to [Yinhe Zheng](https://github.com/silverriver) for adding this dataset.
rr
null
null
null
false
1
false
rr/dd
2022-06-23T04:09:14.000Z
null
false
54319f7cf5d56172bcae5acaa90b046ec6fe4ae6
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/rr/dd/resolve/main/README.md
--- license: afl-3.0 ---
nateraw
null
null
null
false
1
false
nateraw/ade20k-tiny
2022-07-08T06:58:09.000Z
null
false
7bdf563492accd06815580ffdd685adad8b8674b
[]
[ "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "language:en", "license:bsd-3-clause", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|ade20k", "task_categories:image-segmentation", "task_ids:semantic-segment...
https://huggingface.co/datasets/nateraw/ade20k-tiny/resolve/main/README.md
--- annotations_creators: - crowdsourced - expert-generated language_creators: - found language: - en license: - bsd-3-clause multilinguality: - monolingual size_categories: - n<1K source_datasets: - extended|ade20k task_categories: - image-segmentation task_ids: - semantic-segmentation pretty_name: ADE 20K Tiny --- # Dataset Card for ADE 20K Tiny This is a tiny subset of the ADE 20K dataset, which you can find [here](https://huggingface.co/datasets/scene_parse_150).
nateraw
null
null
null
false
41
false
nateraw/country211
2022-07-25T20:27:00.000Z
null
false
22b3b59656bf17b64ef0294318274afc7b5cf6a2
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|yfcc100m", "task_categories:image-classification", "task_ids:multi-class-image-classification" ]
https://huggingface.co/datasets/nateraw/country211/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual pretty_name: Country 211 size_categories: - 10K<n<100K source_datasets: - extended|yfcc100m task_categories: - image-classification task_ids: - multi-class-image-classification --- # Dataset Card for Country211 The [Country 211 Dataset](https://github.com/openai/CLIP/blob/main/data/country211.md) from OpenAI. This dataset was built by filtering the images from the YFCC100m dataset that have GPS coordinates corresponding to an ISO-3166 country code. The dataset is balanced by sampling 150 train images, 50 validation images, and 100 test images for each country.
nateraw
null
null
null
false
1
false
nateraw/rendered-sst2
2022-10-25T10:32:21.000Z
null
false
813d20cfb22b7ac76cb6a272cc8510bd85e8a66e
[]
[ "annotations_creators:machine-generated", "language_creators:crowdsourced", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|sst2", "task_categories:image-classification", "task_ids:multi-class-image-classification" ]
https://huggingface.co/datasets/nateraw/rendered-sst2/resolve/main/README.md
--- annotations_creators: - machine-generated language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual pretty_name: Rendered SST-2 size_categories: - 1K<n<10K source_datasets: - extended|sst2 task_categories: - image-classification task_ids: - multi-class-image-classification --- # Rendered SST-2 The [Rendered SST-2 Dataset](https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md) from OpenAI. Rendered SST2 is an image classification dataset used to evaluate a model's capability at optical character recognition. This dataset was generated by rendering sentences from the Stanford Sentiment Treebank v2 dataset. It contains two classes (positive and negative) and is divided into three splits: a train split containing 6920 images (3610 positive and 3310 negative), a validation split containing 872 images (444 positive and 428 negative), and a test split containing 1821 images (909 positive and 912 negative).
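The split sizes stated in this card can be sanity-checked with a few lines of arithmetic, using only the counts given above:

```python
# Per-split (positive, negative, stated total) counts from the Rendered SST-2 card.
splits = {
    "train": (3610, 3310, 6920),
    "validation": (444, 428, 872),
    "test": (909, 912, 1821),
}

for name, (pos, neg, total) in splits.items():
    # Each split's positive + negative counts should add up to its stated size.
    assert pos + neg == total, f"{name}: {pos} + {neg} != {total}"

overall = sum(total for _, _, total in splits.values())
print(overall)  # 9613 images across all three splits
```

The overall count of 9613 is also consistent with the card's `1K<n<10K` size category.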
nateraw
null
null
null
false
23
false
nateraw/kitti
2022-07-15T18:17:21.000Z
null
false
5d1705be26da650adea619ee9bc5bf45571bb653
[]
[ "annotations_creators:found", "language_creators:crowdsourced", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:1K<n<10K", "task_categories:object-detection", "task_ids:object-detection" ]
https://huggingface.co/datasets/nateraw/kitti/resolve/main/README.md
--- annotations_creators: - found language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual pretty_name: Kitti size_categories: - 1K<n<10K task_categories: - object-detection task_ids: - object-detection --- # Dataset Card for Kitti The [Kitti](http://www.cvlibs.net/datasets/kitti/eval_object.php) dataset. The Kitti object detection and object orientation estimation benchmark consists of 7481 training images and 7518 test images, comprising a total of 80,256 labeled objects.
EulerianKnight
null
null
null
false
1
false
EulerianKnight/SeaSpongeDetection
2022-06-15T07:00:17.000Z
null
false
ccdfad1bd9c9f5558f073cd7d40e59901a527d85
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/EulerianKnight/SeaSpongeDetection/resolve/main/README.md
--- license: apache-2.0 ---
rr
null
null
null
false
2
false
rr/DDR
2022-06-15T07:59:20.000Z
null
false
023111f79120d8f955d3691c5ed612b8f0a92d00
[]
[ "license:pddl" ]
https://huggingface.co/datasets/rr/DDR/resolve/main/README.md
--- license: pddl ---
sketchai
null
null
null
false
2
false
sketchai/sam-dataset
2022-07-13T13:03:40.000Z
null
false
603ca7858c8c00d7b762ff96d3aa29f1507c6954
[]
[ "annotations_creators:no-annotation", "language_creators:other", "license:lgpl-3.0", "size_categories:1M<n<10M" ]
https://huggingface.co/datasets/sketchai/sam-dataset/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - other language: [] license: - lgpl-3.0 multilinguality: [] pretty_name: Sketch Data Model Dataset size_categories: - 1M<n<10M task_categories: [] task_ids: [] --- # Dataset Card for Sketch Data Model Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/sketchai - **Repository:** https://github.com/sketchai/preprocessing - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset contains over 6M 2D CAD sketches extracted from Onshape. Sketches are stored as Python objects in the custom SAM format. SAM leverages the [Sketchgraphs](https://github.com/PrincetonLIPS/SketchGraphs) dataset for industrial needs and allows for easier transfer learning on other CAD software. 
### Supported Tasks and Leaderboards Tasks: Automatic Sketch Generation, Auto Constraint ## Dataset Structure ### Data Instances The presented npy files contain Python pickled objects and require the [flat_array](https://github.com/PrincetonLIPS/SketchGraphs/blob/master/sketchgraphs/data/flat_array.py) module of Sketchgraphs to be loaded. The normalization_output_merged.npy file contains sketch sequences represented as a list of SAM Primitives and Constraints. The sg_merged_final_*.npy files contain encoded constraint graphs of the sketches represented as a dictionary of arrays. ### Data Fields [Needs More Information] ### Data Splits |Train |Val |Test | |------|------|------| |6M |50k | 50k | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
gcaillaut
null
null
French Wikipedia dataset for Entity Linking
false
91
false
gcaillaut/frwiki_el
2022-09-28T08:52:12.000Z
null
false
b7f5ca3b82fd40f1b5eaae91c720817eb477a2cd
[]
[ "annotations_creators:crowdsourced", "language_creators:machine-generated", "language:fr", "license:wtfpl", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "task_categories:token-classification" ]
https://huggingface.co/datasets/gcaillaut/frwiki_el/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - machine-generated language: - fr license: - wtfpl multilinguality: - monolingual pretty_name: French Wikipedia dataset for Entity Linking size_categories: - 1M<n<10M source_datasets: - original task_categories: - token-classification task_ids: [] --- # Dataset Card for frwiki_good_pages_el ## Dataset Description - Repository: [frwiki_el](https://github.com/GaaH/frwiki_el) - Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr) ### Dataset Summary This dataset contains articles from the French Wikipédia. It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities. The dataset `frwiki` contains sentences of each Wikipedia pages. The dataset `entities` contains description for each Wikipedia pages. ### Languages - French ## Dataset Structure ### frwiki ``` { "name": "Title of the page", "wikidata_id": "Identifier of the related Wikidata entity. Can be null.", "wikipedia_id": "Identifier of the Wikipedia page", "wikipedia_url": "URL to the Wikipedia page", "wikidata_url": "URL to the Wikidata page. Can be null.", "sentences" : [ { "text": "text of the current sentence", "ner": ["list", "of", "ner", "labels"], "mention_mappings": [ (start_of_first_mention, end_of_first_mention), (start_of_second_mention, end_of_second_mention) ], "el_wikidata_id": ["wikidata id of first mention", "wikidata id of second mention"], "el_wikipedia_id": [wikipedia id of first mention, wikipedia id of second mention], "el_wikipedia_title": ["wikipedia title of first mention", "wikipedia title of second mention"] } ] "words": ["words", "in", "the", "sentence"], "ner": ["ner", "labels", "of", "each", "words"], "el": ["el", "labels", "of", "each", "words"] } ``` ### entities ``` { "name": "Title of the page", "wikidata_id": "Identifier of the related Wikidata entity. 
Can be null.", "wikipedia_id": "Identifier of the Wikipedia page", "wikipedia_url": "URL to the Wikipedia page", "wikidata_url": "URL to the Wikidata page. Can be null.", "description": "Description of the entity" } ```
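A minimal sketch of how the `mention_mappings` offsets from the schema above might be consumed. It assumes (this is an assumption, the card does not state it) that each pair is a character span into the sentence `text`; the sentence record itself is invented for illustration:

```python
# Hypothetical sentence record following the frwiki_el schema; the values
# are invented for illustration, not taken from the actual dataset.
sentence = {
    "text": "Paris est la capitale de la France.",
    "mention_mappings": [(0, 5), (28, 34)],  # assumed character offsets
    "el_wikipedia_title": ["Paris", "France"],
}

def extract_mentions(sent):
    """Pair each mention span with its linked Wikipedia title."""
    return [
        (sent["text"][start:end], title)
        for (start, end), title in zip(sent["mention_mappings"], sent["el_wikipedia_title"])
    ]

print(extract_mentions(sentence))
# [('Paris', 'Paris'), ('France', 'France')]
```

The same zip pattern extends to `el_wikidata_id` and `el_wikipedia_id`, which the schema lists as parallel arrays.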
ksbk
null
null
null
false
1
false
ksbk/gg
2022-06-15T09:39:15.000Z
null
false
ae6988dd116193043f09ac8afb7e768e23cb53fd
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/ksbk/gg/resolve/main/README.md
--- license: afl-3.0 ---
pszemraj
null
null
null
false
6
false
pszemraj/multi_fc
2022-06-16T11:57:52.000Z
null
false
5d39a097127b8a6c8342cfc602967ce396478678
[]
[ "arxiv:1909.03242", "license:other", "tags:automatic claim verification", "tags:claims" ]
https://huggingface.co/datasets/pszemraj/multi_fc/resolve/main/README.md
--- license: other tags: - automatic claim verification - claims --- # multiFC - a dataset for the task of **automatic claim verification** - License is currently unknown, please refer to the original paper/[dataset site](http://www.copenlu.com/publication/2019_emnlp_augenstein/): - https://arxiv.org/abs/1909.03242 ## Dataset contents - **IMPORTANT:** the `label` column in the `test` set has dummy values as these were not provided (see original readme section for explanation) ``` DatasetDict({ train: Dataset({ features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'], num_rows: 27871 }) test: Dataset({ features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'], num_rows: 3487 }) validation: Dataset({ features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'], num_rows: 3484 }) }) ``` ## Paper Abstract / Citation > We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification. It is collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists. We present an in-depth analysis of the dataset, highlighting characteristics and challenges. Further, we present results for automatic veracity prediction, both with established baselines and with a novel method for joint ranking of evidence pages and predicting veracity that outperforms all baselines. Significant performance increases are achieved by encoding evidence, and by modelling metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that this is a challenging testbed for claim veracity prediction. 
``` @inproceedings{conf/emnlp2019/Augenstein, added-at = {2019-10-27T00:00:00.000+0200}, author = {Augenstein, Isabelle and Lioma, Christina and Wang, Dongsheng and Chaves Lima, Lucas and Hansen, Casper and Hansen, Christian and Grue Simonsen, Jakob}, booktitle = {EMNLP}, crossref = {conf/emnlp/2019}, publisher = {Association for Computational Linguistics}, title = {MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims}, year = 2019 } ``` ## Original README Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims The MultiFC is the largest publicly available dataset of naturally occurring factual claims for automatic claim verification. It is collected from 26 English fact-checking websites paired with textual sources and rich metadata and labeled for veracity by human expert journalists. ###### TRAIN and DEV ####### The train and dev files are tab-separated and contain the following metadata: claimID, claim, label, claimURL, reason, categories, speaker, checker, tags, article title, publish date, climate, entities Fields that could not be crawled were set as "None." Please refer to Table 11 of our paper to see the summary statistics. ###### TEST ####### The test file follows the same structure. However, we have removed the label. Thus, it only presents 12 metadata fields: claimID, claim, claimURL, reason, categories, speaker, checker, tags, article title, publish date, climate, entities Fields that could not be crawled were set as "None." Please refer to Table 11 of our paper to see the summary statistics. ###### Snippets ###### The text of each claim is submitted verbatim as a query to the Google Search API (without quotes). In the folder snippet, we provide the top 10 snippets retrieved. In some cases, fewer snippets are provided since we have excluded the claimURL from the snippets. Each file in the snippets folder is named after the claimID of the claim submitted as a query. 
Each snippets file is tab-separated and contains the following metadata: rank_position, title, snippet, snippet_url. For more information, please refer to our paper. References: Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims. In EMNLP. Association for Computational Linguistics. https://copenlu.github.io/publication/2019_emnlp_augenstein/
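A sketch of reading one snippets file, assuming only the four tab-separated fields listed above (`rank_position`, `title`, `snippet`, `snippet_url`); the sample row is invented for illustration:

```python
import csv
import io

FIELDS = ["rank_position", "title", "snippet", "snippet_url"]

def parse_snippets(tsv_text):
    """Parse a tab-separated snippets file into a list of dicts."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [dict(zip(FIELDS, row)) for row in reader]

# Invented example row (not from the real dataset):
sample = "1\tFact check: example claim\tA short snippet...\thttps://example.com/page"
rows = parse_snippets(sample)
print(rows[0]["title"])  # Fact check: example claim
```

In the real corpus each such file would be read from the snippets folder, named after the claimID it belongs to.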
NbAiLab
null
@inproceedings{, title={}, author={}, booktitle={}, year={2022}, url={https://arxiv.org/abs/} }
This database was created from the LIA set of language recordings
false
6
false
NbAiLab/LIA_speech
2022-06-17T09:20:13.000Z
null
false
81f764148a03ebb15c83e585bdeb7882e745b597
[]
[]
https://huggingface.co/datasets/NbAiLab/LIA_speech/resolve/main/README.md
themindorchestra
null
null
null
false
1
false
themindorchestra/SoundHealing
2022-06-15T13:01:21.000Z
null
false
7b79121ac9e503a2a396851c1e670387ec6ed10a
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/themindorchestra/SoundHealing/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 ---
codeparrot
null
@article{hendrycksapps2021, title={Measuring Coding Challenge Competence With APPS}, author={Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} }
APPS is a benchmark for Python code generation. It includes 10,000 problems, which range from simple one-line solutions to substantial algorithmic challenges. For more details, please refer to this paper: https://arxiv.org/pdf/2105.09938.pdf.
false
2,049
false
codeparrot/apps
2022-10-20T15:00:15.000Z
null
false
21e74ddf8de1a21436da12e3e653065c5213e9d1
[]
[ "arxiv:2105.09938", "arxiv:2203.07814", "language_creators:crowdsourced", "language_creators:expert-generated", "language:code", "license:mit", "multilinguality:monolingual", "size_categories:unknown", "task_categories:text-generation", "task_ids:language-modeling" ]
https://huggingface.co/datasets/codeparrot/apps/resolve/main/README.md
--- annotations_creators: [] language_creators: - crowdsourced - expert-generated language: ["code"] license: - mit multilinguality: - monolingual pretty_name: APPS size_categories: - unknown source_datasets: [] task_categories: - text-generation task_ids: - language-modeling --- # APPS Dataset ## Dataset Description [APPS](https://arxiv.org/abs/2105.09938) is a benchmark for code generation with 10000 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications. You can also find **APPS metric** in the hub here [codeparrot/apps_metric](https://huggingface.co/spaces/codeparrot/apps_metric). ## Languages The dataset contains questions in English and code solutions in Python. ## Dataset Structure ```python from datasets import load_dataset load_dataset("codeparrot/apps") DatasetDict({ train: Dataset({ features: ['problem_id', 'question', 'solutions', 'input_output', 'difficulty', 'url', 'starter_code'], num_rows: 5000 }) test: Dataset({ features: ['problem_id', 'question', 'solutions', 'input_output', 'difficulty', 'url', 'starter_code'], num_rows: 5000 }) }) ``` ### How to use it You can load and iterate through the dataset with the following two lines of code for the train split: ```python from datasets import load_dataset import json ds = load_dataset("codeparrot/apps", split="train") sample = next(iter(ds)) # non-empty solutions and input_output features can be parsed from text format this way: sample["solutions"] = json.loads(sample["solutions"]) sample["input_output"] = json.loads(sample["input_output"]) print(sample) #OUTPUT: { 'problem_id': 0, 'question': 'Polycarp has $n$ different binary words. A word called binary if it contains only characters \'0\' and \'1\'. 
For example...', 'solutions': ["for _ in range(int(input())):\n n = int(input())\n mass = []\n zo = 0\n oz = 0\n zz = 0\n oo = 0\n...",...], 'input_output': {'inputs': ['4\n4\n0001\n1000\n0011\n0111\n3\n010\n101\n0\n2\n00000\n00001\n4\n01\n001\n0001\n00001\n'], 'outputs': ['1\n3 \n-1\n0\n\n2\n1 2 \n']}, 'difficulty': 'interview', 'url': 'https://codeforces.com/problemset/problem/1259/D', 'starter_code': ''} } ``` Each sample consists of a programming problem formulation in English, some ground truth Python solutions, test cases that are defined by their inputs and outputs and function name if provided, as well as some metadata regarding the difficulty level of the problem and its source. If a sample has a non-empty `input_output` feature, you can read it as a dictionary with keys `inputs` and `outputs` and `fn_name` if it exists, and similarly you can parse the solutions into a list of solutions as shown in the code above. You can also filter the dataset by difficulty level: Introductory, Interview and Competition. Just pass the desired difficulties as a list. E.g. if you want the most challenging problems, you need to select the competition level: ```python ds = load_dataset("codeparrot/apps", split="train", difficulties=["competition"]) print(next(iter(ds))["question"]) #OUTPUT: """\ Codefortia is a small island country located somewhere in the West Pacific. It consists of $n$ settlements connected by ... For each settlement $p = 1, 2, \dots, n$, can you tell what is the minimum time required to travel between the king's residence and the parliament house (located in settlement $p$) after some roads are abandoned? -----Input----- The first line of the input contains four integers $n$, $m$, $a$ and $b$ ... -----Output----- Output a single line containing $n$ integers ... -----Examples----- Input 5 5 20 25 1 2 25 ... Output 0 25 60 40 20 ... 
``` ### Data Fields |Field|Type|Description| |---|---|---| |problem_id|int|problem id| |question|string|problem description| |solutions|string|some python solutions| |input_output|string|Json string with "inputs" and "outputs" of the test cases, might also include "fn_name" the name of the function| |difficulty|string|difficulty level of the problem| |url|string|url of the source of the problem| |starter_code|string|starter code to include in prompts| Note that only a few samples have `fn_name` and `starter_code` specified. ### Data Splits The dataset contains train and test splits with 5000 samples each. ### Dataset Statistics * 10000 coding problems * 131777 test cases * all problems have at least one test case except 195 samples in the train split * for the test split, the average number of test cases is 21.2 * the average length of a problem is 293.2 words * all files have ground-truth solutions except 1235 samples in the test split ## Dataset Creation To create the APPS dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Codewars, AtCoder, Kattis, and Codeforces. For more details please refer to the original [paper](https://arxiv.org/pdf/2105.09938.pdf). ## Considerations for Using the Data In [AlphaCode](https://arxiv.org/pdf/2203.07814v1.pdf) the authors found that this dataset can generate many false positives during evaluation, where incorrect submissions are marked as correct due to lack of test coverage. ## Citation Information ``` @article{hendrycksapps2021, title={Measuring Coding Challenge Competence With APPS}, author={Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} } ```
trustwallet
null
null
null
false
1
false
trustwallet/33
2022-06-15T17:31:41.000Z
null
false
83e07480c44954d638b087ecd1f6af7934ba9d68
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/trustwallet/33/resolve/main/README.md
--- license: apache-2.0 --- Trust**wallet customer service Support Number +1-818*751*8351
justpyschitry
null
null
null
false
1
false
justpyschitry/autotrain-data-Psychiatry_Article_Identifier
2022-06-15T21:34:39.000Z
null
false
1413ed90879c1ebb1b7016388a6ef43e7765d295
[]
[ "task_categories:text-classification" ]
https://huggingface.co/datasets/justpyschitry/autotrain-data-Psychiatry_Article_Identifier/resolve/main/README.md
--- task_categories: - text-classification --- # AutoTrain Dataset for project: Psychiatry_Article_Identifier ## Dataset Description This dataset has been automatically processed by AutoTrain for project Psychiatry_Article_Identifier. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "diffuse actinic keratinocyte dysplasia", "target": 15 }, { "text": "cholesterol atheroembolism", "target": 8 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=20, names=['Certain infectious or parasitic diseases', 'Developmental anaomalies', 'Diseases of the blood or blood forming organs', 'Diseases of the genitourinary system', 'Mental behavioural or neurodevelopmental disorders', 'Neoplasms', 'certain conditions originating in the perinatal period', 'conditions related to sexual health', 'diseases of the circulatroy system', 'diseases of the digestive system', 'diseases of the ear or mastoid process', 'diseases of the immune system', 'diseases of the musculoskeletal system or connective tissue', 'diseases of the nervous system', 'diseases of the respiratory system', 'diseases of the skin', 'diseases of the visual system', 'endocrine nutritional or metabolic diseases', 'pregnanacy childbirth or the puerperium', 'sleep-wake disorders'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 9828 | | valid | 2468 |
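The integer `target` can be mapped back to its class name by indexing into the ClassLabel `names` list, as this small sketch shows using the two sample rows above:

```python
# Class names copied verbatim from the ClassLabel definition above
# (including the typos present in the original label names).
names = [
    "Certain infectious or parasitic diseases",
    "Developmental anaomalies",
    "Diseases of the blood or blood forming organs",
    "Diseases of the genitourinary system",
    "Mental behavioural or neurodevelopmental disorders",
    "Neoplasms",
    "certain conditions originating in the perinatal period",
    "conditions related to sexual health",
    "diseases of the circulatroy system",
    "diseases of the digestive system",
    "diseases of the ear or mastoid process",
    "diseases of the immune system",
    "diseases of the musculoskeletal system or connective tissue",
    "diseases of the nervous system",
    "diseases of the respiratory system",
    "diseases of the skin",
    "diseases of the visual system",
    "endocrine nutritional or metabolic diseases",
    "pregnanacy childbirth or the puerperium",
    "sleep-wake disorders",
]

samples = [
    {"text": "diffuse actinic keratinocyte dysplasia", "target": 15},
    {"text": "cholesterol atheroembolism", "target": 8},
]

for s in samples:
    print(s["text"], "->", names[s["target"]])
# diffuse actinic keratinocyte dysplasia -> diseases of the skin
# cholesterol atheroembolism -> diseases of the circulatroy system
```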
omarxadel
null
null
null
false
18
false
omarxadel/MaWPS-ar
2022-07-12T15:31:07.000Z
null
false
ca5836dcea910a720fe456e3d3c9b68206507eeb
[]
[ "annotations_creators:crowdsourced", "language:en", "language:ar", "language_creators:found", "license:mit", "multilinguality:multilingual", "size_categories:1K<n<10K", "task_categories:text2text-generation", "task_ids:explanation-generation" ]
https://huggingface.co/datasets/omarxadel/MaWPS-ar/resolve/main/README.md
--- annotations_creators: - crowdsourced language: - en - ar language_creators: - found license: - mit multilinguality: - multilingual pretty_name: MAWPS_ar size_categories: - 1K<n<10K source_datasets: [] task_categories: - text2text-generation task_ids: - explanation-generation --- # Dataset Card for MAWPS_ar ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary MAWPS: A Math Word Problem Repository ### Supported Tasks Math Word Problem Solving ### Languages Supports Arabic and English ## Dataset Structure ### Data Fields - `text_en`: a `string` feature. - `text_ar`: a `string` feature. - `eqn`: a `string` feature. 
### Data Splits |train|validation|test| |----:|---------:|---:| | 3636| 1040| 520| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [Rik Koncel-Kedziorski**, Subhro Roy**, Aida Amini, Nate Kushman and Hannaneh Hajishirzi.](https://aclanthology.org/N16-1136.pdf) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Contributions Special thanks to Associate Professor Marwan Torki and all my colleagues in CC491N (NLP) class for helping me translate this dataset.
sounakray1997
null
null
null
false
1
false
sounakray1997/CoNLLU_WikiNEuRal
2022-06-15T23:16:23.000Z
null
false
6bfc581b1b87cbd2ff08a7369f34a10fbed01c4b
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/sounakray1997/CoNLLU_WikiNEuRal/resolve/main/README.md
--- license: apache-2.0 ---
nateraw
null
@misc{https://doi.org/10.48550/arxiv.1705.06950, doi = {10.48550/ARXIV.1705.06950}, url = {https://arxiv.org/abs/1705.06950}, author = {Kay, Will and Carreira, Joao and Simonyan, Karen and Zhang, Brian and Hillier, Chloe and Vijayanarasimhan, Sudheendra and Viola, Fabio and Green, Tim and Back, Trevor and Natsev, Paul and Suleyman, Mustafa and Zisserman, Andrew}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {The Kinetics Human Action Video Dataset}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} }
null
false
4
false
nateraw/kinetics
2022-06-16T02:30:12.000Z
null
false
2f2f43e7e54c6dcebe550767c8a284c26186e030
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/nateraw/kinetics/resolve/main/README.md
--- license: cc-by-4.0 ---
HALLA
null
null
null
false
1
false
HALLA/Rhubarbify
2022-06-16T03:25:29.000Z
null
false
3906031fe2e1a2d7766f0847014d3eb6229c6351
[]
[ "license:other" ]
https://huggingface.co/datasets/HALLA/Rhubarbify/resolve/main/README.md
--- license: other ---
nehruperumalla
null
@article{park2019cord, title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing}, author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk}, booktitle={Document Intelligence Workshop at Neural Information Processing Systems}, year={2019} }
https://huggingface.co/datasets/katanaml/cord
false
1
false
nehruperumalla/forms
2022-06-16T06:38:45.000Z
null
false
6adb717a2503f1d49af178c0497c2529f1a8e68f
[]
[]
https://huggingface.co/datasets/nehruperumalla/forms/resolve/main/README.md
# CORD: A Consolidated Receipt Dataset for Post-OCR Parsing The CORD dataset is cloned from the [clovaai](https://github.com/clovaai/cord) GitHub repo - Box coordinates are normalized against image width/height - Labels with very few occurrences are replaced with O: ``` replacing_labels = ['menu.etc', 'menu.itemsubtotal', 'menu.sub_etc', 'menu.sub_unitprice', 'menu.vatyn', 'void_menu.nm', 'void_menu.price', 'sub_total.othersvc_price'] ``` For more info, check [Sparrow](https://github.com/katanaml/sparrow) ## Citation ### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing ``` @article{park2019cord, title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing}, author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk}, booktitle={Document Intelligence Workshop at Neural Information Processing Systems}, year={2019} } ``` ### Post-OCR parsing: building simple and robust parser via BIO tagging ``` @article{hwang2019post, title={Post-OCR parsing: building simple and robust parser via BIO tagging}, author={Hwang, Wonseok and Kim, Seonghyeon and Yim, Jinyeong and Seo, Minjoon and Park, Seunghyun and Park, Sungrae and Lee, Junyeop and Lee, Bado and Lee, Hwalsuk}, booktitle={Document Intelligence Workshop at Neural Information Processing Systems}, year={2019} } ```
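The card notes that box coordinates are normalized against image width/height. A minimal sketch of that step follows; the helper name and the `[x1, y1, x2, y2]` box layout are assumptions for illustration, not the dataset's actual preprocessing code:

```python
def normalize_box(box, width, height):
    """Scale pixel coordinates [x1, y1, x2, y2] into the 0-1 range."""
    x1, y1, x2, y2 = box
    return [x1 / width, y1 / height, x2 / width, y2 / height]

# Invented example: a box on a 100x200-pixel receipt image.
print(normalize_box([10, 40, 50, 120], width=100, height=200))
# [0.1, 0.2, 0.5, 0.6]
```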
scikit-learn
null
null
null
false
6
false
scikit-learn/imdb
2022-06-16T09:11:24.000Z
null
false
f27efa2241b715868b4e2c6a2ead19ce067b3b48
[]
[ "license:other" ]
https://huggingface.co/datasets/scikit-learn/imdb/resolve/main/README.md
--- license: other --- This is the sentiment analysis dataset based on IMDB reviews initially released by Stanford University. ``` This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. Raw text and already processed bag of words formats are provided. See the README file contained in the release for more details. ``` [Here](http://ai.stanford.edu/~amaas/data/sentiment/) is the original dataset page. ``` @InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} } ```
codeparrot
null
null
null
false
18
false
codeparrot/codeparrot-valid-near-deduplication
2022-06-21T19:06:58.000Z
null
false
428766bc07a9e8699b7782e8557b7e07d32923f3
[]
[]
https://huggingface.co/datasets/codeparrot/codeparrot-valid-near-deduplication/resolve/main/README.md
# CodeParrot 🦜 Dataset after near deduplication (validation) ## Dataset Description A dataset of Python files from GitHub. We performed near deduplication of this dataset split [codeparrot-clean-valid](https://huggingface.co/datasets/codeparrot/codeparrot-clean-valid) from [codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean#codeparrot-%F0%9F%A6%9C-dataset-cleaned). Exact deduplication can miss a fair amount of nearly identical files. We used MinHash with a Jaccard threshold (default=0.85) to create duplicate clusters. Then these clusters are reduced to unique files based on the exact Jaccard similarity. For more details, please refer to this [repo](https://github.com/huggingface/transformers/tree/main/examples/research_projects/codeparrot).
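The cluster-then-reduce procedure above can be illustrated with exact Jaccard similarity over token sets. The real pipeline uses MinHash to approximate this at scale; the greedy sketch below is only a small-scale illustration of the idea:

```python
def jaccard(a, b):
    """Exact Jaccard similarity between two files' whitespace-token sets."""
    sa, sb = set(a.split()), set(b.split())
    union = sa | sb
    return len(sa & sb) / len(union) if union else 1.0

def near_dedup(files, threshold=0.85):
    # Greedily keep a file only if it stays below the similarity threshold
    # against every file kept so far -- a simplification of the
    # cluster-then-reduce procedure described in the card above.
    kept = []
    for f in files:
        if all(jaccard(f, k) < threshold for k in kept):
            kept.append(f)
    return kept
```

At CodeParrot's scale, pairwise exact Jaccard is infeasible, which is why MinHash signatures with LSH bucketing are used to find candidate clusters first.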
vesteinn
null
@misc{sosialurin-pos, title = {Marking av teldutøkum tekstsavn}, author = {Zakaris Svabo Hansen, Heini Justinussen, and Mortan Ólason}, url = {http://ark.axeltra.com/index.php?type=person&lng=en&id=18}, year = {2004} }
The corpus that has been created consists of ca. 100.000 words of text from the [Faroese] newspaper Sosialurin. Each word is tagged with grammatical information (word class, gender, number etc.)
false
143
false
vesteinn/sosialurin-faroese-pos
2022-06-16T15:49:46.000Z
null
false
d530faa01d7d9be759260f752810691437473a25
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/vesteinn/sosialurin-faroese-pos/resolve/main/README.md
--- license: cc-by-4.0 ---
codeparrot
null
null
null
false
5
false
codeparrot/codeparrot-train-near-deduplication
2022-06-21T19:07:13.000Z
null
false
0809fc7058613aa685e5b99a7987eee7ec72171f
[]
[]
https://huggingface.co/datasets/codeparrot/codeparrot-train-near-deduplication/resolve/main/README.md
# CodeParrot 🦜 Dataset after near deduplication (train) ## Dataset Description A dataset of Python files from GitHub. We performed near deduplication of this dataset split [codeparrot-clean-train](https://huggingface.co/datasets/codeparrot/codeparrot-clean-train) from [codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean#codeparrot-%F0%9F%A6%9C-dataset-cleaned). Exact deduplication can miss a fair amount of nearly identical files. We used MinHash with a Jaccard threshold (default=0.85) to create duplicate clusters. Then these clusters are reduced to unique files based on the exact Jaccard similarity. For more details, please refer to this [repo](https://github.com/huggingface/transformers/tree/main/examples/research_projects/codeparrot).
khalidalt
null
@article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year = {2020}, journal = {Transactions of the Association for Computational Linguistics} }
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).
false
48
false
khalidalt/tydiqa-primary
2022-07-28T21:56:04.000Z
tydi-qa
false
b332d9a0f9ffbd9f6608dd1ea2d90a18b827f78a
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "language:ar", "language:bn", "language:fi", "language:id", "language:ja", "language:sw", "language:ko", "language:ru", "language:te", "language:th", "license:apache-2.0", "multilinguality:multilingual"...
https://huggingface.co/datasets/khalidalt/tydiqa-primary/resolve/main/README.md
--- pretty_name: TyDi QA annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en - ar - bn - fi - id - ja - sw - ko - ru - te - th license: - apache-2.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - extended|wikipedia task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: tydi-qa --- # Dataset Card for "tydiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3726.74 MB - **Size of the generated dataset:** 5812.92 MB - **Total amount of disk used:** 9539.67 MB ### Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### primary_task - **Size of downloaded dataset files:** 1863.37 MB - **Size of the generated dataset:** 5757.59 MB - **Total amount of disk used:** 7620.96 MB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "annotations": { "minimal_answers_end_byte": [-1, -1, -1], "minimal_answers_start_byte": [-1, -1, -1], "passage_answer_candidate_index": [-1, -1, -1], "yes_no_answer": ["NONE", "NONE", "NONE"] }, "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...", "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร", "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...", "language": "thai", "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...", "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..." } ``` ### Data Fields The data fields are the same among all splits. #### primary_task - `passage_answer_candidates`: a dictionary feature containing: - `plaintext_start_byte`: a `int32` feature. - `plaintext_end_byte`: a `int32` feature. - `question_text`: a `string` feature. - `document_title`: a `string` feature. - `language`: a `string` feature. - `annotations`: a dictionary feature containing: - `passage_answer_candidate_index`: a `int32` feature. - `minimal_answers_start_byte`: a `int32` feature. - `minimal_answers_end_byte`: a `int32` feature. - `yes_no_answer`: a `string` feature. - `document_plaintext`: a `string` feature. - `document_url`: a `string` feature. 
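One point worth noting when consuming the fields above: the `*_byte` fields (`plaintext_start_byte`, `minimal_answers_start_byte`, etc.) are offsets into the UTF-8 encoding of `document_plaintext`, not character indices, so spans should be sliced on bytes and decoded afterwards. A minimal sketch:

```python
def passage_text(document_plaintext, start_byte, end_byte):
    # Offsets are byte offsets into the UTF-8 encoding; slicing the Python
    # string by character index would corrupt spans in non-ASCII documents
    # (e.g. the Thai example above).
    raw = document_plaintext.encode("utf-8")
    return raw[start_byte:end_byte].decode("utf-8")
```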
### Data Splits | name | train | validation | | -------------- | -----: | ---------: | | primary_task | 166916 | 18670 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year = {2020}, journal = {Transactions of the Association for Computational Linguistics} } ``` ``` @inproceedings{ruder-etal-2021-xtreme, title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation", author = "Ruder, Sebastian and Constant, Noah and Botha, Jan and Siddhant, Aditya and Firat, Orhan and Fu, Jinlan and Liu, Pengfei and Hu, Junjie and Garrette, Dan and Neubig, Graham and Johnson, Melvin", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.802", doi = "10.18653/v1/2021.emnlp-main.802", pages = "10215--10245", } ```
nightingal3
null
null
null
false
24
false
nightingal3/fig-qa
2022-07-01T17:04:06.000Z
null
false
443f8dce657acf9581d0cf719e5f470d46612b84
[]
[ "arxiv:2204.12632", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:mit", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:multiple-choice", "task_ids:...
https://huggingface.co/datasets/nightingal3/fig-qa/resolve/main/README.md
--- annotations_creators: - expert-generated - crowdsourced language_creators: - crowdsourced language: - en license: - mit multilinguality: - monolingual pretty_name: Fig-QA size_categories: - 10K<n<100K source_datasets: - original task_categories: - multiple-choice task_ids: - multiple-choice-qa --- # Dataset Card for Fig-QA ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Splits](#data-splits) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/nightingal3/Fig-QA - **Paper:** https://arxiv.org/abs/2204.12632 - **Leaderboard:** https://explainaboard.inspiredco.ai/leaderboards?dataset=fig_qa - **Point of Contact:** emmy@cmu.edu ### Dataset Summary This is the dataset for the paper [Testing the Ability of Language Models to Interpret Figurative Language](https://arxiv.org/abs/2204.12632). Fig-QA consists of 10256 examples of human-written creative metaphors that are paired as a Winograd schema. It can be used to evaluate the commonsense reasoning of models. The metaphors themselves can also be used as training data for other tasks, such as metaphor detection or generation. ### Supported Tasks and Leaderboards You can evaluate your models on the test set by submitting to the [leaderboard](https://explainaboard.inspiredco.ai/leaderboards?dataset=fig_qa) on Explainaboard. Click on "New" and select `qa-multiple-choice` for the task field. Select `accuracy` for the metric. 
You should upload results in the form of a system output file in JSON or JSONL format. ### Languages English only currently ### Data Splits Train-{S, M(no suffix), XL}: different training set sizes Dev Test (labels not provided for test set) ## Considerations for Using the Data ### Discussion of Biases These metaphors are human-generated and may contain insults or other explicit content. Authors of the paper manually removed offensive content, but users should keep in mind that some potentially offensive content may remain in the dataset. ## Additional Information ### Licensing Information MIT License ### Citation Information If you found the dataset useful, please cite this paper: ``` @misc{https://doi.org/10.48550/arxiv.2204.12632, doi = {10.48550/ARXIV.2204.12632}, url = {https://arxiv.org/abs/2204.12632}, author = {Liu, Emmy and Cui, Chen and Zheng, Kenneth and Neubig, Graham}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Testing the Ability of Language Models to Interpret Figurative Language}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
nateraw
null
null
null
false
2
false
nateraw/rice-image-dataset
2022-07-08T06:36:39.000Z
null
false
f0c06c4a962c9e8e0a4f8a0ca9d6494d7d9d7e81
[]
[ "license:cc0-1.0", "kaggle_id:muratkokludataset/rice-image-dataset" ]
https://huggingface.co/datasets/nateraw/rice-image-dataset/resolve/main/README.md
--- license: - cc0-1.0 kaggle_id: muratkokludataset/rice-image-dataset --- # Dataset Card for Rice Image Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/muratkokludataset/rice-image-dataset - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Rice Image Dataset DATASET: https://www.muratkoklu.com/datasets/ Citation Request: See the articles for more detailed information on the data. Koklu, M., Cinar, I., & Taspinar, Y. S. (2021). Classification of rice varieties with deep learning methods. Computers and Electronics in Agriculture, 187, 106285. https://doi.org/10.1016/j.compag.2021.106285 Cinar, I., & Koklu, M. (2021). Determination of Effective and Specific Physical Features of Rice Varieties by Computer Vision In Exterior Quality Inspection. Selcuk Journal of Agriculture and Food Sciences, 35(3), 229-243. 
https://doi.org/10.15316/SJAFS.2021.252 Cinar, I., & Koklu, M. (2022). Identification of Rice Varieties Using Machine Learning Algorithms. Journal of Agricultural Sciences https://doi.org/10.15832/ankutbd.862482 Cinar, I., & Koklu, M. (2019). Classification of Rice Varieties Using Artificial Intelligence Methods. International Journal of Intelligent Systems and Applications in Engineering, 7(3), 188-194. https://doi.org/10.18201/ijisae.2019355381 DATASET: https://www.muratkoklu.com/datasets/ Highlights • Arborio, Basmati, Ipsala, Jasmine and Karacadag rice varieties were used. • The dataset (1) has 75K images including 15K pieces from each rice variety. The dataset (2) has 12 morphological, 4 shape and 90 color features. • ANN, DNN and CNN models were used to classify rice varieties. • Classified with an accuracy rate of 100% through the CNN model created. • The models used achieved successful results in the classification of rice varieties. Abstract Rice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are separated from each other due to some of their features. These are usually features such as texture, shape, and color. With these features that distinguish rice varieties, it is possible to classify and evaluate the quality of seeds. In this study, Arborio, Basmati, Ipsala, Jasmine and Karacadag, which are five different varieties of rice often grown in Turkey, were used. A total of 75,000 grain images, 15,000 from each of these varieties, are included in the dataset. A second dataset with 106 features including 12 morphological, 4 shape and 90 color features obtained from these images was used. Models were created by using Artificial Neural Network (ANN) and Deep Neural Network (DNN) algorithms for the feature dataset and by using the Convolutional Neural Network (CNN) algorithm for the image dataset, and classification processes were performed. 
Statistical results of sensitivity, specificity, prediction, F1 score, accuracy, false positive rate and false negative rate were calculated using the confusion matrix values of the models and the results of each model were given in tables. Classification successes from the models were achieved as 99.87% for ANN, 99.95% for DNN and 100% for CNN. With the results, it is seen that the models used in the study in the classification of rice varieties can be applied successfully in this field. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@muratkokludataset](https://kaggle.com/muratkokludataset) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
Littencito
null
null
null
false
1
false
Littencito/A
2022-06-16T21:23:29.000Z
null
false
2be45e008aa0aa604ec6b6a13509ac5139ee839e
[]
[ "license:bsd-3-clause-clear" ]
https://huggingface.co/datasets/Littencito/A/resolve/main/README.md
--- license: bsd-3-clause-clear ---
watakandai
null
null
null
false
1
false
watakandai/LTLtraces
2022-07-16T22:50:38.000Z
null
false
8a6ac08c60a12d9c9239a89083fbd566d6de3409
[]
[]
https://huggingface.co/datasets/watakandai/LTLtraces/resolve/main/README.md
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@watakandai](https://github.com/watakandai) for adding this dataset.
nateraw
null
@misc{https://doi.org/10.48550/arxiv.1705.06950, doi = {10.48550/ARXIV.1705.06950}, url = {https://arxiv.org/abs/1705.06950}, author = {Kay, Will and Carreira, Joao and Simonyan, Karen and Zhang, Brian and Hillier, Chloe and Vijayanarasimhan, Sudheendra and Viola, Fabio and Green, Tim and Back, Trevor and Natsev, Paul and Suleyman, Mustafa and Zisserman, Andrew}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {The Kinetics Human Action Video Dataset}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} }
null
false
1
false
nateraw/kineticstest
2022-06-17T00:46:45.000Z
null
false
b99a158b2a81e0c474e042051811c558870e9d0d
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/nateraw/kineticstest/resolve/main/README.md
--- license: cc-by-4.0 ---
s3prl
null
null
null
false
1
false
s3prl/SNIPS
2022-06-25T07:05:22.000Z
null
false
12b3fec40a137551f912ff1f8a53b39d2871b094
[]
[ "license:mit" ]
https://huggingface.co/datasets/s3prl/SNIPS/resolve/main/README.md
--- license: mit ---
Rizqi
null
null
null
false
1
false
Rizqi/emotion-raw
2022-06-17T07:44:57.000Z
null
false
8d42ab677980afbcdf6ed49c3dbed438cf661891
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/Rizqi/emotion-raw/resolve/main/README.md
--- license: afl-3.0 ---
vadis
null
@misc{sv-ident, author={vadis-project}, title={SV-Ident}, year={2022}, url={https://github.com/vadis-project/sv-ident}, }
The SV-Ident corpus (version 0.3) is a collection of 4,248 expert-annotated English and German sentences from social science publications, supporting the task of multi-label text classification.
false
3
false
vadis/sv-ident
2022-11-07T20:51:06.000Z
sv-ident
false
a5994401ede160334e601cf243bb5b632f2d1e32
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "language:de", "license:mit", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-label-classification", "task_...
https://huggingface.co/datasets/vadis/sv-ident/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en - de license: - mit multilinguality: - multilingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification - semantic-similarity-classification pretty_name: SV-Ident paperswithcode_id: sv-ident --- # Dataset Card for SV-Ident ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://vadis-project.github.io/sv-ident-sdp2022/ - **Repository:** https://github.com/vadis-project/sv-ident - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** svident2022@googlegroups.com ### Dataset Summary SV-Ident comprises 4,248 sentences from social science publications in English and German. The data is the official data for the Shared Task: “Survey Variable Identification in Social Science Publications” (SV-Ident) 2022. Visit the homepage to find out more details about the shared task. 
### Supported Tasks and Leaderboards The dataset supports: - **Variable Detection**: identifying whether a sentence contains a variable mention or not. - **Variable Disambiguation**: identifying which variable from a given vocabulary is mentioned in a sentence. **NOTE**: for this task, you will need to also download the variable metadata from [here](https://bit.ly/3Nuvqdu). ### Languages The text in the dataset is in English and German, as written by researchers. The domain of the texts is scientific publications in the social sciences. ## Dataset Structure ### Data Instances ``` { "sentence": "Our point, however, is that so long as downward (favorable comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.", "is_variable": 1, "variable": ["exploredata-ZA5400_VarV66", "exploredata-ZA5400_VarV53"], "research_data": ["ZA5400"], "doc_id": "73106", "uuid": "b9fbb80f-3492-4b42-b9d5-0254cc33ac10", "lang": "en", } ``` ### Data Fields The following data fields are provided for documents: ``` `sentence`: Textual instance, which may contain a variable mention.<br /> `is_variable`: Label, whether the textual instance contains a variable mention (1) or not (0). This column can be used for Task 1 (Variable Detection).<br /> `variable`: Variables (separated by a semicolon ";") that are mentioned in the textual instance. This column can be used for Task 2 (Variable Disambiguation). Variables with the "unk" tag could not be mapped to a unique variable.<br /> `research_data`: Research data IDs (separated by a ";") that are relevant for each instance (and in general for each "doc_id").<br /> `doc_id`: ID of the source document. Each document is written in one language (either English or German).<br /> `uuid`: Unique ID of the instance in uuid4 format.<br /> `lang`: Language of the sentence.
``` The language for each document can be found in the document-language mapping file [here](https://github.com/vadis-project/sv-ident/blob/main/data/train/document_languages.json), which maps `doc_id` to a language code (`en`, `de`). The variables metadata (i.e., the vocabulary) can be downloaded from this [link](https://bit.ly/3Nuvqdu). Note that each `research_data` contains hundreds of variables (these can be understood as the corpus of documents to choose the most relevant from). If the variable has an "unk" tag, it means that the sentence contains a variable that has not been disambiguated. Such sentences could be used for Task 1 and filtered out for Task 2. The metadata file has the following format: ``` { "research_data_id_1": { "variable_id_1": VARIABLE_METADATA, ... "variable_id_n": VARIABLE_METADATA, }, ... "research_data_id_n": {...}, } ``` Each variable may contain all (or some) of the following values: ``` study_title: The title of the research data study. variable_label: The label of the variable. variable_name: The name of the variable. question_text: The question of the variable in the original language. question_text_en: The question of the variable in English. sub_question: The sub-question of the variable. item_categories: The item categories of the variable. answer_categories: The answers of the variable. topic: The topics of the variable in the original language. topic_en: The topics of the variable in English. ``` ### Data Splits | Split | Number of sentences | | ------------------- | ------------------------------------ | | Train | 3,823 | | Validation | 425 | ## Dataset Creation ### Curation Rationale The dataset was curated by the VADIS project (https://vadis-project.github.io/). The documents were annotated by two expert annotators. ### Source Data #### Initial Data Collection and Normalization The original data are available at GESIS (https://www.gesis.org/home) in an unprocessed format. #### Who are the source language producers?
[Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? The documents were annotated by two expert annotators. ### Personal and Sensitive Information The dataset does not include personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators VADIS project (https://vadis-project.github.io/) ### Licensing Information All documents originate from the Social Science Open Access Repository (SSOAR) and are licensed accordingly. The original document URLs are provided in [document_urls.json](https://github.com/vadis-project/sv-ident/blob/main/data/train/document_urls.json). For more information on licensing, please refer to the terms and conditions on the [SSOAR Grant of Licenses page](https://www.gesis.org/en/ssoar/home/information/grant-of-licences). ### Citation Information ``` @inproceedings{tsereteli-etal-2022-overview, title = "Overview of the {SV}-Ident 2022 Shared Task on Survey Variable Identification in Social Science Publications", author = "Tsereteli, Tornike and Kartal, Yavuz Selim and Ponzetto, Simone Paolo and Zielinski, Andrea and Eckert, Kai and Mayr, Philipp", booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.sdp-1.29", pages = "229--246", abstract = "In this paper, we provide an overview of the SV-Ident shared task as part of the 3rd Workshop on Scholarly Document Processing (SDP) at COLING 2022. 
In the shared task, participants were provided with a sentence and a vocabulary of variables, and asked to identify which variables, if any, are mentioned in individual sentences from scholarly documents in full text. Two teams made a total of 9 submissions to the shared task leaderboard. While none of the teams improve on the baseline systems, we still draw insights from their submissions. Furthermore, we provide a detailed evaluation. Data and baselines for our shared task are freely available at \url{https://github.com/vadis-project/sv-ident}.", } ``` ### Contributions [Needs More Information]
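The nested variable-metadata format described under Data Fields (research data ID → variable ID → metadata) can be flattened into per-variable candidate records for the disambiguation task. The sketch below assumes that structure; the file name `variables_metadata.json` is an illustrative assumption, and only the metadata fields listed above are used:

```python
import json

def load_variable_candidates(path):
    """Flatten the {research_data_id: {variable_id: metadata}} mapping
    into a list of candidate variables for Task 2 (Variable Disambiguation)."""
    with open(path, encoding="utf-8") as f:
        metadata = json.load(f)
    candidates = []
    for research_data_id, variables in metadata.items():
        for variable_id, var in variables.items():
            # Not every variable carries every field, so fall back to "".
            candidates.append({
                "research_data": research_data_id,
                "variable": variable_id,
                "label": var.get("variable_label", ""),
                "question": var.get("question_text_en") or var.get("question_text", ""),
            })
    return candidates
```

Each candidate can then be matched against a sentence's `research_data` IDs to restrict the disambiguation vocabulary to the relevant research data.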
alexfrancow
null
null
null
false
1
false
alexfrancow/wordpress
2022-06-17T09:51:09.000Z
null
false
f7bfc4fb11e924b0e8afbcdf07a4a754e9061fb3
[]
[]
https://huggingface.co/datasets/alexfrancow/wordpress/resolve/main/README.md
Bytes of **javascript** and **css** files in **wordpress** applications, for multi-class classification and identification of **wordpress** versions.
Sampson2022
null
@article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}, }
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
false
1
false
Sampson2022/demo2
2022-06-20T11:39:48.000Z
null
false
4feb502bd163324dd183833c742d770d14fb8009
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/Sampson2022/demo2/resolve/main/README.md
--- license: apache-2.0 ---
bengaliAI
null
null
null
false
5
false
bengaliAI/CommonVoiceBangla
2022-07-01T00:46:28.000Z
null
false
f86f1751d4d21def7b518d635d1f99fb98b5bc4c
[]
[ "license:cc0-1.0" ]
https://huggingface.co/datasets/bengaliAI/CommonVoiceBangla/resolve/main/README.md
--- license: cc0-1.0 ---

## How to load the Common Voice Bangla dataset directly with the `datasets` library

```python
from datasets import load_dataset

dataset = load_dataset("bengaliAI/CommonVoiceBangla", "bn", delimiter='\t')
```
BeIR
null
null
null
false
2
false
BeIR/nfcorpus-generated-queries
2022-10-23T06:12:19.000Z
beir
false
5b041f9c0409219213594593e95d185e993a7422
[]
[ "language:en", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:1M<n<10M", "size_categories:100k<n<1M", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "task_categories:text-retrieval", "task_ids:passage-retrieval", "task_ids:entity-l...
https://huggingface.co/datasets/BeIR/nfcorpus-generated-queries/resolve/main/README.md
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
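The corpus/queries/qrels layout described above maps directly onto a small loader. This is a sketch under the stated file format (JSON Lines for corpus and queries, tab-separated qrels with a header row); the file names passed in are illustrative assumptions:

```python
import csv
import json
from collections import defaultdict

def load_beir_split(corpus_path, queries_path, qrels_path):
    """Load a BEIR-style dataset into the corpus/queries/qrels dicts
    shown in the Data Instances example above."""
    corpus, queries = {}, {}
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:  # one JSON document per line
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    with open(queries_path, encoding="utf-8") as f:
        for line in f:  # one JSON query per line
            query = json.loads(line)
            queries[query["_id"]] = query["text"]
    qrels = defaultdict(dict)
    with open(qrels_path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row (query-id, corpus-id, score)
        for query_id, corpus_id, score in reader:
            qrels[query_id][corpus_id] = int(score)
    return corpus, queries, dict(qrels)
```

For example, `load_beir_split("corpus.jsonl", "queries.jsonl", "qrels/test.tsv")` would return the three dictionaries in the shape shown in the Data Instances section.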
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
BeIR
null
null
null
false
1
false
BeIR/scifact-generated-queries
2022-10-23T06:12:34.000Z
beir
false
74e348ee0157fec51315a3c10346152603714356
[]
[ "language:en", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:1M<n<10M", "size_categories:100k<n<1M", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "task_categories:text-retrieval", "task_ids:passage-retrieval", "task_ids:entity-l...
https://huggingface.co/datasets/BeIR/scifact-generated-queries/resolve/main/README.md
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
BeIR
null
null
null
false
7
false
BeIR/scidocs-generated-queries
2022-10-23T06:12:52.000Z
beir
false
cf98811dd1e94557e7fc39d30f670512bb747aee
[]
[ "language:en", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:1M<n<10M", "size_categories:100k<n<1M", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "task_categories:text-retrieval", "task_ids:passage-retrieval", "task_ids:entity-l...
https://huggingface.co/datasets/BeIR/scidocs-generated-queries/resolve/main/README.md
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python
# Sketch: load a preprocessed BEIR dataset with the Hugging Face `datasets` library
# (the repository id below is one example).
from datasets import load_dataset

dataset = load_dataset("BeIR/scidocs-generated-queries")
```
### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory."
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document.
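The corpus/queries `.jsonl` files and the qrels `.tsv` file described above can be parsed with the Python standard library alone; a minimal sketch (the helper names and inline sample records below are illustrative, not part of BEIR):

```python
import csv
import io
import json


def load_jsonl(lines):
    """Index jsonlines records (corpus or queries) by their `_id` field."""
    return {rec["_id"]: rec for rec in map(json.loads, lines)}


def load_qrels(tsv_text):
    """Parse a qrels TSV whose header row is `query-id  corpus-id  score`."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    qrels = {}
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels


corpus = load_jsonl([
    '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born..."}',
])
queries = load_jsonl([
    '{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}',
])
qrels = load_qrels("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")
```

In practice the same readers apply to any BEIR dataset, since all of them share this three-file layout.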
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
BeIR
null
null
null
false
3
false
BeIR/fiqa-generated-queries
2022-10-23T06:13:18.000Z
beir
false
29c8b8c4259ea3d8258e7e91c87b41c86ef860c1
[]
[ "language:en", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:1M<n<10M", "size_categories:100k<n<1M", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "task_categories:text-retrieval", "task_ids:passage-retrieval", "task_ids:entity-l...
https://huggingface.co/datasets/BeIR/fiqa-generated-queries/resolve/main/README.md
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python
# Sketch: load a preprocessed BEIR dataset with the Hugging Face `datasets` library
# (the repository id below is one example).
from datasets import load_dataset

dataset = load_dataset("BeIR/fiqa-generated-queries")
```
### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory."
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document.
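Given qrels in the dictionary form shown above and a ranked list of retrieved document ids per query, a retrieval metric such as Recall@k can be computed in a few lines; a toy sketch (function and variable names are illustrative, not BEIR's own evaluation code):

```python
def recall_at_k(qrels, results, k):
    """Fraction of relevant documents found in the top-k results, averaged over queries."""
    scores = []
    for query_id, relevant in qrels.items():
        top_k = results.get(query_id, [])[:k]
        hits = sum(1 for doc_id in top_k if relevant.get(doc_id, 0) > 0)
        scores.append(hits / len(relevant))
    return sum(scores) / len(scores)


qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
results = {"q1": ["doc3", "doc1"], "q2": ["doc4", "doc5"]}
print(recall_at_k(qrels, results, 2))  # q1 found (1.0), q2 missed (0.0) -> 0.5
```

The official BEIR toolkit reports rank-aware metrics such as nDCG@10 as well; this sketch only illustrates how the qrels structure feeds an evaluation.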
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.