Schema (one entry per column):

author: string (length 2–29)
cardData: null
citation: string (length 0–9.58k)
description: string (length 0–5.93k)
disabled: bool (1 class)
downloads: float64 (1–1M)
gated: bool (2 classes)
id: string (length 2–108)
lastModified: string (length 24)
paperswithcode_id: string (length 2–45)
private: bool (2 classes)
sha: string (length 40)
siblings: list
tags: list
readme_url: string (length 57–163)
readme: string (length 0–977k)
mteb
null
null
null
false
114
false
mteb/biosses-sts
2022-09-27T19:13:38.000Z
null
false
9ee918f184421b6bd48b78f6c714d86546106103
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/biosses-sts/resolve/main/README.md
--- language: - en ---
mteb
null
null
null
false
463
false
mteb/stsbenchmark-sts
2022-09-27T19:11:21.000Z
null
false
8913289635987208e6e7c72789e4be2fe94b6abd
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/stsbenchmark-sts/resolve/main/README.md
--- language: - en ---
arka0821
null
@article{lu2020multi, title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles}, author={Arka Das, India}, journal={arXiv preprint arXiv:2010.14235}, year={2022} }
Multi-Document is a large-scale multi-document summarization dataset created from scientific articles. It introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
false
1
false
arka0821/multi_document_summarization
2022-10-20T19:13:26.000Z
multi-document
false
47054de4458827ac3fb5136f5f953ddf3deb3c53
[]
[ "arxiv:2010.14235", "annotations_creators:found", "language_creators:found", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:summarization", "task_ids:summarization-other-paper-abstract-generation" ]
https://huggingface.co/datasets/arka0821/multi_document_summarization/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: - summarization-other-paper-abstract-generation paperswithcode_id: multi-document pretty_name: Multi-Document --- # Dataset Card for Multi-Document ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Multi-Document repository](https://github.com/arka0821/multi_document_summarization) - **Paper:** [Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235) ### Dataset Summary Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English ## Dataset Structure ### Data Instances {"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages &amp; T-Cells to attack &amp; destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks \" Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. 
Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."} ### Data Fields {'id': text of paper abstract \ 'docs': document id \ [ 'id': id of text \ 'text': text data \ ] 'summary': summary text } ### Data Splits The data is split into a training, validation and test. | train | validation | test | |------:|-----------:|-----:| | 50 | 10 | 5 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{lu2020multi, title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles}, author={Arka Das, India}, journal={arXiv preprint arXiv:2010.14235}, year={2022} } ``` ### Contributions Thanks to [@arka0821](https://github.com/arka0821/multi_document_summarization) for adding this dataset.
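The Data Instances and Data Fields sections above describe a nested instance layout (`id`, a list of `docs`, and a `summary`). A minimal sketch of one such instance in Python, showing how the fields fit together — the `id` value is taken from the example above, but the document texts here are illustrative placeholders, not real dataset content:

```python
# A minimal instance mirroring the schema shown in the Data Instances section.
# Field values other than "id" are illustrative, not drawn from the dataset.
instance = {
    "id": "n3ByHGrxH3bvfrvF",   # instance identifier
    "docs": [                    # list of source documents
        {"id": "doc-1", "text": "First source document."},
        {"id": "doc-2", "text": "Second source document."},
    ],
    "summary": "A summary written from the source documents.",
}

# Concatenate the source texts the way a multi-document
# summarizer would typically receive them as one input.
joined = " ".join(doc["text"] for doc in instance["docs"])
print(joined)
```

This mirrors the structure a model sees during training: many `docs` in, one `summary` out.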
asdAD222
null
null
null
false
1
false
asdAD222/segment_T
2022-04-19T15:56:02.000Z
null
false
f0bf0ee2765b1df5e8dc9bc0f3bd661fd880fda5
[]
[]
https://huggingface.co/datasets/asdAD222/segment_T/resolve/main/README.md
danrambado
null
null
null
false
2
false
danrambado/SODA
2022-04-19T18:57:01.000Z
null
false
2a0ac924abdb53e4ba4a7c1ae943fc8ef3b296d3
[]
[ "license:mit" ]
https://huggingface.co/datasets/danrambado/SODA/resolve/main/README.md
--- license: mit ---
bookbot
null
null
null
false
3
false
bookbot/id_word2phoneme
2022-10-24T17:47:22.000Z
null
false
10ff57bba0a3716b15da6ce134a094f85c4632fc
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:id", "language:ms", "source_datasets:original", "task_categories:text2text-generation", "task_ids:g2p" ]
https://huggingface.co/datasets/bookbot/id_word2phoneme/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - id - ms source_datasets: - original task_categories: - text2text-generation task_ids: - g2p paperswithcode_id: null pretty_name: ID Word2Phoneme --- # Dataset Card for ID Word2Phoneme ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Github](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) - **Repository:** [Github](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) - **Point of Contact:** - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** ### Dataset Summary Originally a [Malay/Indonesian Lexicon](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) retrieved from [ipa-dict](https://github.com/open-dict-data/ipa-dict). We removed the accented letters (because Indonesian graphemes do not use accents), separated homographs, and removed backslashes in phonemes -- resulting in a word-to-phoneme dataset. ### Languages - Indonesian - Malay ## Dataset Structure ### Data Instances | word | phoneme | | ----- | ------- | | aba | aba | | ab | ab | | ab’ad | abʔad | | abad | abad | | abadi | abadi | | ... | ... | ### Data Fields - `word`: Word (grapheme) as a string. - `phoneme`: Phoneme (IPA) as a string. ### Data Splits | train | | ----- | | 27553 | ## Additional Information ### Citation Information ``` @misc{open-dict-data-no-date, author = {{Open-Dict-Data}}, title = {{GitHub - open-dict-data/ipa-dict: Monolingual wordlists with pronunciation information in IPA}}, url = {https://github.com/open-dict-data/ipa-dict}, } ```
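The word-to-phoneme pairs in the Data Instances table above map naturally onto a dictionary lookup. A minimal sketch using only the sample rows shown in that table (since homographs were separated upstream, graphemes can serve as unique keys):

```python
# Rows copied from the Data Instances table above.
rows = [
    ("aba", "aba"),
    ("ab", "ab"),
    ("ab’ad", "abʔad"),
    ("abad", "abad"),
    ("abadi", "abadi"),
]

# Build a grapheme -> phoneme (IPA) lookup table.
g2p = {word: phoneme for word, phoneme in rows}
print(g2p["ab’ad"])  # ʔ is the IPA glottal stop
```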
Khalsuu
null
@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }
Magic Hub's initiative to help teach machines how real people speak. It provides structured data so that enthusiasts and researchers can spend more time training models rather than cleaning and structuring data.
false
2
false
Khalsuu/filipino_dataset_script
2022-04-28T15:25:10.000Z
null
false
6128e88e317eb20bb3c429b09db8eecd915bbd41
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/Khalsuu/filipino_dataset_script/resolve/main/README.md
--- license: apache-2.0 ---
csebuetnlp
null
null
null
false
4
false
csebuetnlp/CrossSum
2022-10-06T12:11:21.000Z
null
false
d22e0fe3d6bd2f72234f82c454afcdb20e106a3c
[]
[ "arxiv:2112.08804", "task_ids:summarization", "language:am", "language:ar", "language:az", "language:bn", "language:my", "language:zh", "language:en", "language:fr", "language:gu", "language:ha", "language:hi", "language:ig", "language:id", "language:ja", "language:rn", "language:k...
https://huggingface.co/datasets/csebuetnlp/CrossSum/resolve/main/README.md
--- task_categories: - conditional-text-generation task_ids: - summarization language: - am - ar - az - bn - my - zh - en - fr - gu - ha - hi - ig - id - ja - rn - ko - ky - mr - ne - om - ps - fa - pcm - pt - pa - ru - gd - sr - si - so - es - sw - ta - te - th - ti - tr - uk - ur - uz - vi - cy - yo size_categories: - 1M<n<10M license: - cc-by-nc-sa-4.0 multilinguality: - multilingual source_datasets: - original annotations_creators: - found language_creators: - found pretty_name: CrossSum --- # Dataset Card for "CrossSum" ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** 
[https://github.com/csebuetnlp/CrossSum](https://github.com/csebuetnlp/CrossSum) - **Paper:** [CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs](https://arxiv.org/abs/2112.08804) - **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd) ### Dataset Summary We present CrossSum, a large-scale dataset comprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs constituting 45 languages. We use the multilingual XL-Sum dataset and align identical articles written in different languages via cross-lingual retrieval using a language-agnostic representation model. ### Supported Tasks and Leaderboards [More information needed](https://github.com/csebuetnlp/CrossSum) ### Languages - `amharic` - `arabic` - `azerbaijani` - `bengali` - `burmese` - `chinese_simplified` - `chinese_traditional` - `english` - `french` - `gujarati` - `hausa` - `hindi` - `igbo` - `indonesian` - `japanese` - `kirundi` - `korean` - `kyrgyz` - `marathi` - `nepali` - `oromo` - `pashto` - `persian` - `pidgin` - `portuguese` - `punjabi` - `russian` - `scottish_gaelic` - `serbian_cyrillic` - `serbian_latin` - `sinhala` - `somali` - `spanish` - `swahili` - `tamil` - `telugu` - `thai` - `tigrinya` - `turkish` - `ukrainian` - `urdu` - `uzbek` - `vietnamese` - `welsh` - `yoruba` ## Loading the dataset ```python from datasets import load_dataset # for available language names, see above src_lang = "english" tgt_lang = "bengali" ds = load_dataset("csebuetnlp/CrossSum", f"{src_lang}-{tgt_lang}") ``` ## Dataset Structure ### Data Instances One example from the `English` dataset is given below in JSON format.
``` { "source_url": "https://www.bbc.com/japanese/53074000", "target_url": "https://www.bbc.com/bengali/news-53064712", "summary": "বিজ্ঞানীরা বলছেন ডেক্সামেথাসোন নামে সস্তা ও সহজলভ্য একটি ওষুধ করোনাভাইরাসে গুরুতর অসুস্থ রোগীদের জীবন রক্ষা করতে সাহায্য করবে।", "text": "ミシェル・ロバーツ、BBCニュースオンライン健康担当編集長 英オックスフォード大学の研究チームによると、低用量のデキサメタゾンは新型ウイルスとの戦いで画期的な突破口になる。 新型コロナウイルスに対し、様々な既存の治療法の効果を試す世界的規模の臨床試験の一貫として、デキサメタゾンが試された。 その結果、人工呼吸器を必要とする重症患者の致死率が3割下がり、酸素供給を必要とする患者の場合は2割下がった。 新型ウイルスのパンデミック(世界的流行)の初期からイギリスでデキサメタゾンを治療に使用していた場合、最大5000人の命が救えたはずだと研究者たちは言う。 さらに、新型コロナウイルスによる感染症「COVID-19」の患者が多く出ている貧しい国にとっても、安価なデキサメタゾンを使う治療は大いに役立つと期待される。 重症者の致死率が大幅に下がる イギリス政府は20万人分の投与量を備蓄しており、国民医療制度の国民保健サービス(NHS)で患者への使用を開始する方針を示した。 ボリス・ジョンソン英首相は「イギリス科学界の素晴らしい成果」を歓迎し、「たとえ感染の第2波が来ても備蓄が足りるよう、数を確保するための措置をとった」と述べた。 イングランド首席医務官クリス・ウィッティー教授は、「COVID-19にとってこれまでで一番重要な臨床試験結果だ。手に入りやすく安全でなじみのある薬によって、酸素供給や人工呼吸器が必要な人の致死率が大幅に下がった。(中略)この発見が世界中で人命を救う」と評価した。 <関連記事> 新型コロナウイルスに20人が感染した場合、19人は入院しないまま回復する。入院する人もほとんどは回復するものの、重症化して酸素供給や人工呼吸器を必要とする人もいる。 デキサメタゾンはこうした重症患者の治療に効果があるもよう。 新型ウイルスに感染した患者の体内では、ウイルスと戦う免疫系が暴走することがある。その免疫系の過剰反応による体の損傷を、デキサメタゾンが緩和するものとみられる。 「サイトカイン・ストーム」と呼ばれる免疫系の過剰反応が、患者の命を奪うこともある。 デキサメタゾンはすでに抗炎症剤として、ぜんそくや皮膚炎など様々な症状の治療に使われている。 初めて致死率を下げる薬 オックスフォード大学が主導する臨床試験は、約2000人の入院患者にデキサメタゾンを投与。それ以外の4000人以上の患者と容体を比較した。 人工呼吸器を使用する患者については、死亡リスクが40%から28%に下がった。 酸素供給する患者は、死亡リスクが25%から20%に下がった。 研究チームのピーター・ホービー教授は、「今のところ、致死率を実際に下げる結果が出たのは、この薬だけだ。しかも、致死率をかなり下げる。画期的な突破口だ」と話した。 研究を主導するマーティン・ランドレイ教授によると、人工呼吸器を使う患者の8人に1人、ならびに酸素供給治療を受ける患者の20-25人に1人が、デキサメタゾンで救えることが分かったという。 「これはきわめて明確なメリットだ」と教授は言う。 「最大10日間、デキサメタゾンを投与するという治療法で、費用は患者1人あたり1日約5ポンド(約670円)。つまり、35ポンド(約4700円)で人ひとりの命が救える」 「しかもこれは、世界中で手に入る薬だ」 状況が許す限り、新型コロナウイルスで入院中の患者にはただちに投与を開始すべきだと、ランドレイ教授は促した。 ただし、自宅で自己治療するために薬局に買いに行くべきではないと言う。 デキサメタゾンは、呼吸補助を必要としない軽症の患者には効果がないもよう。 3月に始動した新型コロナウイルス治療薬の無作為化臨床試験「リカバリー・トライアル」は、抗マラリア薬「ヒドロキシクロロキン」も調べたものの、心臓疾患や致死率の悪化につながるという懸念から、ヒドロキシクロロキンについては試験を中止した。 一方で、感染者の回復にかかる時間を短縮するとみられるレムデシビルは、すでにNHSの保険対象になり治療現場で使われている。 <解説> ファーガス・ウォルシュBBC健康担当編集委員 
COVID-19の死者を減らすと初めて立証された薬は、高価な新しい薬ではなく、古くからずっと使われてきた、きわめて安いステロイド剤だった。 世界中の患者が直ちにその恩恵を受けることになるので、これは歓迎すべき発見だ。 この臨床試験の最新成果がこれほど急いで発表されたのは、そのためだ。とてつもない影響を世界中にもたらすので。 デキサメタゾンは1960年代初めから、関節リウマチやぜんそくなど、幅広い症状の治療に使われてきた。 これまでは、人工呼吸器を必要とするCOVID-19患者の半数が亡くなってきた。その致死率を3割減らすというのは、絶大な効果だ。 集中治療室では点滴で投与する。もう少し軽症な患者には、錠剤で与える。 これまでのところ、COVID-19患者に効果があると証明された薬は、エボラ治療薬のレムデシビルだけだった。 レムデシビルは症状の回復期間を15日から11日に短縮する。しかし、致死率を下げると言えるだけの証拠は出ていなかった。 デキサメタゾンと異なり、レムデシビルは数の少ない新薬で、薬価もまだ公表されていない。" } ``` ### Data Fields - 'source_url': A string representing the source article URL. - 'target_url': A string representing the target article URL. - 'summary': A string containing the article summary. - 'text' : A string containing the article text. ### Data Splits No. of total examples for each language pair are as follows: Language (ISO 639-1-Code) | am | ar | az | bn | my | zh-CN | zh-TW | en | fr | gu | ha | hi | ig | id | ja | rn | ko | ky | mr | np | om | ps | fa | pcm | pt | pa | ru | gd | sr | sr | si | so | es | sw | ta | te | th | ti | tr | uk | ur | uz | vi | cy | yo ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- am | -- | 667 | 100 | 272 | 95 | 179 | 167 | 1456 | 358 | 173 | 221 | 377 | 26 | 494 | 264 | 423 | 244 | 92 | 221 | 301 | 21 | 192 | 431 | 209 | 307 | 189 | 347 | 0 | 357 | 365 | 62 | 309 | 351 | 378 | 390 | 329 | 124 | 131 | 435 | 345 | 409 | 41 | 285 | 1 | 67 ar | 667 | -- | 787 | 804 | 652 | 2968 | 2843 | 9653 | 989 | 475 | 747 | 3665 | 86 | 6084 | 1188 | 876 | 707 | 299 | 559 | 854 | 9 | 2161 | 4186 | 436 | 2539 | 547 | 5564 | 1 | 1109 | 1145 | 315 | 1049 | 3654 | 1186 | 1311 | 877 | 367 | 27 | 4147 | 3457 | 4935 | 388 | 2666 | 38 | 141 az | 100 | 787 | -- | 277 | 84 | 
371 | 334 | 1317 | 208 | 192 | 126 | 748 | 28 | 1111 | 231 | 188 | 155 | 221 | 194 | 242 | 1 | 252 | 817 | 91 | 678 | 190 | 2238 | 4 | 289 | 283 | 124 | 367 | 704 | 539 | 515 | 245 | 140 | 2 | 1495 | 1383 | 966 | 199 | 725 | 30 | 42 bn | 272 | 804 | 277 | -- | 139 | 318 | 284 | 1549 | 317 | 559 | 231 | 1396 | 35 | 1076 | 342 | 298 | 352 | 154 | 586 | 668 | 2 | 300 | 790 | 135 | 764 | 580 | 838 | 0 | 562 | 564 | 151 | 412 | 701 | 471 | 919 | 793 | 245 | 6 | 860 | 688 | 1382 | 98 | 527 | 37 | 61 my | 95 | 652 | 84 | 139 | -- | 356 | 314 | 685 | 90 | 96 | 74 | 528 | 12 | 761 | 144 | 100 | 112 | 58 | 89 | 152 | 1 | 234 | 426 | 39 | 230 | 86 | 535 | 0 | 115 | 123 | 87 | 79 | 431 | 86 | 185 | 147 | 71 | 4 | 449 | 350 | 591 | 62 | 447 | 4 | 12 zh-CN | 179 | 2968 | 371 | 318 | 356 | -- | 47101 | 4975 | 348 | 201 | 159 | 1379 | 38 | 2851 | 1017 | 240 | 412 | 139 | 240 | 275 | 14 | 559 | 1111 | 149 | 1371 | 250 | 2572 | 2 | 504 | 530 | 166 | 323 | 2002 | 412 | 511 | 353 | 269 | 11 | 1511 | 1619 | 1651 | 176 | 1858 | 33 | 39 zh-TW | 167 | 2843 | 334 | 284 | 314 | 47101 | -- | 4884 | 331 | 174 | 150 | 1213 | 35 | 2588 | 953 | 209 | 382 | 131 | 213 | 252 | 16 | 501 | 967 | 141 | 1271 | 226 | 2286 | 1 | 453 | 494 | 150 | 302 | 1873 | 383 | 465 | 335 | 250 | 12 | 1294 | 1464 | 1444 | 158 | 1663 | 31 | 38 en | 1456 | 9653 | 1317 | 1549 | 685 | 4975 | 4884 | -- | 1889 | 978 | 913 | 4728 | 144 | 10040 | 3040 | 1878 | 1673 | 490 | 1181 | 1614 | 38 | 1522 | 4680 | 1074 | 4744 | 1330 | 9080 | 128 | 3760 | 3809 | 532 | 2141 | 6910 | 2701 | 3156 | 2121 | 1020 | 58 | 5676 | 6562 | 6320 | 450 | 4574 | 2655 | 229 fr | 358 | 989 | 208 | 317 | 90 | 348 | 331 | 1889 | -- | 242 | 477 | 616 | 106 | 1018 | 274 | 735 | 264 | 124 | 241 | 323 | 4 | 196 | 602 | 439 | 921 | 247 | 849 | 2 | 555 | 569 | 98 | 502 | 990 | 872 | 425 | 380 | 185 | 10 | 829 | 721 | 766 | 76 | 438 | 40 | 159 gu | 173 | 475 | 192 | 559 | 96 | 201 | 174 | 978 | 242 | -- | 147 | 5170 | 34 | 710 | 228 | 183 | 268 | 106 | 2091 | 
561 | 1 | 246 | 522 | 101 | 529 | 2210 | 582 | 0 | 331 | 345 | 125 | 261 | 540 | 300 | 1762 | 2066 | 164 | 5 | 631 | 508 | 1619 | 80 | 450 | 21 | 54 ha | 221 | 747 | 126 | 231 | 74 | 159 | 150 | 913 | 477 | 147 | -- | 460 | 202 | 901 | 157 | 485 | 135 | 61 | 159 | 239 | 5 | 229 | 487 | 529 | 375 | 157 | 525 | 1 | 258 | 258 | 49 | 391 | 463 | 568 | 299 | 260 | 87 | 9 | 519 | 400 | 526 | 59 | 352 | 30 | 362 hi | 377 | 3665 | 748 | 1396 | 528 | 1379 | 1213 | 4728 | 616 | 5170 | 460 | -- | 65 | 5627 | 623 | 489 | 520 | 234 | 3831 | 1357 | 4 | 1519 | 5351 | 192 | 6563 | 4052 | 4622 | 1 | 809 | 807 | 449 | 747 | 2931 | 893 | 3711 | 3762 | 378 | 7 | 3694 | 3935 | 15666 | 352 | 3738 | 77 | 79 ig | 26 | 86 | 28 | 35 | 12 | 38 | 35 | 144 | 106 | 34 | 202 | 65 | -- | 113 | 24 | 107 | 32 | 16 | 51 | 36 | 3 | 11 | 49 | 255 | 61 | 39 | 79 | 0 | 51 | 51 | 13 | 77 | 91 | 151 | 52 | 54 | 18 | 5 | 91 | 83 | 61 | 15 | 65 | 6 | 296 id | 494 | 6084 | 1111 | 1076 | 761 | 2851 | 2588 | 10040 | 1018 | 710 | 901 | 5627 | 113 | -- | 1274 | 994 | 774 | 347 | 745 | 1104 | 8 | 1430 | 3892 | 367 | 4409 | 725 | 7588 | 7 | 1387 | 1379 | 470 | 1312 | 4547 | 1873 | 1886 | 1131 | 599 | 9 | 5663 | 4829 | 6476 | 432 | 4810 | 145 | 174 ja | 264 | 1188 | 231 | 342 | 144 | 1017 | 953 | 3040 | 274 | 228 | 157 | 623 | 24 | 1274 | -- | 372 | 654 | 140 | 302 | 424 | 2 | 266 | 1014 | 152 | 706 | 269 | 1517 | 2 | 550 | 571 | 109 | 387 | 950 | 425 | 641 | 425 | 305 | 5 | 1242 | 1013 | 797 | 49 | 908 | 25 | 33 rn | 423 | 876 | 188 | 298 | 100 | 240 | 209 | 1878 | 735 | 183 | 485 | 489 | 107 | 994 | 372 | -- | 283 | 106 | 242 | 369 | 18 | 228 | 684 | 398 | 526 | 206 | 711 | 0 | 443 | 450 | 77 | 584 | 607 | 1186 | 521 | 363 | 149 | 13 | 724 | 610 | 617 | 59 | 631 | 20 | 180 ko | 244 | 707 | 155 | 352 | 112 | 412 | 382 | 1673 | 264 | 268 | 135 | 520 | 32 | 774 | 654 | 283 | -- | 99 | 319 | 445 | 1 | 150 | 596 | 130 | 587 | 264 | 649 | 0 | 522 | 543 | 81 | 234 | 613 | 324 | 541 | 452 | 197 | 5 | 680 | 616 | 532 | 54 
| 530 | 12 | 45 ky | 92 | 299 | 221 | 154 | 58 | 139 | 131 | 490 | 124 | 106 | 61 | 234 | 16 | 347 | 140 | 106 | 99 | -- | 107 | 167 | 4 | 102 | 252 | 59 | 251 | 118 | 1013 | 1 | 206 | 211 | 45 | 145 | 279 | 150 | 206 | 174 | 109 | 3 | 346 | 508 | 270 | 113 | 201 | 12 | 23 mr | 221 | 559 | 194 | 586 | 89 | 240 | 213 | 1181 | 241 | 2091 | 159 | 3831 | 51 | 745 | 302 | 242 | 319 | 107 | -- | 630 | 1 | 232 | 608 | 138 | 524 | 1797 | 675 | 0 | 419 | 436 | 129 | 270 | 603 | 332 | 1776 | 1886 | 196 | 11 | 706 | 596 | 1395 | 79 | 473 | 16 | 48 np | 301 | 854 | 242 | 668 | 152 | 275 | 252 | 1614 | 323 | 561 | 239 | 1357 | 36 | 1104 | 424 | 369 | 445 | 167 | 630 | -- | 1 | 303 | 916 | 134 | 706 | 545 | 849 | 2 | 553 | 538 | 164 | 420 | 687 | 513 | 994 | 741 | 217 | 7 | 930 | 741 | 1156 | 84 | 719 | 39 | 65 om | 21 | 9 | 1 | 2 | 1 | 14 | 16 | 38 | 4 | 1 | 5 | 4 | 3 | 8 | 2 | 18 | 1 | 4 | 1 | 1 | -- | 2 | 3 | 11 | 4 | 6 | 8 | 0 | 2 | 3 | 0 | 6 | 7 | 5 | 2 | 2 | 1 | 103 | 5 | 10 | 1 | 4 | 2 | 0 | 7 ps | 192 | 2161 | 252 | 300 | 234 | 559 | 501 | 1522 | 196 | 246 | 229 | 1519 | 11 | 1430 | 266 | 228 | 150 | 102 | 232 | 303 | 2 | -- | 2815 | 94 | 594 | 249 | 1246 | 0 | 235 | 242 | 156 | 304 | 766 | 314 | 441 | 314 | 92 | 8 | 1049 | 818 | 2833 | 156 | 657 | 7 | 32 fa | 431 | 4186 | 817 | 790 | 426 | 1111 | 967 | 4680 | 602 | 522 | 487 | 5351 | 49 | 3892 | 1014 | 684 | 596 | 252 | 608 | 916 | 3 | 2815 | -- | 186 | 5512 | 541 | 4328 | 0 | 1028 | 1023 | 276 | 812 | 2512 | 1002 | 1250 | 797 | 364 | 8 | 3695 | 3567 | 6752 | 313 | 3190 | 66 | 74 pcm | 209 | 436 | 91 | 135 | 39 | 149 | 141 | 1074 | 439 | 101 | 529 | 192 | 255 | 367 | 152 | 398 | 130 | 59 | 138 | 134 | 11 | 94 | 186 | -- | 227 | 112 | 322 | 0 | 234 | 246 | 28 | 219 | 314 | 436 | 232 | 162 | 85 | 28 | 287 | 280 | 232 | 18 | 170 | 9 | 462 pt | 307 | 2539 | 678 | 764 | 230 | 1371 | 1271 | 4744 | 921 | 529 | 375 | 6563 | 61 | 4409 | 706 | 526 | 587 | 251 | 524 | 706 | 4 | 594 | 5512 | 227 | -- | 579 | 4452 | 7 | 1371 | 1341 
| 231 | 602 | 7112 | 983 | 1042 | 820 | 468 | 3 | 3483 | 4421 | 6759 | 186 | 3754 | 110 | 97 pa | 189 | 547 | 190 | 580 | 86 | 250 | 226 | 1330 | 247 | 2210 | 157 | 4052 | 39 | 725 | 269 | 206 | 264 | 118 | 1797 | 545 | 6 | 249 | 541 | 112 | 579 | -- | 629 | 0 | 410 | 404 | 128 | 283 | 585 | 357 | 1726 | 1892 | 200 | 10 | 643 | 570 | 1515 | 73 | 431 | 16 | 44 ru | 347 | 5564 | 2238 | 838 | 535 | 2572 | 2286 | 9080 | 849 | 582 | 525 | 4622 | 79 | 7588 | 1517 | 711 | 649 | 1013 | 675 | 849 | 8 | 1246 | 4328 | 322 | 4452 | 629 | -- | 5 | 1495 | 1460 | 373 | 1166 | 4864 | 1672 | 1628 | 892 | 595 | 7 | 6223 | 22241 | 5309 | 809 | 3963 | 134 | 125 gd | 0 | 1 | 4 | 0 | 0 | 2 | 1 | 128 | 2 | 0 | 1 | 1 | 0 | 7 | 2 | 0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 7 | 0 | 5 | -- | 2 | 3 | 2 | 1 | 3 | 1 | 0 | 0 | 1 | 0 | 6 | 5 | 2 | 1 | 3 | 36 | 2 sr | 357 | 1109 | 289 | 562 | 115 | 504 | 453 | 3760 | 555 | 331 | 258 | 809 | 51 | 1387 | 550 | 443 | 522 | 206 | 419 | 553 | 2 | 235 | 1028 | 234 | 1371 | 410 | 1495 | 2 | -- | 9041 | 127 | 377 | 1235 | 574 | 761 | 691 | 340 | 6 | 1247 | 1512 | 1021 | 109 | 685 | 42 | 69 sr | 365 | 1145 | 283 | 564 | 123 | 530 | 494 | 3809 | 569 | 345 | 258 | 807 | 51 | 1379 | 571 | 450 | 543 | 211 | 436 | 538 | 3 | 242 | 1023 | 246 | 1341 | 404 | 1460 | 3 | 9041 | -- | 137 | 382 | 1260 | 568 | 775 | 699 | 347 | 10 | 1229 | 1498 | 1009 | 112 | 639 | 45 | 79 si | 62 | 315 | 124 | 151 | 87 | 166 | 150 | 532 | 98 | 125 | 49 | 449 | 13 | 470 | 109 | 77 | 81 | 45 | 129 | 164 | 0 | 156 | 276 | 28 | 231 | 128 | 373 | 2 | 127 | 137 | -- | 137 | 260 | 189 | 348 | 173 | 69 | 7 | 301 | 306 | 510 | 38 | 216 | 5 | 15 so | 309 | 1049 | 367 | 412 | 79 | 323 | 302 | 2141 | 502 | 261 | 391 | 747 | 77 | 1312 | 387 | 584 | 234 | 145 | 270 | 420 | 6 | 304 | 812 | 219 | 602 | 283 | 1166 | 1 | 377 | 382 | 137 | -- | 689 | 1020 | 723 | 384 | 178 | 19 | 968 | 875 | 1000 | 75 | 724 | 20 | 116 es | 351 | 3654 | 704 | 701 | 431 | 2002 | 1873 | 6910 | 990 | 540 | 463 | 2931 | 91 | 4547 
| 950 | 607 | 613 | 279 | 603 | 687 | 7 | 766 | 2512 | 314 | 7112 | 585 | 4864 | 3 | 1235 | 1260 | 260 | 689 | -- | 1047 | 1073 | 827 | 469 | 10 | 3645 | 3130 | 3060 | 290 | 2330 | 59 | 133 sw | 378 | 1186 | 539 | 471 | 86 | 412 | 383 | 2701 | 872 | 300 | 568 | 893 | 151 | 1873 | 425 | 1186 | 324 | 150 | 332 | 513 | 5 | 314 | 1002 | 436 | 983 | 357 | 1672 | 1 | 574 | 568 | 189 | 1020 | 1047 | -- | 929 | 492 | 261 | 10 | 1348 | 1309 | 1253 | 90 | 936 | 37 | 219 ta | 390 | 1311 | 515 | 919 | 185 | 511 | 465 | 3156 | 425 | 1762 | 299 | 3711 | 52 | 1886 | 641 | 521 | 541 | 206 | 1776 | 994 | 2 | 441 | 1250 | 232 | 1042 | 1726 | 1628 | 0 | 761 | 775 | 348 | 723 | 1073 | 929 | -- | 2278 | 400 | 14 | 1486 | 1423 | 2404 | 134 | 1092 | 32 | 68 te | 329 | 877 | 245 | 793 | 147 | 353 | 335 | 2121 | 380 | 2066 | 260 | 3762 | 54 | 1131 | 425 | 363 | 452 | 174 | 1886 | 741 | 2 | 314 | 797 | 162 | 820 | 1892 | 892 | 0 | 691 | 699 | 173 | 384 | 827 | 492 | 2278 | -- | 306 | 11 | 893 | 832 | 1748 | 107 | 644 | 21 | 61 th | 124 | 367 | 140 | 245 | 71 | 269 | 250 | 1020 | 185 | 164 | 87 | 378 | 18 | 599 | 305 | 149 | 197 | 109 | 196 | 217 | 1 | 92 | 364 | 85 | 468 | 200 | 595 | 1 | 340 | 347 | 69 | 178 | 469 | 261 | 400 | 306 | -- | 5 | 477 | 480 | 414 | 37 | 357 | 10 | 26 ti | 131 | 27 | 2 | 6 | 4 | 11 | 12 | 58 | 10 | 5 | 9 | 7 | 5 | 9 | 5 | 13 | 5 | 3 | 11 | 7 | 103 | 8 | 8 | 28 | 3 | 10 | 7 | 0 | 6 | 10 | 7 | 19 | 10 | 10 | 14 | 11 | 5 | -- | 8 | 8 | 4 | 2 | 5 | 0 | 6 tr | 435 | 4147 | 1495 | 860 | 449 | 1511 | 1294 | 5676 | 829 | 631 | 519 | 3694 | 91 | 5663 | 1242 | 724 | 680 | 346 | 706 | 930 | 5 | 1049 | 3695 | 287 | 3483 | 643 | 6223 | 6 | 1247 | 1229 | 301 | 968 | 3645 | 1348 | 1486 | 893 | 477 | 8 | -- | 4108 | 4340 | 370 | 2981 | 126 | 130 uk | 345 | 3457 | 1383 | 688 | 350 | 1619 | 1464 | 6562 | 721 | 508 | 400 | 3935 | 83 | 4829 | 1013 | 610 | 616 | 508 | 596 | 741 | 10 | 818 | 3567 | 280 | 4421 | 570 | 22241 | 5 | 1512 | 1498 | 306 | 875 | 3130 | 1309 | 1423 | 832 | 
480 | 8 | 4108 | -- | 4290 | 442 | 3017 | 108 | 89 ur | 409 | 4935 | 966 | 1382 | 591 | 1651 | 1444 | 6320 | 766 | 1619 | 526 | 15666 | 61 | 6476 | 797 | 617 | 532 | 270 | 1395 | 1156 | 1 | 2833 | 6752 | 232 | 6759 | 1515 | 5309 | 2 | 1021 | 1009 | 510 | 1000 | 3060 | 1253 | 2404 | 1748 | 414 | 4 | 4340 | 4290 | -- | 389 | 3723 | 72 | 88 uz | 41 | 388 | 199 | 98 | 62 | 176 | 158 | 450 | 76 | 80 | 59 | 352 | 15 | 432 | 49 | 59 | 54 | 113 | 79 | 84 | 4 | 156 | 313 | 18 | 186 | 73 | 809 | 1 | 109 | 112 | 38 | 75 | 290 | 90 | 134 | 107 | 37 | 2 | 370 | 442 | 389 | -- | 257 | 10 | 15 vi | 285 | 2666 | 726 | 527 | 447 | 1858 | 1663 | 4575 | 438 | 450 | 352 | 3738 | 65 | 4810 | 908 | 631 | 530 | 201 | 473 | 719 | 2 | 657 | 3190 | 170 | 3755 | 431 | 3963 | 3 | 685 | 639 | 216 | 724 | 2330 | 936 | 1092 | 644 | 357 | 5 | 2982 | 3017 | 3723 | 257 | -- | 106 | 76 cy | 1 | 38 | 30 | 37 | 4 | 33 | 31 | 2655 | 40 | 21 | 30 | 77 | 6 | 145 | 25 | 20 | 12 | 12 | 16 | 39 | 0 | 7 | 66 | 9 | 110 | 16 | 134 | 36 | 42 | 45 | 5 | 20 | 59 | 37 | 32 | 21 | 10 | 0 | 126 | 108 | 72 | 10 | 106 | -- | 8 yo | 67 | 141 | 42 | 61 | 12 | 39 | 38 | 229 | 159 | 54 | 362 | 79 | 296 | 174 | 33 | 180 | 45 | 23 | 48 | 65 | 7 | 32 | 74 | 462 | 97 | 44 | 125 | 2 | 69 | 79 | 15 | 116 | 133 | 219 | 68 | 61 | 26 | 6 | 130 | 89 | 88 | 15 | 76 | 8 | -- ## Dataset Creation ### Curation Rationale [More information needed](https://github.com/csebuetnlp/CrossSum) ### Source Data [BBC News](https://www.bbc.co.uk/ws/languages) #### Initial Data Collection and Normalization [Detailed in the paper](https://arxiv.org/abs/2112.08804/) #### Who are the source language producers? [Detailed in the paper](https://arxiv.org/abs/2112.08804/) ### Annotations [Detailed in the paper](https://arxiv.org/abs/2112.08804/) #### Annotation process [Detailed in the paper](https://arxiv.org/abs/2112.08804/) #### Who are the annotators? 
[Detailed in the paper](https://arxiv.org/abs/2112.08804/) ### Personal and Sensitive Information [More information needed](https://github.com/csebuetnlp/CrossSum) ## Considerations for Using the Data ### Social Impact of Dataset [More information needed](https://github.com/csebuetnlp/CrossSum) ### Discussion of Biases [More information needed](https://github.com/csebuetnlp/CrossSum) ### Other Known Limitations [More information needed](https://github.com/csebuetnlp/CrossSum) ## Additional Information ### Dataset Curators [More information needed](https://github.com/csebuetnlp/CrossSum) ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use any of the datasets, models or code modules, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ``` ### Contributions Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
UmnatHCU
null
null
null
false
3
false
UmnatHCU/test
2022-04-20T08:30:01.000Z
null
false
1436852d084f57851cb02ae13e16e4fa23c1e977
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/UmnatHCU/test/resolve/main/README.md
--- license: afl-3.0 ---
mteb
null
null
null
false
658
false
mteb/sts12-sts
2022-09-27T19:11:50.000Z
null
false
fdf84275bb8ce4b49c971d02e84dd1abc677a50f
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/sts12-sts/resolve/main/README.md
--- language: - en ---
mteb
null
null
null
false
358
false
mteb/sts13-sts
2022-09-27T19:12:02.000Z
null
false
1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/sts13-sts/resolve/main/README.md
--- language: - en ---
mteb
null
null
null
false
336
false
mteb/sts14-sts
2022-09-27T19:11:37.000Z
null
false
e2125984e7df8b7871f6ae9949cf6b6795e7c54b
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/sts14-sts/resolve/main/README.md
--- language: - en ---
mteb
null
null
null
false
502
false
mteb/sts15-sts
2022-09-27T19:12:14.000Z
null
false
1cd7298cac12a96a373b6a2f18738bb3e739a9b6
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/sts15-sts/resolve/main/README.md
--- language: - en ---
mteb
null
null
null
false
334
false
mteb/sts16-sts
2022-09-27T19:12:09.000Z
null
false
360a0b2dff98700d09e634a01e1cc1624d3e42cd
[]
[ "language:en" ]
https://huggingface.co/datasets/mteb/sts16-sts/resolve/main/README.md
--- language: - en ---
mwong
null
null
null
false
3
false
mwong/climatetext-claim-related-evaluation
2022-10-25T10:08:44.000Z
null
false
1f93eb3df343353f9b0d0f9bc724ab9473643bfe
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "task_categories:text-classification", "task_ids:fact-checking" ]
https://huggingface.co/datasets/mwong/climatetext-claim-related-evaluation/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate. The evaluation objective is a text classification task - given a climate-related claim and evidence, predict if the claim is related to the evidence.
NbAiLab
null
@inproceedings{, title={}, author={}, booktitle={}, year={2022}, url={https://arxiv.org/abs/} }
This database was created by Nordic Language Technology for the development of automatic speech recognition and dictation in Norwegian. In this version, the organization of the data has been altered to improve the usefulness of the database. The acoustic databases described below were developed by the firm Nordisk språkteknologi holding AS (NST), which went bankrupt in 2003. In 2006, a consortium consisting of the University of Oslo, the University of Bergen, the Norwegian University of Science and Technology, the Norwegian Language Council and IBM bought the bankruptcy estate of NST, in order to ensure that the language resources developed by NST were preserved. In 2009, the Norwegian Ministry of Culture charged the National Library of Norway with the task of creating a Norwegian language bank, which they initiated in 2010. The resources from NST were transferred to the National Library in May 2011, and are now made available in Språkbanken, for the time being without any further modification. Språkbanken is open for feedback from users about how the resources can be improved, and we are also interested in improved versions of the databases that users wish to share with other users. Please send responses and feedback to sprakbanken@nb.no.
false
274
false
NbAiLab/NST
2022-08-12T14:09:29.000Z
null
false
81dd00f3ce6d26dd7b103af91ef0013a535caacd
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/NbAiLab/NST/resolve/main/README.md
--- license: apache-2.0 ---
mwong
null
null
null
false
2
false
mwong/climatetext-evidence-related-evaluation
2022-10-25T10:08:46.000Z
null
false
72cac22487c265b0b27b424f561f0f3659c5746d
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "task_categories:text-classification", "task_ids:fact-checking" ]
https://huggingface.co/datasets/mwong/climatetext-evidence-related-evaluation/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate. The evaluation objective is a text classification task - given a climate-related claim and evidence, predict if the evidence is related to the claim.
Peihao
null
null
null
false
2
false
Peihao/CURE-Pretrain
2022-04-21T16:07:25.000Z
null
false
3140c1105204085e7461bcb8fd2301e9d4be9611
[]
[ "license:lgpl" ]
https://huggingface.co/datasets/Peihao/CURE-Pretrain/resolve/main/README.md
--- license: lgpl ---
crisdev
null
null
null
false
3
false
crisdev/comentarios
2022-05-06T14:18:49.000Z
null
false
8dec0f04d38cb2d2a2b83a72ac88df63c4c4e6da
[]
[ "license:mit" ]
https://huggingface.co/datasets/crisdev/comentarios/resolve/main/README.md
--- license: mit ---
daniel-dona
null
null
null
false
2
false
daniel-dona/tfg-voice-2
2022-04-20T22:26:10.000Z
null
false
eec8b7881f5b1c5fe586b476fce67ba9f93fdcbe
[]
[ "license:cc-by-sa-3.0" ]
https://huggingface.co/datasets/daniel-dona/tfg-voice-2/resolve/main/README.md
--- license: cc-by-sa-3.0 ---
alisawuffles
null
null
null
false
39
false
alisawuffles/WANLI
2022-09-19T17:19:09.000Z
null
false
09d892a63b8549fda24ac77b3ecd3aeef8162792
[]
[ "arxiv:2201.05955", "annotations_creators:crowdsourced", "language_creators:other", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:text-classification", "task_ids:natural-language-inference" ]
https://huggingface.co/datasets/alisawuffles/WANLI/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - other language: - en license: - cc-by-4.0 multilinguality: - monolingual pretty_name: WANLI size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - natural-language-inference --- # Dataset Card for WANLI ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [WANLI homepage](https://alisawuffles.github.io/publication/wanli/) - **Repository:** [Github repo](https://github.com/alisawuffles/wanli) - **Paper:** [arXiv](https://arxiv.org/abs/2201.05955) - **Point of Contact:** [Alisa Liu](mailto:alisaliu@cs.washington.edu) ### Dataset Summary WANLI (**W**orker-**A**I Collaboration for **NLI**) is a collection of 108K English sentence pairs for the task of natural language inference (NLI). Each example is created by first identifying a "pocket" of examples in [MultiNLI (Williams et al., 2018)](https://cims.nyu.edu/~sbowman/multinli/) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern. 
The set of generated examples is automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators. WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI. ### Supported Tasks and Leaderboards The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%. Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision. ### Languages The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States. ## Dataset Structure ### Data Instances Here is an NLI example from `data/wanli/train.jsonl` or `data/wanli/test.jsonl`. 
``` { "id": 225295, "premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.", "hypothesis": "The coach is a good coach.", "gold": "entailment", "genre": "generated", "pairID": "171408" } ``` - `id`: unique identifier for the example - `premise`: a piece of text - `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise - `gold`: one of `entailment`, `neutral`, and `contradiction` - `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators - `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl` We also release the raw annotations for each worker, which can be found in `data/wanli/anonymized_annotations.jsonl`. ``` "WorkerId": "EUJ", "id": 271560, "nearest_neighbors": [ 309783, 202988, 145310, 98030, 148759 ], "premise": "I don't know what I'd do without my cat. He is my only friend.", "hypothesis": "I would be alone.", "label": "neutral", "revised_premise": "I don't know what I'd do without my cat. He is my only friend.", "revised_hypothesis": "I would be alone without my cat.", "gold": "entailment", "revised": true ``` - `WorkerId`: a unique identification for each crowdworker (NOT the real worker ID from AMT) - `id`: id of generated example - `nearest_neighbors`: ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is seed ambiguous MNLI example. MNLI ids correspond to those in `mnli/train.jsonl`. 
- `premise`: GPT-3 generated premise - `hypothesis`: GPT-3 generated hypothesis - `label`: the shared label of the in-context examples, which is the "intended" label for this generation - `revised_premise`: premise after human review - `revised_hypothesis`: hypothesis after human review - `gold`: annotator-assigned gold label for the (potentially revised) example - `revised`: whether the example was revised ### Data Splits The dataset is randomly split into a *train* and *test* set. | | train | test | |-------------------------|------:|-----:| | Examples | 102885| 5000| ## Dataset Creation ### Curation Rationale A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label. ### Source Data #### Initial Data Collection and Normalization Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from [Swayamdipta et al. (2020)](https://aclanthology.org/2020.emnlp-main.746/) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality. #### Who are the source language producers? 
The GPT-3 Curie model generated examples, which were then revised and labeled by crowdworkers on Amazon Mechanical Turk. Workers were paid $0.12 for each example that they annotated. At the end of data collection, we aggregate the earnings and time spent from each crowdworker, and find that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target. ### Annotations #### Annotation process Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it. Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper. Crowdworkers annotated a total of 118,724 examples, with two distinct workers reviewing each example. For examples that both annotators labeled without revision, annotators achieved a Cohen's kappa score of 0.60, indicating substantial agreement. #### Who are the annotators? Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States. 300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62. ### Personal and Sensitive Information The dataset does not contain any personal information about the authors or the crowdworkers. 
## Considerations for Using the Data ### Social Impact of Dataset This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems. ### Discussion of Biases Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language. To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive. Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset. ## Additional Information ### Dataset Curators WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/). ### Citation Information ``` @misc{liu-etal-2022-wanli, title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation", author = "Liu, Alisa and Swayamdipta, Swabha and Smith, Noah A. and Choi, Yejin", month = jan, year = "2022", url = "https://arxiv.org/pdf/2201.05955", } ```
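The inter-annotator agreement reported in the annotation process above is Cohen's kappa. As a reference, it can be computed with a few lines of standard-library Python; this is a generic sketch with toy labels, not the actual WANLI annotations:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same examples."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of examples both annotators label identically
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap if each annotated independently
    # with their own label marginals
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[l] / n) * (counts_b[l] / n)
        for l in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Toy labels over the three NLI classes (placeholders, not WANLI data)
a = ["entailment", "neutral", "contradiction", "entailment", "neutral"]
b = ["entailment", "neutral", "entailment", "entailment", "contradiction"]
print(round(cohen_kappa(a, b), 3))  # 0.375
```

With these toy labels, observed agreement is 0.6 and chance agreement 0.36, giving a kappa of 0.375 — below the 0.60 the card reports for WANLI's unrevised examples.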
billray110
null
null
null
false
8
false
billray110/corpus-of-diverse-styles
2022-10-22T00:52:53.000Z
null
false
7a13ba87386bd8c9083ff858944a5f516e43f939
[]
[ "arxiv:2010.05700", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "task_categories:text-classification" ]
https://huggingface.co/datasets/billray110/corpus-of-diverse-styles/resolve/main/README.md
--- annotations_creators: [] language_creators: - found language: [] license: [] multilinguality: - monolingual pretty_name: Corpus of Diverse Styles size_categories: - 10M<n<100M source_datasets: [] task_categories: - text-classification task_ids: [] --- # Dataset Card for Corpus of Diverse Styles ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) ## Disclaimer I am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience. ## Dataset Description - **Homepage:** http://style.cs.umass.edu/ - **Repository:** https://github.com/martiansideofthemoon/style-transfer-paraphrase - **Paper:** https://arxiv.org/abs/2010.05700 ### Dataset Summary A new benchmark dataset that contains 15M sentences from 11 diverse styles. To create CDS, we obtain data from existing academic research datasets and public APIs or online collections like Project Gutenberg. We choose styles that are easy for human readers to identify at a sentence level (e.g., Tweets or Biblical text). While prior benchmarks involve a transfer between two styles, CDS has 110 potential transfer directions. ### Citation Information ``` @inproceedings{style20, author={Kalpesh Krishna and John Wieting and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation}, } ```
Kateryna
null
null
null
false
2
false
Kateryna/eva_ru_forum_headlines
2022-04-21T02:17:55.000Z
null
false
eb798af6f91a5305eb0f18aeb15378cc3c91b421
[]
[]
https://huggingface.co/datasets/Kateryna/eva_ru_forum_headlines/resolve/main/README.md
The dataset is a mix of topics from 3 forums: "Hotline", "Kids Psychology and Development", and "Everything Else". It contains the topic name (Topic), the start post (message), and the post's unique id (Message_Id).
juliensimon
null
null
null
false
2
false
juliensimon/autotrain-data-petfinder-demo
2022-04-21T08:31:01.000Z
null
false
5036e6e1a51e2dcd4f780574dfaf8f09e6e312d9
[]
[]
https://huggingface.co/datasets/juliensimon/autotrain-data-petfinder-demo/resolve/main/README.md
--- {} --- # AutoTrain Dataset for project: petfinder-demo ## Dataset Description This dataset has been automatically processed by AutoTrain for project petfinder-demo. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_Type": 2, "feat_Name": "CHLOE, NAOMI,ZOEY", "feat_Age": 8, "feat_Breed1": 266, "feat_Breed2": 266, "feat_Gender": 3, "feat_Color1": 1, "feat_Color2": 6, "feat_Color3": 0, "feat_MaturitySize": 1, "feat_FurLength": 1, "feat_Vaccinated": 1, "feat_Dewormed": 1, "feat_Sterilized": 1, "feat_Health": 1, "feat_Quantity": 4, "feat_Fee": 0, "feat_State": 41326, "feat_RescuerID": "13733222f015ec6a0017c3c0527738ff", "feat_VideoAmt": 4, "feat_Description": "\u6709\u4eba\u53ef\u4ee5\u7d66\u5b69\u5b50\u5011\u4e00\u500b\u5bb6\u55ce\uff1f \u525b\u525b\u5728Setia Walk Puchong\u6551\u7684,\u5982\u679c\u4e0d\u6551\u4ed6\u4eec\u4f1a\u53d7\u98ce\u5439\u96e8\u6253\uff0c\u665a\u4e0a\u8fd8\u8981\u89c1\u9b3c\uff08\u9152\u9b3c\uff09 \u6709\u990a\u8c93\u7d93\u9a57\u8005\u512a\u5148.. Whatsapp.. 
CHLOE (BLACK)WIT[...]", "id": "06bfadf29", "feat_PhotoAmt": 4.0, "target": 4 }, { "feat_Type": 1, "feat_Name": "BLACK & WHITE", "feat_Age": 2, "feat_Breed1": 307, "feat_Breed2": 0, "feat_Gender": 3, "feat_Color1": 1, "feat_Color2": 7, "feat_Color3": 0, "feat_MaturitySize": 2, "feat_FurLength": 1, "feat_Vaccinated": 2, "feat_Dewormed": 1, "feat_Sterilized": 3, "feat_Health": 1, "feat_Quantity": 2, "feat_Fee": 0, "feat_State": 41326, "feat_RescuerID": "90191b06d602e8d45b4ce25dd22c6a3e", "feat_VideoAmt": 0, "feat_Description": "These 2 babies were found with their eyes still closed...possibly abandoned by someone or something [...]", "id": "a89af346d", "feat_PhotoAmt": 5.0, "target": 3 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_Type": "Value(dtype='int64', id=None)", "feat_Name": "Value(dtype='string', id=None)", "feat_Age": "Value(dtype='int64', id=None)", "feat_Breed1": "Value(dtype='int64', id=None)", "feat_Breed2": "Value(dtype='int64', id=None)", "feat_Gender": "Value(dtype='int64', id=None)", "feat_Color1": "Value(dtype='int64', id=None)", "feat_Color2": "Value(dtype='int64', id=None)", "feat_Color3": "Value(dtype='int64', id=None)", "feat_MaturitySize": "Value(dtype='int64', id=None)", "feat_FurLength": "Value(dtype='int64', id=None)", "feat_Vaccinated": "Value(dtype='int64', id=None)", "feat_Dewormed": "Value(dtype='int64', id=None)", "feat_Sterilized": "Value(dtype='int64', id=None)", "feat_Health": "Value(dtype='int64', id=None)", "feat_Quantity": "Value(dtype='int64', id=None)", "feat_Fee": "Value(dtype='int64', id=None)", "feat_State": "Value(dtype='int64', id=None)", "feat_RescuerID": "Value(dtype='string', id=None)", "feat_VideoAmt": "Value(dtype='int64', id=None)", "feat_Description": "Value(dtype='string', id=None)", "id": "Value(dtype='string', id=None)", "feat_PhotoAmt": "Value(dtype='float64', id=None)", "target": "ClassLabel(num_classes=5, names=['0', '1', '2', '3', '4'], id=None)" } 
``` ### Dataset Splits This dataset is split into train and validation splits. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 10977 | | valid | 2747 |
ylacombe
null
null
null
false
3
false
ylacombe/xsum_factuality
2022-10-24T17:48:23.000Z
null
false
8fa75c33299662b0b2b7f0d6ccbe33b9df3b62e5
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-xsum", "task_categories:summarization", "task_ids:summarization-factuality-checking" ]
https://huggingface.co/datasets/ylacombe/xsum_factuality/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-xsum task_categories: - summarization task_ids: - summarization-factuality-checking paperswithcode_id: null pretty_name: XSum Factuality Checking --- # Dataset Card for XSum Hallucination Annotations ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [XSUM Hallucination Annotations Homepage](https://research.google/tools/datasets/xsum-hallucination-annotations/) - **Repository:** [XSUM Hallucination Annotations Homepage](https://github.com/google-research-datasets/xsum_hallucination_annotations) - **Paper:** [ACL Web](https://www.aclweb.org/anthology/2020.acl-main.173.pdf) - **Point of Contact:** [xsum-hallucinations-acl20@google.com](mailto:xsum-hallucinations-acl20@google.com) ### Dataset Summary This is a modified version of "xsum_factuality" dataset, focusing only on 
factuality assessment. It was designed to be a ready-to-use small factuality-checking dataset. Concretely, the modifications are: * The complete original documents (i.e. the news articles from XSUM) and gold summaries were added. "xsum_factuality" only pointed to the IDs of those documents. * The annotators' assessments were grouped in the following fashion: I took the mean of these assessments (per summary/system pair), where 0 was associated with a non-factuality judgment and 1 with a factuality judgment. ### Supported Tasks and Leaderboards * `summarization`: The dataset can be used to train a model for summarization, which consists of summarizing a given document. Success on this task is typically measured by achieving a *high* [ROUGE Score](https://huggingface.co/metrics/rouge). * `factuality_assessment`: Judging if a summary is factually aligned with the document. ### Languages The text in the dataset is in English; the summaries are abstractive summaries for the [XSum dataset](https://www.aclweb.org/anthology/D18-1206.pdf). The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances ##### Factuality annotations dataset A typical data point consists of an ID referring to the news article (complete document), the golden summary, a generated summary, and a float between 0 and 1, with 1 corresponding to a factually correct generated summary. ### Data Fields ##### Factuality annotations dataset Raters are shown the news article and the hallucinated system summary, and are tasked with assessing whether the summary is factual or not. The file contains the following columns: - `id`: Document id in the XSum corpus. - `system`: Name of neural summarizer. - `generated_summary`: Summary generated by ‘system’. - `summary`: Golden summary from the [XSum dataset](https://www.aclweb.org/anthology/D18-1206.pdf). - `label`: mean factuality assessment ### Data Splits There is only a single split. 
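The `label` aggregation described above — the mean of binary rater judgments per summary/system pair — can be sketched in a few lines of Python. The document ids and system names below are placeholders, not actual XSum entries:

```python
from collections import defaultdict

# Toy rater judgments: (document_id, system, 1 if judged factual else 0).
# Ids and system names are placeholders, not real XSum data.
judgments = [
    ("doc1", "systemA", 1),
    ("doc1", "systemA", 0),
    ("doc1", "systemA", 1),
    ("doc2", "systemB", 0),
    ("doc2", "systemB", 0),
]

grouped = defaultdict(list)
for doc_id, system, is_factual in judgments:
    grouped[(doc_id, system)].append(is_factual)

# The `label` column: mean assessment per (document, system) pair
labels = {pair: sum(votes) / len(votes) for pair, votes in grouped.items()}
print(labels)  # doc1/systemA averages 2/3, doc2/systemB averages 0.0
```

A label near 1 means most raters judged the generated summary factual; a label near 0 means most judged it hallucinated.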
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @InProceedings{maynez_acl20, author = "Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan Thomas Mcdonald", title = "On Faithfulness and Factuality in Abstractive Summarization", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", year = "2020", pages = "1906--1919", address = "Online", } ``` ### Contributions Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
cmotions
null
null
null
false
3
false
cmotions/NL_restaurant_reviews
2022-04-21T11:20:02.000Z
null
false
ef485238c1494962da9f8896bfacbcf3a0747c73
[]
[ "language:nl", "tags:text-classification", "tags:sentiment-analysis", "datasets:train", "datasets:test", "datasets:validation" ]
https://huggingface.co/datasets/cmotions/NL_restaurant_reviews/resolve/main/README.md
--- language: - nl tags: - text-classification - sentiment-analysis datasets: - train - test - validation --- ## Dataset overview This is a dataset that contains restaurant reviews gathered in 2019 using a webscraping tool in Python. Reviews on restaurant visits and restaurant features were collected for Dutch restaurants. The dataset is formatted using the 🤗[DatasetDict](https://huggingface.co/docs/datasets/index) format and contains the following indices: - train, 116693 records - test, 14587 records - validation, 14587 records The dataset holds information at both the restaurant level and the review level and contains the following features: - [restaurant_ID] > unique restaurant ID - [restaurant_review_ID] > unique review ID - [michelin_label] > indicator whether this restaurant was awarded one (or more) Michelin stars prior to 2020 - [score_total] > restaurant level total score - [score_food] > restaurant level food score - [score_service] > restaurant level service score - [score_decor] > restaurant level decor score - [fame_reviewer] > label for how often a reviewer has posted a restaurant review - [reviewscore_food] > review level food score - [reviewscore_service] > review level service score - [reviewscore_ambiance] > review level ambiance score - [reviewscore_waiting] > review level waiting score - [reviewscore_value] > review level value for money score - [reviewscore_noise] > review level noise score - [review_text] > the full review that was written by the reviewer for this restaurant - [review_length] > total length of the review (tokens) ## Purpose The restaurant reviews submitted by visitors can be used to model the restaurant scores (food, ambiance, etc.) or to model Michelin star holders. In [this blog series](https://medium.com/broadhorizon-cmotions/natural-language-processing-for-predictive-purposes-with-r-cb65f009c12b) we used the review texts to predict the next Michelin star restaurants, using R.
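The split sizes listed in the overview can be sanity-checked with a quick sketch (the counts are taken from this card; the 80/10/10 reading is an inference from them):

```python
# Split sizes as stated in the dataset card
splits = {"train": 116693, "test": 14587, "validation": 14587}
total = sum(splits.values())  # 145867 records in total

# Each split's share of the full dataset
for name, n in splits.items():
    print(f"{name}: {n} records ({n / total:.1%})")
```

The shares come out to roughly 80% train, 10% test, and 10% validation.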
AntoineLB
null
null
null
false
2
false
AntoineLB/FrozenLakeNotFrozen
2022-04-26T07:40:20.000Z
null
false
ec205ab74f5244e1cf50c06c200832cd50493546
[]
[]
https://huggingface.co/datasets/AntoineLB/FrozenLakeNotFrozen/resolve/main/README.md
# Dataset Card for [FrozenLake-v1] with slippery = False
mwong
null
null
null
false
3
false
mwong/climatetext-climate_evidence-claim-related-evaluation
2022-10-25T10:08:48.000Z
null
false
d96c3ca050b694c3150bb53e6c6431f2144ce15a
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "task_categories:text-classification", "task_ids:fact-checking" ]
https://huggingface.co/datasets/mwong/climatetext-climate_evidence-claim-related-evaluation/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate. The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the claim is related to the evidence.
mwong
null
null
null
false
2
false
mwong/climatetext-claim-climate_evidence-related-evaluation
2022-10-25T10:08:50.000Z
null
false
54b4fc98b56081e4ed5bfe6f76f68c8f52d4fc98
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "task_categories:text-classification", "task_ids:fact-checking" ]
https://huggingface.co/datasets/mwong/climatetext-claim-climate_evidence-related-evaluation/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate. The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the evidence is related to the claim.
mwong
null
null
null
false
3
false
mwong/climatetext-evidence-claim-pair-related-evaluation
2022-10-25T10:08:53.000Z
null
false
4f0fab91e806940ab0e95f573193eb79f5052c70
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "task_categories:text-classification", "task_ids:fact-checking" ]
https://huggingface.co/datasets/mwong/climatetext-evidence-claim-pair-related-evaluation/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate. The evaluation objective is a text classification task: given climate-related evidence and a claim, predict whether the pair is related.
mwong
null
null
null
false
3
false
mwong/climatetext-claim-evidence-pair-related-evaluation
2022-10-25T10:08:55.000Z
null
false
0961ace6703a76cb598eb4fcdb7f92227aa3c4b3
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "task_categories:text-classification", "task_ids:fact-checking" ]
https://huggingface.co/datasets/mwong/climatetext-claim-evidence-pair-related-evaluation/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate. The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the pair is related.
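The four ClimaText evaluation sets above all share the same pair-classification framing, with claim and evidence in one order or the other. A minimal, hypothetical sketch of joining such a pair into a single classifier input; the `[SEP]` separator mimics BERT-style pair inputs and is an assumption, not something these datasets prescribe:

```python
# Hypothetical formatting helper; the separator token is an assumption.
def make_pair_input(text_a: str, text_b: str, sep: str = " [SEP] ") -> str:
    """Join a (claim, evidence) or (evidence, claim) pair for a text classifier."""
    return text_a + sep + text_b

claim = "Global temperatures are rising."
evidence = "The last decade was the warmest on record."
print(make_pair_input(claim, evidence))
```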
mweiss
null
@inproceedings{Weiss2022SimpleTechniques, title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning}, author={Weiss, Michael and Tonella, Paolo}, booktitle={Proceedings of the 31th ACM SIGSOFT International Symposium on Software Testing and Analysis}, year={2022} }
Fashion-MNIST is a dataset of fashion images, intended as a drop-in replacement for the MNIST dataset. This dataset (Fashion-Mnist-Corrupted) provides out-of-distribution data for the Fashion-MNIST dataset. Fashion-Mnist-Corrupted is based on a similar project for MNIST, called MNIST-C, by Mu et al.
false
3
false
mweiss/fashion_mnist_corrupted
2022-07-02T09:57:57.000Z
null
false
f0f97726ff1c5fcabaf48a458818dcde3967d728
[]
[ "arxiv:1906.02337", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machine-generated", "language:en", "license:mit", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|fashion_mnist", "task_categories:image-...
https://huggingface.co/datasets/mweiss/fashion_mnist_corrupted/resolve/main/README.md
--- annotations_creators: - expert-generated - machine-generated language_creators: - machine-generated language: - en license: - mit multilinguality: - monolingual pretty_name: fashion-mnist-corrupted size_categories: - 10K<n<100K source_datasets: - extended|fashion_mnist task_categories: - image-classification task_ids: [] --- # Fashion-Mnist-C (Corrupted Fashion-Mnist) A corrupted Fashion-MNIST benchmark for testing out-of-distribution robustness of computer vision models that were trained on Fashion-MNIST. [Fashion-Mnist](https://github.com/zalandoresearch/fashion-mnist) is a drop-in replacement for MNIST and Fashion-Mnist-C is a corresponding drop-in replacement for [MNIST-C](https://arxiv.org/abs/1906.02337). ## Corruptions The following corruptions are applied to the images, equivalent to those in MNIST-C: - **Noise** (shot noise and impulse noise) - **Blur** (glass and motion blur) - **Transformations** (shear, scale, rotate, brightness, contrast, saturate, inverse) In addition, we apply various **image flippings and turnings**: for fashion images, flipping the image does not change its label, and still keeps it a valid image. However, we noticed that in the nominal fmnist dataset, most images are identically oriented (e.g. most shoes point to the left side). Thus, flipped images provide valid OOD inputs. Most corruptions are applied at a randomly selected level of *severity*, s.t. some corrupted images are really hard to classify whereas for others the corruption, while present, is subtle.
## Examples | Turned | Blurred | Rotated | Noise | Noise | Turned | | ------------- | ------------- | --------| --------- | -------- | --------- | | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_0.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_1.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_6.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_3.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_4.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_5.png" width="100" height="100"> | ## Citation If you use this dataset, please cite the following paper: ``` @inproceedings{Weiss2022SimpleTechniques, title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning}, author={Weiss, Michael and Tonella, Paolo}, booktitle={Proceedings of the 31th ACM SIGSOFT International Symposium on Software Testing and Analysis}, year={2022} } ``` Also, you may want to cite FMNIST and MNIST-C. ## Credits - Fashion-Mnist-C is inspired by Googles MNIST-C and our repository is essentially a clone of theirs. See their [paper](https://arxiv.org/abs/1906.02337) and [repo](https://github.com/google-research/mnist-c). - Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset [here](https://github.com/zalandoresearch/fashion-mnist).
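The severity-scaled corruptions described above can be illustrated with a minimal, hypothetical sketch. The actual corruption code lives in the linked repository; this only shows the idea for the simplest listed transformation, "inverse", applied to flat lists of 0-255 grayscale pixel values:

```python
import random

def invert(pixels):
    """The 'inverse' transformation: flip each 0-255 grayscale value."""
    return [255 - p for p in pixels]

def corrupt(pixels, severity=None):
    """Blend an image toward its inverse at a (possibly random) severity.

    severity=0.0 leaves the image untouched (the subtle end of the scale);
    severity=1.0 yields the full inverse (the hard-to-classify end).
    """
    if severity is None:
        # Mirrors the card's note that severity is randomly selected.
        severity = random.uniform(0.2, 1.0)
    return [round(p + severity * ((255 - p) - p)) for p in pixels]

print(invert([0, 128, 255]))            # → [255, 127, 0]
print(corrupt([0, 255], severity=1.0))  # → [255, 0]
```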
null
null
@inproceedings{krishnavisualgenome, title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations}, author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and Bernstein, Michael and Fei-Fei, Li}, year = {2016}, url = {https://arxiv.org/abs/1602.07332}, }
Visual Genome enables modeling of objects and the relationships between them. It collects dense annotations of objects, attributes, and relationships within each image. Specifically, the dataset contains over 108K images, where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects.
false
79
false
visual_genome
2022-11-03T15:51:00.000Z
visual-genome
false
b048e99d443ad70499a0a9ffbaf90e5d1dd36cb7
[]
[ "arxiv:1602.07332", "annotations_creators:found", "language_creators:found", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:image-to-text", "task_categories:object-detection", "task_categories:visual-qu...
https://huggingface.co/datasets/visual_genome/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - image-to-text - object-detection - visual-question-answering task_ids: - image-captioning paperswithcode_id: visual-genome pretty_name: VisualGenome configs: - objects - question_answers - region_descriptions dataset_info: features: - name: image dtype: image - name: image_id dtype: int32 - name: url dtype: string - name: width dtype: int32 - name: height dtype: int32 - name: coco_id dtype: int64 - name: flickr_id dtype: int64 - name: regions list: - name: region_id dtype: int32 - name: image_id dtype: int32 - name: phrase dtype: string - name: x dtype: int32 - name: y dtype: int32 - name: width dtype: int32 - name: height dtype: int32 config_name: region_descriptions_v1.0.0 splits: - name: train num_bytes: 260873884 num_examples: 108077 download_size: 15304605295 dataset_size: 260873884 --- # Dataset Card for Visual Genome ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset 
Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://visualgenome.org/ - **Repository:** - **Paper:** https://visualgenome.org/static/paper/Visual_Genome.pdf - **Leaderboard:** - **Point of Contact:** ranjaykrishna [at] gmail [dot] com ### Dataset Summary Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language. From the paper: > Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that “the person is riding a horse-drawn carriage.” Visual Genome has: - 108,077 images - 5.4 Million Region Descriptions - 1.7 Million Visual Question Answers - 3.8 Million Object Instances - 2.8 Million Attributes - 2.3 Million Relationships From the paper: > Our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question answer pairs to WordNet synsets. ### Dataset Preprocessing ### Supported Tasks and Leaderboards ### Languages All annotations use English as their primary language.
## Dataset Structure ### Data Instances When loading a specific configuration, users have to append a version-dependent suffix: ```python from datasets import load_dataset load_dataset("visual_genome", "region_description_v1.2.0") ``` #### region_descriptions An example looks as follows. ``` { "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>, "image_id": 1, "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg", "width": 800, "height": 600, "coco_id": null, "flickr_id": null, "regions": [ { "region_id": 1382, "image_id": 1, "phrase": "the clock is green in colour", "x": 421, "y": 57, "width": 82, "height": 139 }, ... ] } ``` #### objects An example looks as follows. ``` { "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>, "image_id": 1, "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg", "width": 800, "height": 600, "coco_id": null, "flickr_id": null, "objects": [ { "object_id": 1058498, "x": 421, "y": 91, "w": 79, "h": 339, "names": [ "clock" ], "synsets": [ "clock.n.01" ] }, ... ] } ``` #### attributes An example looks as follows. ``` { "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>, "image_id": 1, "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg", "width": 800, "height": 600, "coco_id": null, "flickr_id": null, "attributes": [ { "object_id": 1058498, "x": 421, "y": 91, "w": 79, "h": 339, "names": [ "clock" ], "synsets": [ "clock.n.01" ], "attributes": [ "green", "tall" ] }, ... } ] ``` #### relationships An example looks as follows.
``` { "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>, "image_id": 1, "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg", "width": 800, "height": 600, "coco_id": null, "flickr_id": null, "relationships": [ { "relationship_id": 15927, "predicate": "ON", "synsets": "['along.r.01']", "subject": { "object_id": 5045, "x": 119, "y": 338, "w": 274, "h": 192, "names": [ "shade" ], "synsets": [ "shade.n.01" ] }, "object": { "object_id": 5046, "x": 77, "y": 328, "w": 714, "h": 262, "names": [ "street" ], "synsets": [ "street.n.01" ] } } ... } ] ``` #### question_answers An example of looks as follows. ``` { "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>, "image_id": 1, "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg", "width": 800, "height": 600, "coco_id": null, "flickr_id": null, "qas": [ { "qa_id": 986768, "image_id": 1, "question": "What color is the clock?", "answer": "Green.", "a_objects": [], "q_objects": [] }, ... } ] ``` ### Data Fields When loading a specific configuration, users has to append a version dependent suffix: ```python from datasets import load_dataset load_dataset("visual_genome", "region_description_v1.2.0") ``` #### region_descriptions - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `image_id`: Unique numeric ID of the image. - `url`: URL of source image. - `width`: Image width. - `height`: Image height. - `coco_id`: Id mapping to MSCOCO indexing. - `flickr_id`: Id mapping to Flicker indexing. 
- `regions`: Holds a list of `Region` dataclasses: - `region_id`: Unique numeric ID of the region. - `image_id`: Unique numeric ID of the image. - `x`: x coordinate of bounding box's top left corner. - `y`: y coordinate of bounding box's top left corner. - `width`: Bounding box width. - `height`: Bounding box height. #### objects - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `image_id`: Unique numeric ID of the image. - `url`: URL of source image. - `width`: Image width. - `height`: Image height. - `coco_id`: Id mapping to MSCOCO indexing. - `flickr_id`: Id mapping to Flickr indexing. - `objects`: Holds a list of `Object` dataclasses: - `object_id`: Unique numeric ID of the object. - `x`: x coordinate of bounding box's top left corner. - `y`: y coordinate of bounding box's top left corner. - `w`: Bounding box width. - `h`: Bounding box height. - `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg - `synsets`: List of `WordNet synsets`. #### attributes - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `image_id`: Unique numeric ID of the image.
- `url`: URL of source image. - `width`: Image width. - `height`: Image height. - `coco_id`: Id mapping to MSCOCO indexing. - `flickr_id`: Id mapping to Flickr indexing. - `attributes`: Holds a list of `Object` dataclasses: - `object_id`: Unique numeric ID of the object. - `x`: x coordinate of bounding box's top left corner. - `y`: y coordinate of bounding box's top left corner. - `w`: Bounding box width. - `h`: Bounding box height. - `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg - `synsets`: List of `WordNet synsets`. - `attributes`: List of attributes associated with the object. #### relationships - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `image_id`: Unique numeric ID of the image. - `url`: URL of source image. - `width`: Image width. - `height`: Image height. - `coco_id`: Id mapping to MSCOCO indexing. - `flickr_id`: Id mapping to Flickr indexing. - `relationships`: Holds a list of `Relationship` dataclasses: - `relationship_id`: Unique numeric ID of the relationship. - `predicate`: Predicate defining relationship between a subject and an object. - `synsets`: List of `WordNet synsets`. - `subject`: Object dataclass. See subsection on `objects`. - `object`: Object dataclass. See subsection on `objects`. #### question_answers - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded.
Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `image_id`: Unique numeric ID of the image. - `url`: URL of source image. - `width`: Image width. - `height`: Image height. - `coco_id`: Id mapping to MSCOCO indexing. - `flickr_id`: Id mapping to Flickr indexing. - `qas`: Holds a list of `Question-Answering` dataclasses: - `qa_id`: Unique numeric ID of the question-answer pair. - `image_id`: Unique numeric ID of the image. - `question`: Question. - `answer`: Answer. - `q_objects`: List of object dataclass associated with `question` field. See subsection on `objects`. - `a_objects`: List of object dataclass associated with `answer` field. See subsection on `objects`. ### Data Splits All the data is contained in training set. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? From the paper: > We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33, 000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs. Each HIT was designed such that workers manage to earn anywhere between $6-$8 per hour if they work continuously, in line with ethical research standards on Mechanical Turk (Salehi et al., 2015). Visual Genome HITs achieved a 94.1% retention rate, meaning that 94.1% of workers who completed one of our tasks went ahead to do more. [...]
93.02% of workers contributed from the United States. The majority of our workers were between the ages of 25 and 34 years old. Our youngest contributor was 18 years and the oldest was 68 years old. We also had a near-balanced split of 54.15% male and 45.85% female workers. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Visual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License. ### Citation Information ```bibtex @inproceedings{krishnavisualgenome, title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations}, author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and Bernstein, Michael and Fei-Fei, Li}, year = {2016}, url = {https://arxiv.org/abs/1602.07332}, } ``` ### Contributions Due to limitations of the dummy_data creation, we provide a `fix_generated_dummy_data.py` script that fixes the dataset in-place. Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
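The region fields documented in the card (`x`, `y`, `width`, `height`) describe a bounding box by its top-left corner. A small illustrative sketch of converting that to corner coordinates, using the example region from the card; `region_to_corners` is a hypothetical helper, not part of the dataset loader:

```python
# Hypothetical helper for the (x, y, width, height) box convention
# used by Visual Genome region annotations.
def region_to_corners(region):
    """Return (x1, y1, x2, y2) corner coordinates for a region dict."""
    x, y = region["x"], region["y"]
    return (x, y, x + region["width"], y + region["height"])

# The example region from the card ("the clock is green in colour"):
region = {"region_id": 1382, "x": 421, "y": 57, "width": 82, "height": 139}
print(region_to_corners(region))  # → (421, 57, 503, 196)
```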
tsantosh7
null
null
null
false
2
false
tsantosh7/COVID-19_Annotations
2022-04-21T14:03:06.000Z
null
false
b31afad97a9fada96522cc2f5b080338d4a3f7cd
[]
[ "license:cc" ]
https://huggingface.co/datasets/tsantosh7/COVID-19_Annotations/resolve/main/README.md
--- license: cc --- Named Entity Recognition for COVID-19 Bio Entities The dataset was taken from https://github.com/davidcampos/covid19-corpus Dataset The dataset was then split into several datasets, each representing one entity. Namely, Disorder, Species, Chemical or Drug, Gene and Protein, Enzyme, Anatomy, Biological Process, Molecular Function, Cellular Component, Pathway and microRNA. Moreover, another dataset is also created from all those aforementioned entities that are non-overlapping in nature. Dataset Formats The datasets are available in two formats: IOB and SpaCy's JSONL format. IOB : https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/BIO SpaCy JSONL: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/SpaCy
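The IOB format mentioned above tags each token with a B- (begin), I- (inside), or O (outside) label. A minimal, hypothetical sketch of collecting such tags into entity spans; the example tokens and entity types are illustrative only, not taken from the dataset:

```python
# Illustrative IOB-to-span collector; not part of the linked repositories.
def iob_to_spans(tokens, tags):
    """Group IOB-tagged tokens into (entity text, entity type) spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = ([tok], tag[2:])
        elif tag.startswith("I-") and current:
            current[0].append(tok)
        else:  # "O" tag, or a stray "I-" with no open entity
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(" ".join(toks), typ) for toks, typ in spans]

print(iob_to_spans(["SARS-CoV-2", "infects", "lung", "cells"],
                   ["B-Species", "O", "B-Anatomy", "I-Anatomy"]))
# → [('SARS-CoV-2', 'Species'), ('lung cells', 'Anatomy')]
```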
null
null
@inproceedings{harley2015icdar, title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval}, author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis}, booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})}}, year = {2015} }
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images.
false
969
false
rvl_cdip
2022-11-03T16:31:22.000Z
rvl-cdip
false
6cecec2310858601b96cfec27931cf8b408efe5c
[]
[ "arxiv:1502.07058", "annotations_creators:found", "language_creators:found", "language:en", "license:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|iit_cdip", "task_categories:image-classification", "task_ids:multi-class-image-classification" ]
https://huggingface.co/datasets/rvl_cdip/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|iit_cdip task_categories: - image-classification task_ids: - multi-class-image-classification paperswithcode_id: rvl-cdip pretty_name: RVL-CDIP dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: 0: letter 1: form 2: email 3: handwritten 4: advertisement 5: scientific report 6: scientific publication 7: specification 8: file folder 9: news article 10: budget 11: invoice 12: presentation 13: questionnaire 14: resume 15: memo splits: - name: test num_bytes: 4863300853 num_examples: 40000 - name: train num_bytes: 38816373360 num_examples: 320000 - name: validation num_bytes: 4868685208 num_examples: 40000 download_size: 38779484559 dataset_size: 48548359421 --- # Dataset Card for RVL-CDIP ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/) - 
**Repository:** - **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058) - **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip) - **Point of Contact:** [Adam W. Harley](mailto:aharley@cmu.edu) ### Dataset Summary The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip). ### Languages All the classes and documents use English as their primary language. ## Dataset Structure ### Data Instances A sample from the training set is provided below : ``` { 'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>, 'label': 15 } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing a document. - `label`: an `int` classification label. <details> <summary>Class Label Mappings</summary> ```json { "0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo" } ``` </details> ### Data Splits | |train|test|validation| |----------|----:|----:|---------:| |# of examples|320000|40000|40000| The dataset was split in proportions similar to those of ImageNet. 
- 320000 images were used for training, - 40000 images for validation, and - 40000 images for testing. ## Dataset Creation ### Curation Rationale From the paper: > This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories, useful for training new CNNs for document analysis. ### Source Data #### Initial Data Collection and Normalization The same as in the IIT-CDIP collection. #### Who are the source language producers? The same as in the IIT-CDIP collection. ### Annotations #### Annotation process The same as in the IIT-CDIP collection. #### Who are the annotators? The same as in the IIT-CDIP collection. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis. ### Licensing Information RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/). ### Citation Information ```bibtex @inproceedings{harley2015icdar, title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval}, author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis}, booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})}}, year = {2015} } ``` ### Contributions Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset.
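The integer labels follow the 16-class mapping shown in the card; a small lookup helper (illustrative, not part of the dataset loader):

```python
# The 16-class label mapping from the RVL-CDIP dataset card.
RVL_CDIP_CLASSES = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def label_name(label: int) -> str:
    """Map an integer class label to its document-type name."""
    return RVL_CDIP_CLASSES[label]

# The sample instance in the card has label 15:
print(label_name(15))  # → memo
```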
Goud
null
null
null
false
11
false
Goud/Goud-sum
2022-07-04T16:02:36.000Z
null
false
36076b03a64c3dc168fa7222da61de07b6eac67e
[]
[ "annotations_creators:no-annotation", "language_creators:machine-generated", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:summarization", "task_ids:news-articles-headline-generation" ]
https://huggingface.co/datasets/Goud/Goud-sum/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - machine-generated language: [] license: [] multilinguality: [] pretty_name: Goud-sum size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization task_ids: - news-articles-headline-generation --- # Dataset Card for Goud summarization dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[Needs More Information] - **Repository:**[Needs More Information] - **Paper:**[Goud.ma: a News Article Dataset for Summarization in Moroccan Darija](https://openreview.net/forum?id=BMVq5MELb9) - **Leaderboard:**[Needs More Information] - **Point of Contact:**[Needs More Information] ### Dataset Summary Goud-sum contains 158k articles and their headlines extracted from [Goud.ma](https://www.goud.ma/) news website. The articles are written in the Arabic script. 
All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija). ### Supported Tasks and Leaderboards Text Summarization ### Languages * Moroccan Arabic (Darija) * Modern Standard Arabic ## Dataset Structure ### Data Instances The dataset consists of article-headline pairs in string format. ### Data Fields * article: a string containing the body of the news article * headline: a string containing the article's headline * categories: a list of strings of article categories ### Data Splits Goud-sum dataset has 3 splits: _train_, _validation_, and _test_. Below are the numbers of instances in each split. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 139,288 | | Validation | 9,497 | | Test | 9,497 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The text was written by journalists at [Goud](https://www.goud.ma/). ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators?
[N/A] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{issam2022goudma, title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija}, author={Abderrahmane Issam and Khalil Mrini}, booktitle={3rd Workshop on African Natural Language Processing}, year={2022}, url={https://openreview.net/forum?id=BMVq5MELb9} } ``` ### Contributions Thanks to [@issam9](https://github.com/issam9) and [@KhalilMrini](https://github.com/KhalilMrini) for adding this dataset.
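As a quick sanity check, the Goud-sum split sizes in the card's table sum to the ~158k articles quoted in its summary; a minimal sketch:

```python
# Split sizes from the Goud-sum dataset card.
splits = {"train": 139_288, "validation": 9_497, "test": 9_497}

total = sum(splits.values())
print(total)  # 158282, i.e. the ~158k articles quoted in the summary
```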
eleldar
null
null
null
false
1
false
eleldar/sub_train-normal_tests-datasets
2022-06-16T11:19:47.000Z
null
false
24eab2c29829f2672c4a9516f0d7aa750b88ba61
[]
[]
https://huggingface.co/datasets/eleldar/sub_train-normal_tests-datasets/resolve/main/README.md
Dataset for API: https://github.com/eleldar/Translation Test English-Russian dataset: ``` DatasetDict({ normal: Dataset({ features: ['en', 'ru'], num_rows: 2009 }) short: Dataset({ features: ['en', 'ru'], num_rows: 2664 }) train: Dataset({ features: ['en', 'ru'], num_rows: 1660 }) validation: Dataset({ features: ['en', 'ru'], num_rows: 208 }) test: Dataset({ features: ['en', 'ru'], num_rows: 4170 }) }) ``` The dataset was built from the following tables: * https://github.com/eleldar/Translator/blob/master/test_dataset/flores101_dataset/101_languages.xlsx?raw=true * https://github.com/eleldar/Translator/blob/master/test_dataset/normal.xlsx?raw=true * https://github.com/eleldar/Translator/blob/master/test_dataset/corrected_vocab.xlsx?raw=true
GEM-submissions
null
null
null
false
3
false
GEM-submissions/ratishsp__seqplan-sportsett__1650556902
2022-04-21T16:01:45.000Z
null
false
1f2761557622d85a47d719882e5e8654f2c4dec1
[]
[ "benchmark:gem", "type:prediction", "submission_name:SeqPlan-SportSett", "tags:evaluation", "tags:benchmark" ]
https://huggingface.co/datasets/GEM-submissions/ratishsp__seqplan-sportsett__1650556902/resolve/main/README.md
--- benchmark: gem type: prediction submission_name: SeqPlan-SportSett tags: - evaluation - benchmark --- # GEM Submission Submission name: SeqPlan-SportSett
pietrolesci
null
null
null
false
707
false
pietrolesci/glue_diagnostics
2022-04-21T16:51:56.000Z
null
false
1f2a598128b862851ba63f35a9d7c277c005e2d7
[]
[]
https://huggingface.co/datasets/pietrolesci/glue_diagnostics/resolve/main/README.md
## Overview Original dataset available [here](https://gluebenchmark.com/diagnostics). ## Dataset curation Filled in the empty rows of columns "lexical semantics", "predicate-argument structure", "logic", "knowledge" with empty string `""`. Labels are encoded as follows ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` ## Code to create dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset df = pd.read_csv("<path to file>/diagnostic-full.tsv", sep="\t") # column names to lower df.columns = df.columns.str.lower() # fill na assert df["label"].isna().sum() == 0 df = df.fillna("") # encode labels df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "lexical semantics": Value(dtype="string", id=None), "predicate-argument structure": Value(dtype="string", id=None), "logic": Value(dtype="string", id=None), "knowledge": Value(dtype="string", id=None), "domain": Value(dtype="string", id=None), "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), }) dataset = Dataset.from_pandas(df, features=features) dataset.push_to_hub("glue_diagnostics", token="<token>", split="test") ```
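The label encoding the glue_diagnostics card applies with `pandas.Series.map` can also be expressed with the standard library alone; a minimal sketch, where `encode_labels` is a hypothetical helper name:

```python
# Label encoding used by the card: {"entailment": 0, "neutral": 1, "contradiction": 2}.
LABEL2ID = {"entailment": 0, "neutral": 1, "contradiction": 2}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode_labels(labels):
    """Map string NLI labels to their integer codes (hypothetical helper)."""
    return [LABEL2ID[label] for label in labels]

print(encode_labels(["entailment", "contradiction", "neutral"]))  # [0, 2, 1]
```

`ID2LABEL` gives the reverse mapping, which is useful when reading predictions back out as strings.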
patrickvonplaten
null
@inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} }
LibriSpeech is a corpus of approximately 1000 hours of read English speech, sampled at 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
false
2
false
patrickvonplaten/librispeech_asr_self_contained
2022-10-24T17:48:37.000Z
librispeech-1
false
bb68655c6b6f1431cdf2b90239cbf2fb5e52f3cd
[]
[ "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:automatic-speech-recognition", "task_categor...
https://huggingface.co/datasets/patrickvonplaten/librispeech_asr_self_contained/resolve/main/README.md
--- pretty_name: LibriSpeech annotations_creators: - expert-generated language_creators: - crowdsourced - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual paperswithcode_id: librispeech-1 size_categories: - 100K<n<1M source_datasets: - original task_categories: - automatic-speech-recognition - audio-classification task_ids: - audio-speaker-identification --- # Dataset Card for librispeech_asr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12) - **Repository:** [Needs More Information] - **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) - **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-other) - **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com) ### Dataset Summary LibriSpeech is a corpus of approximately 1000 hours of 16kHz read 
English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. ### Supported Tasks and Leaderboards - `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean and ranks models based on their WER. ### Languages The audio is in English. There are two configurations: `clean` and `other`. The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on a different dataset, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other". ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided. 
``` {'chapter_id': 141231, 'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'id': '1272-141231-0000', 'speaker_id': 1272, 'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'} ``` ### Data Fields - file: A path to the downloaded audio file in .flac format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - text: the transcription of the audio file. - id: unique id of the data sample. - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples. - chapter_id: id of the audiobook chapter which includes the transcription. ### Data Splits The size of the corpus makes it impractical, or at least inconvenient for some users, to distribute it as a single large archive. Thus the training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively. A simple automatic procedure was used to select the audio in the first two sets to be, on average, of higher recording quality and with accents closer to US English. 
An acoustic model was trained on WSJ’s si-84 data subset and was used to recognize the audio in the corpus, using a bigram LM estimated on the text of the respective books. We computed the Word Error Rate (WER) of this automatic transcript relative to our reference transcripts obtained from the book texts. The speakers in the corpus were ranked according to the WER of the WSJ model’s transcripts, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other". For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360 respectively accounting for 100h and 360h of the training data. For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech. | | Train.500 | Train.360 | Train.100 | Valid | Test | | ----- | ------ | ----- | ---- | ---- | ---- | | clean | - | 104014 | 28539 | 2703 | 2620| | other | 148688 | - | - | 2864 | 2939 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 
### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
BritishLibraryLabs
null
TODO
The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.
false
5
false
BritishLibraryLabs/digitisedbookimages
2022-10-25T14:34:03.000Z
null
true
07e16b87691219403626bf9de8fc661fdf8289f3
[]
[ "language_creators:machine-generated", "license:cc0-1.0", "source_datasets:original", "task_categories:image-classification" ]
https://huggingface.co/datasets/BritishLibraryLabs/digitisedbookimages/resolve/main/README.md
adithya7
null
@article{pratapa-etal-2022-multilingual, title = {Multilingual Event Linking to Wikidata}, author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko}, publisher = {arXiv}, year = {2022}, url = {https://arxiv.org/abs/2204.06535}, }
XLEL-WD is a multilingual event linking dataset. This sub-dataset contains a dictionary of events from Wikidata. The multilingual descriptions for Wikidata event items are taken from the corresponding Wikipedia articles.
false
16
false
adithya7/xlel_wd_dictionary
2022-07-01T17:30:21.000Z
null
false
996e72dea151ca0856d1d16efd71f560b18da817
[]
[ "arxiv:2204.06535", "annotations_creators:found", "language_creators:found", "language:af", "language:ar", "language:be", "language:bg", "language:bn", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:fa", "language:fi...
https://huggingface.co/datasets/adithya7/xlel_wd_dictionary/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - af - ar - be - bg - bn - ca - cs - da - de - el - en - es - fa - fi - fr - he - hi - hu - id - it - ja - ko - ml - mr - ms - nl - 'no' - pl - pt - ro - ru - si - sk - sl - sr - sv - sw - ta - te - th - tr - uk - vi - zh license: - cc-by-4.0 multilinguality: - multilingual pretty_name: XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles. size_categories: - 10K<n<100K source_datasets: - original task_categories: [] task_ids: [] --- # Dataset Card for XLEL-WD-Dictionary ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** <https://github.com/adithya7/xlel-wd> - **Repository:** <https://github.com/adithya7/xlel-wd> - **Paper:** <https://arxiv.org/abs/2204.06535> - **Leaderboard:** N/A - 
**Point of Contact:** Adithya Pratapa ### Dataset Summary XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles. ### Supported Tasks and Leaderboards This dictionary can be used as a part of the event linking task. ### Languages This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper. | Language | Code | Language | Code | Language | Code | Language | Code | | -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- | | Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg | | Bengali | bn | Catalan | ca | Czech | cs | Danish | da | | German | de | Greek | el | English | en | Spanish | es | | Persian | fa | Finnish | fi | French | fr | Hebrew | he | | Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it | | Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr | | Malay | ms | Dutch | nl | Norwegian | no | Polish | pl | | Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si | | Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv | | Swahili | sw | Tamil | ta | Telugu | te | Thai | th | | Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh | ## Dataset Structure ### Data Instances Each instance in the `label_dict.jsonl` file follows the below template, ```json { "label_id": "830917", "label_title": "2010 European Aquatics Championships", "label_desc": "The 2010 European Aquatics Championships were held from 4–15 August 2010 in Budapest and Balatonfüred, Hungary. It was the fourth time that the city of Budapest hosts this event after 1926, 1958 and 2006. 
Events in swimming, diving, synchronised swimming (synchro) and open water swimming were scheduled.", "label_lang": "en" } ``` ### Data Fields | Field | Meaning | | ----- | ------- | | `label_id` | Wikidata ID | | `label_title` | Title for the event, as collected from the corresponding Wikipedia article | | `label_desc` | Description for the event, as collected from the corresponding Wikipedia article | | `label_lang` | language used for the title and description | ### Data Splits This dictionary has a single split, `dictionary`. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles. ## Dataset Creation ### Curation Rationale This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it's unclear if the same methodologies can be extended for linking mentions to events from a KB. Event items are collected from Wikidata. ### Source Data #### Initial Data Collection and Normalization A Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control. #### Who are the source language producers? The titles and descriptions for the events are written by Wikipedia contributors. ### Annotations #### Annotation process This dataset was automatically compiled from Wikidata. It was post-processed to improve data quality. #### Who are the annotators? Wikidata and Wikipedia contributors. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations This dictionary primarily contains eventive nouns from Wikidata.
It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), war (Q198), etc. ## Additional Information ### Dataset Curators The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd). ### Licensing Information XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ```bib @article{pratapa-etal-2022-multilingual, title = {Multilingual Event Linking to Wikidata}, author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko}, publisher = {arXiv}, year = {2022}, url = {https://arxiv.org/abs/2204.06535}, } ``` ### Contributions Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
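Each entry in the card's `label_dict.jsonl` is one JSON object per line, so it can be read with the standard `json` module; a minimal sketch using a truncated version of the card's own example (the real `label_desc` is the full Wikipedia-derived description):

```python
import json

# One (truncated) line in the label_dict.jsonl format shown in the card.
line = ('{"label_id": "830917", '
        '"label_title": "2010 European Aquatics Championships", '
        '"label_desc": "The 2010 European Aquatics Championships were held ...", '
        '"label_lang": "en"}')

record = json.loads(line)
print(record["label_id"], record["label_lang"])  # 830917 en
```

Iterating over a file with one `json.loads` per line is enough to load the whole dictionary split.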
adithya7
null
@article{pratapa-etal-2022-multilingual, title = {Multilingual Event Linking to Wikidata}, author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko}, publisher = {arXiv}, year = {2022}, url = {https://arxiv.org/abs/2204.06535}, }
XLEL-WD is a multilingual event linking dataset. This dataset contains mention references from multilingual Wikipedia/Wikinews articles to event items in Wikidata. The text descriptions for Wikidata events are compiled from Wikipedia articles.
false
18
false
adithya7/xlel_wd
2022-07-13T07:46:57.000Z
null
false
a6d542d37b24cc1f2536af5e4afb850b9641e3ff
[]
[ "arxiv:2204.06535", "annotations_creators:found", "language_creators:found", "language:af", "language:ar", "language:be", "language:bg", "language:bn", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:fa", "language:fi...
https://huggingface.co/datasets/adithya7/xlel_wd/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - af - ar - be - bg - bn - ca - cs - da - de - el - en - es - fa - fi - fr - he - hi - hu - id - it - ja - ko - ml - mr - ms - nl - 'no' - pl - pt - ro - ru - si - sk - sl - sr - sv - sw - ta - te - th - tr - uk - vi - zh license: - cc-by-4.0 multilinguality: - multilingual pretty_name: XLEL-WD is a multilingual event linking dataset. This dataset contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding Wikipedia articles. size_categories: - 1M<n<10M source_datasets: - original task_categories: [] task_ids: [] --- # Dataset Card for XLEL-WD ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** <https://github.com/adithya7/xlel-wd> - **Repository:** <https://github.com/adithya7/xlel-wd> - **Paper:** <https://arxiv.org/abs/2204.06535> - **Leaderboard:** 
N/A - **Point of Contact:** Adithya Pratapa ### Dataset Summary XLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata. The descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from [adithya7/xlel_wd_dictionary](https://huggingface.co/datasets/adithya7/xlel_wd_dictionary). ### Supported Tasks and Leaderboards This dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual. - Multilingual linking: mention and the event descriptions are in the same language. - Crosslingual linking: the event descriptions are only available in English. ### Languages This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper. | Language | Code | Language | Code | Language | Code | Language | Code | | -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- | | Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg | | Bengali | bn | Catalan | ca | Czech | cs | Danish | da | | German | de | Greek | el | English | en | Spanish | es | | Persian | fa | Finnish | fi | French | fr | Hebrew | he | | Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it | | Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr | | Malay | ms | Dutch | nl | Norwegian | no | Polish | pl | | Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si | | Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv | | Swahili | sw | Tamil | ta | Telugu | te | Thai | th | | Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh | ## Dataset Structure ### Data Instances Each instance in the `train.jsonl`, `dev.jsonl` and `test.jsonl` files follow the below template. 
```json { "context_left": "Minibaev's first major international medal came in the men's synchronized 10 metre platform event at the ", "mention": "2010 European Championships", "context_right": ".", "context_lang": "en", "label_id": "830917" } ``` ### Data Fields | Field | Meaning | | ----- | ------- | | `mention` | text span of the mention | | `context_left` | left paragraph context from the document | | `context_right` | right paragraph context from the document | | `context_lang` | language of the context (and mention) | | `context_title` | document title of the mention (only Wikinews subset) | | `context_date` | document publication date of the mention (only Wikinews subset) | | `label_id` | Wikidata label ID for the event. E.g. 830917 refers to Q830917 from Wikidata. | ### Data Splits The Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup. | | Train | Dev | Test | Total | | ---- | :-----: | :---: | :----: | :-----: | | Events | 8653 | 1090 | 1204 | 10947 | | Event Sequences | 6758 | 844 | 846 | 8448 | | Mentions | 1.44M | 165K | 190K | 1.8M | | Languages | 44 | 44 | 44 | 44 | The Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation. | | (Cross-domain) Test | (Zero-shot) Test | | --- | :------------------: | :-----: | | Events | 802 | 149 | | Mentions | 2562 | 437 | | Languages | 27 | 21 | ## Dataset Creation ### Curation Rationale This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it's unclear if the same methodologies can be extended for linking mentions to events from a KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles. ### Source Data #### Initial Data Collection and Normalization First, we utilize spatial & temporal properties from Wikidata to identify event items.
Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items. #### Who are the source language producers? The documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages. ### Annotations #### Annotation process This dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality. #### Who are the annotators? The annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added by the original Wiki contributors. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations XLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198). ## Additional Information ### Dataset Curators The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd). ### Licensing Information XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ```bib @article{pratapa-etal-2022-multilingual, title = {Multilingual Event Linking to Wikidata}, author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko}, publisher = {arXiv}, year = {2022}, url = {https://arxiv.org/abs/2204.06535}, } ``` ### Contributions Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
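The context fields in an XLEL-WD instance can be recombined into the full mention-in-context string that a linker would consume; a minimal sketch using the card's own example (plain concatenation here is illustrative, not necessarily the paper's preprocessing):

```python
# Example instance from the XLEL-WD card (fields as in train/dev/test.jsonl).
example = {
    "context_left": ("Minibaev's first major international medal came in the "
                     "men's synchronized 10 metre platform event at the "),
    "mention": "2010 European Championships",
    "context_right": ".",
    "context_lang": "en",
    "label_id": "830917",
}

# The mention surrounded by its paragraph context.
sentence = example["context_left"] + example["mention"] + example["context_right"]
print(sentence)
```

The `label_id` ("830917", i.e. Q830917) is what links this mention to the event dictionary entry shown in the companion xlel_wd_dictionary dataset.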
Lumos
null
null
null
false
3
false
Lumos/imdb_test
2022-04-22T03:11:35.000Z
null
false
8938cae73cbe122b5af2f8d483f48e3112f533e6
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/Lumos/imdb_test/resolve/main/README.md
--- license: apache-2.0 ---
AntoineLB
null
null
null
false
3
false
AntoineLB/FrozenLakeFrozen
2022-04-22T07:57:15.000Z
null
false
a156ba94142aa70a7ed31153a815f3990d87ff03
[]
[]
https://huggingface.co/datasets/AntoineLB/FrozenLakeFrozen/resolve/main/README.md
# Dataset Card for [FrozenLake-v1] with slippery = True
pietrolesci
null
null
null
false
2
false
pietrolesci/fracas
2022-04-25T08:40:07.000Z
null
false
b9dee7e7cf675ed6f2b97378b8de74920162b617
[]
[]
https://huggingface.co/datasets/pietrolesci/fracas/resolve/main/README.md
## Overview Original dataset [here](https://github.com/felipessalvatore/NLI_datasets). Below the original description reported for convenience. ```latex @MISC{Fracas96, author = {{The Fracas Consortium} and Robin Cooper and Dick Crouch and Jan Van Eijck and Chris Fox and Josef Van Genabith and Jan Jaspars and Hans Kamp and David Milward and Manfred Pinkal and Massimo Poesio and Steve Pulman and Ted Briscoe and Holger Maier and Karsten Konrad}, title = {Using the Framework}, year = {1996} } ``` Adapted from [https://nlp.stanford.edu/~wcmac/downloads/fracas.xml](https://nlp.stanford.edu/~wcmac/downloads/fracas.xml). We took `P1, ..., Pn` as premise and H as hypothesis. Labels have been mapped as follows `{'yes': "entailment", 'no': 'contradiction', 'undef': "neutral", 'unknown': "neutral"}`. And we randomly split 80/20 for train/dev. ## Dataset curation One hypothesis in the dev set and three hypotheses in the train set are empty and have been filled in with the empty string `""`. Labels are encoded with custom NLI mapping, that is ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` ## Code to create the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset from pathlib import Path # load datasets path = Path("<path to folder>/nli_datasets") datasets = {} for dataset_path in path.iterdir(): datasets[dataset_path.name] = {} for name in dataset_path.iterdir(): df = pd.read_csv(name) datasets[dataset_path.name][name.name.split(".")[0]] = df ds = {} for name, df_ in datasets["fracas"].items(): df = df_.copy() assert df["label"].isna().sum() == 0 # fill-in empty hypothesis df = df.fillna("") # encode labels df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", 
"contradiction"]), }) ds[name] = Dataset.from_pandas(df, features=features) dataset = DatasetDict(ds) dataset.push_to_hub("fracas", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["label", "premise", "hypothesis"], how="inner", ).shape[0], ) #> train - dev: 0 ```
loretoparisi
null
null
null
false
3
false
loretoparisi/tatoeba-sentences
2022-04-27T17:26:31.000Z
null
false
af87ac826a01c8ce7aaed0015c8710cee48007bc
[]
[ "license:cc-by-2-0" ]
https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/resolve/main/README.md
--- license: cc-by-2-0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: tatoeba pretty_name: Tatoeba --- # Dataset Card for Tatoeba ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/Tatoeba.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary Tatoeba is a collection of sentences and translations. To load a language pair which isn't part of the config, all you need to do is specify the language codes as a pair. You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tatoeba.php E.g. 
`dataset = load_dataset("tatoeba", lang1="en", lang2="he")` The default date is v2021-07-22, but you can also change the date with `dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [@loretoparisi](https://github.com/loretoparisi)
pietrolesci
null
null
null
false
3
false
pietrolesci/scitail
2022-04-25T10:40:47.000Z
null
false
36dbc520e45ddad0b14c6526ebbae8ed01bc5d7c
[]
[]
https://huggingface.co/datasets/pietrolesci/scitail/resolve/main/README.md
## Overview Original dataset is available on the HuggingFace Hub [here](https://huggingface.co/datasets/scitail). ## Dataset curation This is the same as the `snli_format` config of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc). The only differences are the following: - selecting only the columns `["sentence1", "sentence2", "gold_label"]` - renaming columns with the following mapping `{"sentence1": "premise", "sentence2": "hypothesis"}` - creating a new column "label" from "gold_label" with the following mapping `{"entailment": "entailment", "neutral": "not_entailment"}` - encoding labels with the following mapping `{"not_entailment": 0, "entailment": 1}` Note that there are 10 overlapping instances (as found by merging on columns "label", "premise", and "hypothesis") between `train` and `test` splits. ## Code to create the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset # load datasets from the Hub dd = load_dataset("scitail", "snli_format") ds = {} for name, df_ in dd.items(): df = df_.to_pandas() # select important columns df = df[["sentence1", "sentence2", "gold_label"]] # rename columns df = df.rename(columns={"sentence1": "premise", "sentence2": "hypothesis"}) # encode labels df["label"] = df["gold_label"].map({"entailment": "entailment", "neutral": "not_entailment"}) df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1}) # cast to dataset features = Features({ "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]), }) ds[name] = Dataset.from_pandas(df, features=features) dataset = DatasetDict(ds) dataset.push_to_hub("scitail", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(dataset.keys(), 2): print( f"{i} - {j}: ", pd.merge( dataset[i].to_pandas(), dataset[j].to_pandas(), 
on=["label", "premise", "hypothesis"], how="inner", ).shape[0], ) #> train - test: 10 #> train - validation: 0 #> test - validation: 0 ```
taln-ls2n
null
@inproceedings{bougouin-etal-2016-termith, title = "{T}erm{ITH}-Eval: a {F}rench Standard-Based Resource for Keyphrase Extraction Evaluation", author = "Bougouin, Adrien and Barreaux, Sabine and Romary, Laurent and Boudin, Florian and Daille, B{\'e}atrice", booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)", month = may, year = "2016", address = "Portoro{\v{z}}, Slovenia", publisher = "European Language Resources Association (ELRA)", url = "https://aclanthology.org/L16-1304", pages = "1924--1927", abstract = "Keyphrase extraction is the task of finding phrases that represent the important content of a document. The main aim of keyphrase extraction is to propose textual units that represent the most important topics developed in a document. The output keyphrases of automatic keyphrase extraction methods for test documents are typically evaluated by comparing them to manually assigned reference keyphrases. Each output keyphrase is considered correct if it matches one of the reference keyphrases. However, the choice of the appropriate textual unit (keyphrase) for a topic is sometimes subjective and evaluating by exact matching underestimates the performance. This paper presents a dataset of evaluation scores assigned to automatically extracted keyphrases by human evaluators. Along with the reference keyphrases, the manual evaluations can be used to validate new evaluation measures. Indeed, an evaluation measure that is highly correlated to the manual evaluation is appropriate for the evaluation of automatic keyphrase extraction methods.", }
TermITH-Eval benchmark dataset for keyphrase extraction and generation.
false
3
false
taln-ls2n/termith-eval
2022-09-23T07:49:04.000Z
null
false
2dceb8142327bf9eac3ff8927e2f39533a4afc8e
[]
[ "annotations_creators:unknown", "language_creators:unknown", "language:fr", "license:cc-by-4.0", "multilinguality:multilingual", "task_categories:text-generation", "task_ids:keyphrase-generation", "task_ids:keyphrase-extraction", "size_categories:n<1K" ]
https://huggingface.co/datasets/taln-ls2n/termith-eval/resolve/main/README.md
--- annotations_creators: - unknown language_creators: - unknown language: - fr license: cc-by-4.0 multilinguality: - multilingual task_categories: - text-mining - text-generation task_ids: - keyphrase-generation - keyphrase-extraction size_categories: - n<1K pretty_name: TermITH-Eval --- # TermITH-Eval Benchmark Dataset for Keyphrase Generation ## About TermITH-Eval is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 400 abstracts of scientific papers in French collected from the FRANCIS and PASCAL databases of the French [Institute for Scientific and Technical Information (Inist)](https://www.inist.fr/). Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries). Details about the dataset can be found in the original paper [(Bougouin et al., 2016)][bougouin-2016]. Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract. Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`. 
## Content and statistics The dataset contains the following test split: | Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen | | :--------- |------------:|-----------:|-------------:|----------:|------------:|--------:|---------:| | Test | 399 | 156.9 | 11.81 | 40.60 | 7.32 | 19.28 | 32.80 | The following data fields are available: - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. - **category**: category of the document, i.e. chimie (chemistry), archeologie (archeology), linguistique (linguistics) and scienceInfo (information sciences). ## References - (Bougouin et al., 2016) Adrien Bougouin, Sabine Barreaux, Laurent Romary, Florian Boudin, and Béatrice Daille. 2016. [TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation][bougouin-2016]. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1924–1927, Portorož, Slovenia. European Language Resources Association (ELRA). - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021]. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [bougouin-2016]: https://aclanthology.org/L16-1304/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
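The PRMU percentages in the statistics table are per-keyphrase proportions over the whole split. A minimal sketch of how such a distribution can be computed from the `prmu` field (the two records below are toy data invented for illustration, not taken from the dataset):

```python
from collections import Counter

# Sketch: compute a PRMU distribution (in %) from per-keyphrase categories,
# as reported in the statistics table. Toy records for illustration only.
records = [
    {"keyphrases": ["analyse lexicale", "corpus"], "prmu": ["P", "U"]},
    {"keyphrases": ["indexation"], "prmu": ["M"]},
]

def prmu_distribution(records):
    counts = Counter(cat for rec in records for cat in rec["prmu"])
    total = sum(counts.values())
    return {cat: 100 * counts[cat] / total for cat in "PRMU"}

print(prmu_distribution(records))
```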
cfilt
null
XX
This is the repository for HiNER - a large Hindi Named Entity Recognition dataset.
false
8
false
cfilt/HiNER-collapsed
2022-07-30T12:27:02.000Z
hiner-collapsed-1
false
e9b4d6d480d6f0aa846774ac6437423003577692
[]
[ "arxiv:2204.13743", "annotations_creators:expert-generated", "language_creators:expert-generated", "language:hi", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:token-classification", "task_ids:named-entity-recogniti...
https://huggingface.co/datasets/cfilt/HiNER-collapsed/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - hi license: "cc-by-sa-4.0" multilinguality: - monolingual paperswithcode_id: hiner-collapsed-1 pretty_name: HiNER - Large Hindi Named Entity Recognition dataset size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition --- <p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p> # Dataset Card for HiNER-original [![Twitter Follow](https://img.shields.io/twitter/follow/cfiltnlp?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/cfiltnlp) [![Twitter Follow](https://img.shields.io/twitter/follow/PeopleCentredAI?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/PeopleCentredAI) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/cfiltnlp/HiNER - **Repository:** 
https://github.com/cfiltnlp/HiNER - **Paper:** https://arxiv.org/abs/2204.13743 - **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-collapsed - **Point of Contact:** Rudra Murthy V ### Dataset Summary This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy. **Note:** The dataset contains sentences from ILCI and other sources. The ILCI dataset requires a license from the Indian Language Consortium, due to which we do not distribute the ILCI portion of the data. Please send us an email with proof of ILCI data acquisition to obtain the full dataset. ### Supported Tasks and Leaderboards Named Entity Recognition ### Languages Hindi ## Dataset Structure ### Data Instances {'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]} ### Data Fields - `id`: The ID value of the data point. - `tokens`: Raw tokens in the dataset. - `ner_tags`: the NER tags for this dataset. ### Data Splits | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | original | 76025 | 10861 | 21722| | collapsed | 76025 | 10861 | 21722| ## About This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743). ### Recent Updates * Version 0.0.5: HiNER initial release ## Usage You should have the 'datasets' package installed to be able to use the :rocket: HuggingFace datasets repository. 
Please use the following command and install via pip: ```bash pip install datasets ``` To use the original dataset with all the tags, please use:<br/> ```python from datasets import load_dataset hiner = load_dataset('cfilt/HiNER-original') ``` To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/> ```python from datasets import load_dataset hiner = load_dataset('cfilt/HiNER-collapsed') ``` However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder. ## Model(s) Our best-performing models are hosted on the HuggingFace models repository: 1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large) 2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large) ## Dataset Creation ### Curation Rationale HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was created to provide new resources for the Hindi language, which has been under-served in Natural Language Processing. ### Source Data #### Initial Data Collection and Normalization HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. #### Who are the source language producers? Various Government of India webpages ### Annotations #### Annotation process This dataset was manually annotated by a single annotator over a long span of time. #### Who are the annotators? Pallab Bhattacharjee ### Personal and Sensitive Information We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. 
Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data. ### Discussion of Biases Any biases contained in the data released by the Indian government are bound to be present in our data. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Pallab Bhattacharjee ### Licensing Information CC-BY-SA 4.0 ### Citation Information ```latex @misc{https://doi.org/10.48550/arxiv.2204.13743, doi = {10.48550/ARXIV.2204.13743}, url = {https://arxiv.org/abs/2204.13743}, author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {HiNER: A Large Hindi Named Entity Recognition Dataset}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
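The `ner_tags` field stores integer ids aligned one-to-one with `tokens`, as in the instance shown under Data Instances. A small sketch pairing tokens with their tags (the example dict copies that instance; tag 0 appears to be the outside tag, and the string names of the remaining ids can be recovered at load time from `dataset.features["ner_tags"].feature.names` rather than hard-coded):

```python
# Sketch: align tokens with their integer NER tags for the instance shown
# in the Data Instances section of this card.
example = {
    "id": "0",
    "tokens": ["प्राचीन", "समय", "में", "उड़ीसा", "को", "कलिंग", "के",
               "नाम", "से", "जाना", "जाता", "था", "।"],
    "ner_tags": [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0],
}

def tagged_tokens(ex):
    """Return (token, tag_id) pairs for tokens carrying a non-zero tag."""
    return [(tok, tag) for tok, tag in zip(ex["tokens"], ex["ner_tags"]) if tag != 0]

print(tagged_tokens(example))
```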
surrey-nlp
null
This is the dataset repository for SDU Dataset from SDU workshop at AAAI22. The dataset can help build sequence labelling models for the task Abbreviation Detection.
false
3
false
surrey-nlp/SDU-test
2022-04-24T07:11:10.000Z
null
false
1c2747b56b9f6f1f22dbd7ca543447f6a900fc1a
[]
[ "license:cc-by-sa-4.0" ]
https://huggingface.co/datasets/surrey-nlp/SDU-test/resolve/main/README.md
--- license: cc-by-sa-4.0 ---
taln-ls2n
null
@inproceedings{boudin-etal-2016-document, title = "How Document Pre-processing affects Keyphrase Extraction Performance", author = "Boudin, Florian and Mougard, Hugo and Cram, Damien", booktitle = "Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT})", month = dec, year = "2016", address = "Osaka, Japan", publisher = "The COLING 2016 Organizing Committee", url = "https://aclanthology.org/W16-3917", pages = "121--128", abstract = "The SemEval-2010 benchmark dataset has brought renewed attention to the task of automatic keyphrase extraction. This dataset is made up of scientific articles that were automatically converted from PDF format to plain text and thus require careful preprocessing so that irrevelant spans of text do not negatively affect keyphrase extraction performance. In previous work, a wide range of document preprocessing techniques were described but their impact on the overall performance of keyphrase extraction models is still unexplored. Here, we re-assess the performance of several keyphrase extraction models and measure their robustness against increasingly sophisticated levels of document preprocessing.", }
Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation.
false
22
false
taln-ls2n/semeval-2010-pre
2022-09-23T07:37:43.000Z
null
false
c98da16de9bf6c8c09143b61be6079f85bfd1373
[]
[ "annotations_creators:unknown", "language_creators:unknown", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "task_categories:text-generation", "task_ids:keyphrase-generation", "task_ids:keyphrase-extraction", "size_categories:n<1K" ]
https://huggingface.co/datasets/taln-ls2n/semeval-2010-pre/resolve/main/README.md
--- annotations_creators: - unknown language_creators: - unknown language: - en license: cc-by-4.0 multilinguality: - monolingual task_categories: - text-mining - text-generation task_ids: - keyphrase-generation - keyphrase-extraction size_categories: - n<1K pretty_name: Preprocessed SemEval-2010 Benchmark dataset --- # Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation ## About SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 244 **full-text** scientific papers collected from the [ACM Digital Library](https://dl.acm.org/). Keyphrases were annotated by readers and combined with those provided by the authors. Details about the SemEval-2010 dataset can be found in the original paper [(Kim et al., 2010)][kim-2010]. This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016] and provides four increasingly sophisticated levels of document preprocessing: * `lvl-1`: default text files provided by the SemEval-2010 organizers. * `lvl-2`: for each file, we manually retrieved the original PDF file from the ACM Digital Library. We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505. We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements: title, headers, abstract, introduction, related work, body text and conclusion. We finally apply a systematic dehyphenation at line breaks. * `lvl-3`: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion. * `lvl-4`: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique. We keep the title and abstract and select the most content-bearing sentences from the remaining contents. 
Titles and abstracts, collected from the [SciCorefCorpus](https://github.com/melsk125/SciCorefCorpus), are also provided. Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)][chaimongkol-2014]. Reference keyphrases are provided in stemmed form (because they were provided in this form for the test split in the competition). They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`. The <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and text (lvl-1). ## Content and statistics The dataset is divided into the following two splits: | Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen | | :--------- |------------:|-------:|-------------:|----------:|------------:|--------:|---------:| | Train | 144 | 184.6 | 15.44 | 42.16 | 7.36 | 26.85 | 23.63 | | Test | 100 | 203.1 | 14.66 | 40.11 | 8.34 | 27.12 | 24.43 | Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers. The following data fields are available: - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **lvl-1**: content of the document with no text processing. - **lvl-2**: content of the document retrieved from original PDF files and cleaned up. - **lvl-3**: content of the document further abridged to relevant sections. 
- **lvl-4**: content of the document further abridged using an unsupervised summarization technique. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. ## References - (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. [SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010]. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics. - (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014. [Corpus for Coreference Resolution on Scientific Papers][chaimongkol-2014]. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA). - (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016. [How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016]. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee. - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021]. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [kim-2010]: https://aclanthology.org/S10-1004/ [chaimongkol-2014]: https://aclanthology.org/L14-1259/ [boudin-2016]: https://aclanthology.org/W16-3917/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
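A deliberately simplified sketch of the Present-vs-Unseen distinction underlying the PRMU statistics. The real pipeline in `prmu.py` stems both sides with Porter's stemmer (via `nltk`) before matching; here, lowercased whole-phrase matching stands in for stemming, so this is only an approximation:

```python
# Simplified sketch of the Present/Unseen check described in this card.
# The actual pipeline stems keyphrases and text before matching; plain
# lowercased substring matching stands in for that here.
def is_present(keyphrase: str, text: str) -> bool:
    return keyphrase.lower() in text.lower()

text = "We study keyphrase extraction from scientific articles."
print(is_present("Keyphrase Extraction", text))      # True
print(is_present("document summarization", text))    # False
```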
pietrolesci
null
null
null
false
92
false
pietrolesci/mpe
2022-04-25T09:00:18.000Z
null
false
5bd658aa3bfea14d2c051f1c7dd34b456bbda4a0
[]
[]
https://huggingface.co/datasets/pietrolesci/mpe/resolve/main/README.md
## Overview Original dataset [here](https://github.com/aylai/MultiPremiseEntailment). ## Dataset curation Same data and splits as the original. The following columns have been added: - `premise`: concatenation of `premise1`, `premise2`, `premise3`, and `premise4` - `label`: encoded `gold_label` with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}` ## Code to create the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict from pathlib import Path # read data path = Path("<path to files>") datasets = {} for dataset_path in path.rglob("*.txt"): df = pd.read_csv(dataset_path, sep="\t") datasets[dataset_path.name.split("_")[1].split(".")[0]] = df ds = {} for name, df_ in datasets.items(): df = df_.copy() # fix parsing error for dev split if name == "dev": # fix parsing error df.loc[df["contradiction_judgments"] == "3 contradiction", "contradiction_judgments"] = 3 df.loc[df["gold_label"].isna(), "gold_label"] = "contradiction" # check no nan assert df.isna().sum().sum() == 0 # fix dtypes for col in ("entailment_judgments", "neutral_judgments", "contradiction_judgments"): df[col] = df[col].astype(int) # fix premise column for i in range(1, 4 + 1): df[f"premise{i}"] = df[f"premise{i}"].str.split("/", expand=True)[1] df["premise"] = df[[f"premise{i}" for i in range(1, 4 + 1)]].agg(" ".join, axis=1) # encode labels df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "premise1": Value(dtype="string", id=None), "premise2": Value(dtype="string", id=None), "premise3": Value(dtype="string", id=None), "premise4": Value(dtype="string", id=None), "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "entailment_judgments": Value(dtype="int32"), "neutral_judgments": Value(dtype="int32"), "contradiction_judgments": Value(dtype="int32"), "gold_label": Value(dtype="string"), "label": 
ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), }) ds[name] = Dataset.from_pandas(df, features=features) # push to hub ds = DatasetDict(ds) ds.push_to_hub("mpe", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> dev - test: 0 #> dev - train: 0 #> test - train: 0 ```
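As a quick illustration of the two derived columns described above, here is a toy sketch (invented example row, not actual MPE data):

```python
# Sketch of the two derived columns described above (toy example, not real
# MPE data): `premise` concatenates premise1..premise4, and `label` encodes
# `gold_label` with {"entailment": 0, "neutral": 1, "contradiction": 2}.
label_map = {"entailment": 0, "neutral": 1, "contradiction": 2}

row = {
    "premise1": "A man sits on a bench.",
    "premise2": "He is reading a paper.",
    "premise3": "A dog lies at his feet.",
    "premise4": "The sun is shining.",
    "gold_label": "neutral",
}

# Concatenate the four premises and encode the gold label
row["premise"] = " ".join(row[f"premise{i}"] for i in range(1, 4 + 1))
row["label"] = label_map[row["gold_label"]]

print(row["premise"])
print(row["label"])  # 1
```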
pietrolesci
null
null
null
false
2
false
pietrolesci/add_one_rte
2022-04-25T08:48:42.000Z
null
false
a5bdde974239556a20e6fc1624c2e32ee20b0c6a
[]
[]
https://huggingface.co/datasets/pietrolesci/add_one_rte/resolve/main/README.md
## Overview Original data available [here](http://www.seas.upenn.edu/~nlp/resources/AN-composition.tgz). ## Dataset curation `premise` and `hypothesis` columns have been cleaned following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L51-L52), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L31-L32)), that is - remove HTML tags `<b>`, `<u>`, `</b>`, `</u>` - normalize repeated white spaces - strip `mean_human_score` has been transformed into class labels following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L20-L35), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L6-L17)), that is - for test set: `mean_human_score <= 3 -> "not-entailed"` and `mean_human_score >= 4 -> "entailed"` (anything between 3 and 4 has been removed) - for all other splits: `mean_human_score < 3.5 -> "not-entailed"` else `"entailed"` more details below. 
## Code to generate the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict def convert_label(score, is_test): if is_test: if score <= 3: return "not-entailed" elif score >= 4: return "entailed" return "REMOVE" if score < 3.5: return "not-entailed" return "entailed" ds = {} for split in ("dev", "test", "train"): # read data df = pd.read_csv(f"<path to folder>/AN-composition/addone-entailment/splits/data.{split}", sep="\t", header=None) df.columns = ["mean_human_score", "binary_label", "sentence_id", "adjective", "noun", "premise", "hypothesis"] # clean text from html tags and useless spaces for col in ("premise", "hypothesis"): df[col] = ( df[col] .str.replace("(<b>)|(<u>)|(</b>)|(</u>)", " ", regex=True) .str.replace(" {2,}", " ", regex=True) .str.strip() ) # encode labels if split == "test": df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, True)) df = df.loc[df["label"] != "REMOVE"] else: df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, False)) assert df["label"].isna().sum() == 0 df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1}) # cast to dataset features = Features({ "mean_human_score": Value(dtype="float32"), "binary_label": Value(dtype="string"), "sentence_id": Value(dtype="string"), "adjective": Value(dtype="string"), "noun": Value(dtype="string"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]), }) ds[split] = Dataset.from_pandas(df, features=features) ds = DatasetDict(ds) ds.push_to_hub("add_one_rte", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> dev - test: 0 #> dev - train: 0 #> test - train: 0 ```
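As a quick sanity check, the threshold rules above can be exercised on their boundary values (a toy sketch restating the conversion logic from the script, not part of the original repo):

```python
# Restates the label-conversion thresholds described above to make the
# boundary behaviour explicit (toy sketch, not from the original scripts).
def convert_label(score, is_test):
    if is_test:
        if score <= 3:
            return "not-entailed"
        elif score >= 4:
            return "entailed"
        return "REMOVE"  # test scores in (3, 4) are dropped
    return "not-entailed" if score < 3.5 else "entailed"

# Boundary behaviour:
print(convert_label(3.5, is_test=True))   # REMOVE
print(convert_label(3.5, is_test=False))  # entailed
print(convert_label(3.0, is_test=True))   # not-entailed
print(convert_label(4.0, is_test=True))   # entailed
```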
AndresPitta
null
null
null
false
2
false
AndresPitta/sg-reports_labeled
2022-10-25T10:08:57.000Z
null
false
c021bbdca0b644116166a56119e2adf49e575647
[]
[ "annotations_creators:expert-generated", "language_creators:machine-generated", "language:en-US", "license:unknown", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-class-classification" ]
https://huggingface.co/datasets/AndresPitta/sg-reports_labeled/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - machine-generated language: - en-US license: - unknown multilinguality: - monolingual pretty_name: Gender language in the reports of the secretary general 2020-2021 size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact: Andrés Pitta: andres.pitta@un.org** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### 
Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
pietrolesci
null
null
null
false
3
false
pietrolesci/recast_white
2022-04-22T15:34:14.000Z
null
false
49f76692fb17d5f51bfff93c80276ba700010005
[]
[]
https://huggingface.co/datasets/pietrolesci/recast_white/resolve/main/README.md
## Overview This dataset has been introduced by "Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available [here](https://github.com/decompositional-semantics-initiative/DNC/raw/master/inference_is_everything.zip). ## Dataset curation The following processing is applied - `hypothesis_grammatical` and `judgement_valid` columns are filled with `""` when empty - all columns are stripped - the `entailed` column is renamed `label` - `label` column is encoded with the following mapping `{"not-entailed": 0, "entailed": 1}` - columns `rating` and `good_word` are dropped from `fnplus` dataset ## Code to generate the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict ds = {} for name in ("fnplus", "sprl", "dpr"): # read data with open(f"<path to files>/{name}_data.txt", "r") as f: data = f.read() data = data.split("\n\n") data = [lines.split("\n") for lines in data] data = [dict([col.split(":", maxsplit=1) for col in line if len(col) > 0]) for line in data] df = pd.DataFrame(data) # fill empty hypothesis_grammatical and judgement_valid df["hypothesis_grammatical"] = df["hypothesis_grammatical"].fillna("") df["judgement_valid"] = df["judgement_valid"].fillna("") # fix dtype df["index"] = df["index"].astype(int) # strip for col in df.select_dtypes(object).columns: df[col] = df[col].str.strip() # rename columns df = df.rename(columns={"entailed": "label"}) # encode labels df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1}) # cast to dataset features = Features({ "provenance": Value(dtype="string", id=None), "index": Value(dtype="int64", id=None), "text": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "partof": Value(dtype="string", id=None), "hypothesis_grammatical": Value(dtype="string", id=None), "judgement_valid": 
Value(dtype="string", id=None), "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]), }) # select common columns df = df.loc[:, list(features.keys())] ds[name] = Dataset.from_pandas(df, features=features) ds = DatasetDict(ds) ds.push_to_hub("recast_white", token="<token>") ```
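The script above assumes a specific raw-file layout that the README never shows explicitly: records separated by blank lines, with one `key:value` field per line. A toy sketch of that format and the parsing step (invented example records, not real data):

```python
# Toy sketch (invented records, not the original data) of the raw layout
# parsed above: blank-line-separated records, one "key:value" field per line.
raw = """provenance: dpr
index: 1
text: The bird ate the pie and it died.
hypothesis: The pie died.
entailed: not-entailed

provenance: dpr
index: 2
text: The dog chased the cat because it barked.
hypothesis: The dog barked.
entailed: entailed"""

records = []
for block in raw.split("\n\n"):
    # maxsplit=1 keeps any ':' inside the value intact
    fields = dict(line.split(":", maxsplit=1) for line in block.split("\n") if line)
    # strip keys/values, mirroring the whitespace cleanup in the script above
    records.append({k.strip(): v.strip() for k, v in fields.items()})

print(len(records))             # 2
print(records[0]["hypothesis"])  # The pie died.
```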
Pavithree
null
null
null
false
2
false
Pavithree/askHistorians
2022-04-22T16:22:10.000Z
null
false
9603afe1e507fdc70f80ab3c532872fb217c7cc5
[]
[]
https://huggingface.co/datasets/Pavithree/askHistorians/resolve/main/README.md
This dataset is a subset of the original ELI5 dataset from Hugging Face.
Pavithree
null
null
null
false
1
false
Pavithree/askScience
2022-04-22T16:45:27.000Z
null
false
9372640c3a19eeae1396f9137339a8081fe38caa
[]
[]
https://huggingface.co/datasets/Pavithree/askScience/resolve/main/README.md
This dataset is derived from the ELI5 dataset available on Hugging Face.
deancgarcia
null
null
null
false
85
false
deancgarcia/Diversity
2022-06-29T21:12:55.000Z
null
false
7c6fac1d3cf37929876def373f38bbbbfb337a16
[]
[]
https://huggingface.co/datasets/deancgarcia/Diversity/resolve/main/README.md
[Needs More Information] # Dataset Card for dei_article_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Diversity, Equity and Inclusion (DEI) related article title, content, URL, sentiment and basis. Basis is a term I use to describe the underlying topic related to diversity; I have four at the moment: 1 = Gender, 2 = Race, 3 = Disability and 4 = Other. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields ID Title Content Basis URL Sentiment ### Data Splits train validate ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? 
[Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
ntcuong777
null
null
null
false
2
false
ntcuong777/iuontology
2022-04-23T14:49:22.000Z
null
false
765f4ff12812f047f92bd417ed64e5578436ebfe
[]
[]
https://huggingface.co/datasets/ntcuong777/iuontology/resolve/main/README.md
# Dataset Card for [IU Ontology Trahsed] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@ntcuong777](https://github.com/ntcuong777) for adding this dataset.
johnowhitaker
null
null
null
false
5
false
johnowhitaker/vqgan1024_reconstruction
2022-04-23T12:50:13.000Z
null
false
ef89c8242e095980a51c2264b0439ef0920ff2b1
[]
[]
https://huggingface.co/datasets/johnowhitaker/vqgan1024_reconstruction/resolve/main/README.md
VQGAN is great, but leaves artifacts that are especially visible around things like faces. It'd be great to be able to train a model to fix ('devqganify') these flaws. For this purpose, I've made this dataset, which contains 100k examples, each with - A 512px image - A smaller 256px version of the same image - A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 1024 version from https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92, one of the ones from taming-transformers) and then decoding the result. The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points. Let me know what you come up with :) Usage: ```python from datasets import load_dataset dataset = load_dataset('johnowhitaker/vqgan1024_reconstruction') dataset['train'][0]['image_256'] # Original image dataset['train'][0]['reconstruction_256'] # Reconstructed version ``` Approximate code used to prepare this data: https://colab.research.google.com/drive/1AXzlRMvAIE6krkpFwFnFr2c5SnOsygf-?usp=sharing (let me know if you hit issues) I'll be making a similar dataset with other VQGAN variants and posting progress on devqganify models soon, feel free to get in touch for more info (@johnowhitaker)
Neku
null
null
null
false
3
false
Neku/meme
2022-04-23T06:37:43.000Z
null
false
62371637e4c902138b1a813028c29d509b875084
[]
[ "license:artistic-2.0" ]
https://huggingface.co/datasets/Neku/meme/resolve/main/README.md
--- license: artistic-2.0 ---
Pavithree
null
null
null
false
2
false
Pavithree/eli5_split
2022-04-23T08:33:53.000Z
null
false
45fcb031e0510483c13d10b6557aae26fc85df52
[]
[]
https://huggingface.co/datasets/Pavithree/eli5_split/resolve/main/README.md
This dataset is a subset of the original ELI5 dataset available on the Hugging Face Hub.
dnaveenr
null
null
null
false
3
false
dnaveenr/cmu_mocap
2022-04-24T11:33:25.000Z
null
false
9bcb69f0dcc08b2097900c96c7f1332276aede6e
[]
[ "license:other" ]
https://huggingface.co/datasets/dnaveenr/cmu_mocap/resolve/main/README.md
--- license: other ---
AliceTears
null
null
null
false
3
false
AliceTears/thanadol_sin
2022-04-23T10:37:28.000Z
null
false
d616736b70abaddf043ab517649e367b0d2bb20c
[]
[]
https://huggingface.co/datasets/AliceTears/thanadol_sin/resolve/main/README.md
johnowhitaker
null
null
null
false
1
false
johnowhitaker/vqgan1024_encs_sf
2022-04-23T16:22:37.000Z
null
false
a4060f6c30fac71147c6f424fd6adb3b0b753f59
[]
[]
https://huggingface.co/datasets/johnowhitaker/vqgan1024_encs_sf/resolve/main/README.md
Images from CC12M encoded with VQGAN (f16, 1024 codebook). A script to continue the prep is included in the repo if you want more than the ~1.5M images I encoded here. VQGAN model: ``` !curl -L 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fckpts%2Flast.ckpt&dl=1' > vqgan_im1024.ckpt !curl -L 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' > vqgan_im1024.yaml ``` Try it out: TODO
qanastek
null
@misc{fitzgerald2022massive, title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages}, author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan}, year={2022}, eprint={2204.08582}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{bastianelli-etal-2020-slurp, title = "{SLURP}: A Spoken Language Understanding Resource Package", author = "Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.588", doi = "10.18653/v1/2020.emnlp-main.588", pages = "7252--7262", abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp." }
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
false
291
false
qanastek/MASSIVE
2022-10-25T05:03:05.000Z
null
false
70bf52d65f55a34b4901c3d93e29566af2fb2dcf
[]
[ "arxiv:2204.08582", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "language:af", "language:am", "language:ar", "language:az", "language:bn", "language:cy", "language:da", "language:de", "language:el", "language:en", "languag...
https://huggingface.co/datasets/qanastek/MASSIVE/resolve/main/README.md
--- annotations_creators: - machine-generated - expert-generated language_creators: - found language: - af - am - ar - az - bn - cy - da - de - el - en - es - fa - fi - fr - he - hi - hu - hy - id - is - it - ja - jv - ka - km - kn - ko - lv - ml - mn - ms - my - nb - nl - pl - pt - ro - ru - sl - sq - sv - sw - ta - te - th - tl - tr - ur - vi - zh - zh multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - intent-classification - multi-class-classification - natural-language-understanding pretty_name: MASSIVE language_bcp47: - af-ZA - am-ET - ar-SA - az-AZ - bn-BD - cy-GB - da-DK - de-DE - el-GR - en-US - es-ES - fa-IR - fi-FI - fr-FR - he-IL - hi-IN - hu-HU - hy-AM - id-ID - is-IS - it-IT - ja-JP - jv-ID - ka-GE - km-KH - kn-IN - ko-KR - lv-LV - ml-IN - mn-MN - ms-MY - my-MM - nb-NO - nl-NL - pl-PL - pt-PT - ro-RO - ru-RU - sl-SL - sq-AL - sv-SE - sw-KE - ta-IN - te-IN - th-TH - tl-PH - tr-TR - ur-PK - vi-VN - zh-CN - zh-TW --- # MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages ## Table of Contents - [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations 
for Using the Data](#considerations-for-using-the-data) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [No Warranty](#no-warranty) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/alexa/massive - **Repository:** https://github.com/alexa/massive - **Paper:** https://arxiv.org/abs/2204.08582 - **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview - **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues) ### Dataset Summary MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions. 
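The localized utterances come with bracketed slot annotations of the form `[label : entity]` (see the Data Fields section); a minimal sketch of pulling slot/value pairs out of an `annot_utt` string (not part of the official MASSIVE tooling):

```python
import re

# Minimal sketch (not the official tooling): extract slot/value pairs from a
# MASSIVE `annot_utt` string, which marks slots as `[label : entity]`.
SLOT_RE = re.compile(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]")

def extract_slots(annot_utt: str):
    return SLOT_RE.findall(annot_utt)

annot = "réveille-moi à [time : neuf heures du matin] le [date : vendredi]"
print(extract_slots(annot))  # [('time', 'neuf heures du matin'), ('date', 'vendredi')]
```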
| Name | Lang | Utt/Lang | Domains | Intents | Slots | |:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:| | MASSIVE | 51 | 19,521 | 18 | 60 | 55 | | SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 | | NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 | | Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 | | ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 | | MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 | | Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 | | Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 | | Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 | | Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 | | Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 | | Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 | | Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - | | Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 | ### Supported Tasks and Leaderboards The dataset can be used to train a model for `natural-language-understanding` (NLU): - `intent-classification` - `multi-class-classification` - `natural-language-understanding` ### Languages The corpus consists of parallel sentences from 51 languages: - `Afrikaans - South Africa (af-ZA)` - `Amharic - Ethiopia (am-ET)` - `Arabic - Saudi Arabia (ar-SA)` - `Azeri - Azerbaijan (az-AZ)` - `Bengali - Bangladesh (bn-BD)` - `Chinese - China (zh-CN)` - `Chinese - Taiwan (zh-TW)` - `Danish - Denmark (da-DK)` - `German - Germany (de-DE)` - `Greek - Greece (el-GR)` - `English - United States (en-US)` - `Spanish - Spain (es-ES)` - `Farsi 
- Iran (fa-IR)` - `Finnish - Finland (fi-FI)` - `French - France (fr-FR)` - `Hebrew - Israel (he-IL)` - `Hungarian - Hungary (hu-HU)` - `Armenian - Armenia (hy-AM)` - `Indonesian - Indonesia (id-ID)` - `Icelandic - Iceland (is-IS)` - `Italian - Italy (it-IT)` - `Japanese - Japan (ja-JP)` - `Javanese - Indonesia (jv-ID)` - `Georgian - Georgia (ka-GE)` - `Khmer - Cambodia (km-KH)` - `Korean - Korea (ko-KR)` - `Latvian - Latvia (lv-LV)` - `Mongolian - Mongolia (mn-MN)` - `Malay - Malaysia (ms-MY)` - `Burmese - Myanmar (my-MM)` - `Norwegian - Norway (nb-NO)` - `Dutch - Netherlands (nl-NL)` - `Polish - Poland (pl-PL)` - `Portuguese - Portugal (pt-PT)` - `Romanian - Romania (ro-RO)` - `Russian - Russia (ru-RU)` - `Slovenian - Slovenia (sl-SL)` - `Albanian - Albania (sq-AL)` - `Swedish - Sweden (sv-SE)` - `Swahili - Kenya (sw-KE)` - `Hindi - India (hi-IN)` - `Kannada - India (kn-IN)` - `Malayalam - India (ml-IN)` - `Tamil - India (ta-IN)` - `Telugu - India (te-IN)` - `Thai - Thailand (th-TH)` - `Tagalog - Philippines (tl-PH)` - `Turkish - Turkey (tr-TR)` - `Urdu - Pakistan (ur-PK)` - `Vietnamese - Vietnam (vi-VN)` - `Welsh - United Kingdom (cy-GB)` ## Load the dataset with HuggingFace ```python from datasets import load_dataset dataset = load_dataset("qanastek/MASSIVE", "en-US", split='train') print(dataset) print(dataset[0]) ``` ## Dataset Structure ### Data Instances ```json { "id": "1", "locale": "fr-FR", "partition": "train", "scenario": 16, "intent": 48, "utt": "réveille-moi à neuf heures du matin le vendredi", "annot_utt": "réveille-moi à [time : neuf heures du matin] le [date : vendredi]", "tokens": [ "réveille-moi", "à", "neuf", "heures", "du", "matin", "le", "vendredi" ], "ner_tags": [0, 0, 71, 6, 6, 6, 0, 14], "worker_id": "22", "slot_method": { "slot": ["time", "date"], "method": ["translation", "translation"] }, "judgments": { "worker_id": ["11", "22", "0"], "intent_score": [2, 1, 1], "slots_score": [1, 1, 1], "grammar_score": [3, 4, 4], "spelling_score": [2, 
2, 2], "language_identification": ["target", "target", "target"] } } ``` ### Data Fields (taken from Alexa GitHub) `id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization. `locale`: is the language and country code according to ISO-639-1 and ISO-3166. `partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp). `scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance `intent`: is the specific intent of an utterance within a domain formatted as `{scenario}_{intent}` `utt`: the raw utterance text without annotations `annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]` `worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales. `slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification). `judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker. ```plain intent_score : "Does the sentence match the intent?" 
0: No 1: Yes 2: It is a reasonable interpretation of the goal slots_score : "Do all these terms match the categories in square brackets?" 0: No 1: Yes 2: There are no words in square brackets (utterance without a slot) grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?" 0: Completely unnatural (nonsensical, cannot be understood at all) 1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language) 2: Some errors (the meaning can be understood but it doesn't sound natural in your language) 3: Good enough (easily understood and sounds almost natural in your language) 4: Perfect (sounds natural in your language) spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error." 0: There are more than 2 spelling errors 1: There are 1-2 spelling errors 2: All words are spelled correctly language_identification : "The following sentence contains words in the following languages (check all that apply)" 1: target 2: english 3: other 4: target & english 5: target & other 6: english & other 7: target & english & other ``` ### Data Splits |Language|Train|Dev|Test| |:---:|:---:|:---:|:---:| |af-ZA|11514|2033|2974| |am-ET|11514|2033|2974| |ar-SA|11514|2033|2974| |az-AZ|11514|2033|2974| |bn-BD|11514|2033|2974| |cy-GB|11514|2033|2974| |da-DK|11514|2033|2974| |de-DE|11514|2033|2974| |el-GR|11514|2033|2974| |en-US|11514|2033|2974| |es-ES|11514|2033|2974| |fa-IR|11514|2033|2974| |fi-FI|11514|2033|2974| |fr-FR|11514|2033|2974| |he-IL|11514|2033|2974| |hi-IN|11514|2033|2974| |hu-HU|11514|2033|2974| |hy-AM|11514|2033|2974| |id-ID|11514|2033|2974| |is-IS|11514|2033|2974| |it-IT|11514|2033|2974| |ja-JP|11514|2033|2974| |jv-ID|11514|2033|2974| |ka-GE|11514|2033|2974| |km-KH|11514|2033|2974| |kn-IN|11514|2033|2974| |ko-KR|11514|2033|2974| |lv-LV|11514|2033|2974| 
|ml-IN|11514|2033|2974| |mn-MN|11514|2033|2974| |ms-MY|11514|2033|2974| |my-MM|11514|2033|2974| |nb-NO|11514|2033|2974| |nl-NL|11514|2033|2974| |pl-PL|11514|2033|2974| |pt-PT|11514|2033|2974| |ro-RO|11514|2033|2974| |ru-RU|11514|2033|2974| |sl-SL|11514|2033|2974| |sq-AL|11514|2033|2974| |sv-SE|11514|2033|2974| |sw-KE|11514|2033|2974| |ta-IN|11514|2033|2974| |te-IN|11514|2033|2974| |th-TH|11514|2033|2974| |tl-PH|11514|2033|2974| |tr-TR|11514|2033|2974| |ur-PK|11514|2033|2974| |vi-VN|11514|2033|2974| |zh-CN|11514|2033|2974| |zh-TW|11514|2033|2974| ## Dataset Creation ### Source Data #### Who are the source language producers? The corpus has been produced and uploaded by Amazon Alexa. ### Personal and Sensitive Information The corpora is free of personal or sensitive information. ## Additional Information ### Dataset Curators __MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan. __SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena. __Hugging Face__: Labrak Yanis (Not affiliated with the original corpus) ### Licensing Information ```plain Copyright Amazon.com Inc. or its affiliates. Attribution 4.0 International ======================================================================= Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. 
Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible. Using Creative Commons Public Licenses Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses. Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC- licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. 
A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees ======================================================================= Creative Commons Attribution 4.0 International Public License By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. Section 1 -- Definitions. a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. c. 
Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. h. Licensor means the individual(s) or entity(ies) granting rights under this Public License. i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. j. 
Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. Section 2 -- Scope. a. License grant. 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: a. reproduce and Share the Licensed Material, in whole or in part; and b. produce, reproduce, and Share Adapted Material. 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 3. Term. The term of this Public License is specified in Section 6(a). 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a) (4) never produces Adapted Material. 5. Downstream recipients. a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. b. 
No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). b. Other rights. 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 2. Patent and trademark rights are not licensed under this Public License. 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. Section 3 -- License Conditions. Your exercise of the Licensed Rights is expressly made subject to the following conditions. a. Attribution. 1. If You Share the Licensed Material (including in modified form), You must: a. retain the following if it is supplied by the Licensor with the Licensed Material: i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. 
a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License. Section 4 -- Sui Generis Database Rights. Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database; b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. 
Section 5 -- Disclaimer of Warranties and Limitation of Liability. a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. Section 6 -- Term and Termination. a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 2. 
upon express reinstatement by the Licensor. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. Section 7 -- Other Terms and Conditions. a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. Section 8 -- Interpretation. a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. ======================================================================= Creative Commons is not a party to its public licenses. 
Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. Creative Commons may be contacted at creativecommons.org. ``` ### Citation Information Please cite the following paper when using this dataset. 
```latex @misc{fitzgerald2022massive, title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages}, author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan}, year={2022}, eprint={2204.08582}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{bastianelli-etal-2020-slurp, title = "{SLURP}: A Spoken Language Understanding Resource Package", author = "Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.588", doi = "10.18653/v1/2020.emnlp-main.588", pages = "7252--7262", abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp." } ```
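The `[{label} : {entity}]` slot format documented in the Data Fields section above can be unpacked with a small helper. A minimal sketch (the regex and the example utterance are illustrative assumptions, not taken from the corpus):

```python
import re

# Slot annotations in `annot_utt` look like "[date : friday]" per the
# Data Fields section; the exact regex below is an assumption.
SLOT_RE = re.compile(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]")

def extract_slots(annot_utt):
    """Return (plain_text, slots), where slots is a list of (label, entity)."""
    slots = SLOT_RE.findall(annot_utt)
    plain = SLOT_RE.sub(lambda m: m.group(2), annot_utt)
    return plain, slots

text, slots = extract_slots("wake me up at [time : nine am] on [date : friday]")
# slots -> [("time", "nine am"), ("date", "friday")]
```

The substitution step recovers something close to the raw `utt` field, which can be handy as a sanity check when iterating over localized utterances.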
johnowhitaker
null
null
null
false
2
false
johnowhitaker/vqgan16k_reconstruction
2022-04-24T06:13:26.000Z
null
false
5ccd054e794667994e2fd3b6a5ff01bed70f9acf
[]
[]
https://huggingface.co/datasets/johnowhitaker/vqgan16k_reconstruction/resolve/main/README.md
VQGAN is great, but leaves artifacts that are especially visible around things like faces. It'd be great to be able to train a model to fix ('devqganify') these flaws. For this purpose, I've made this dataset, which contains >100k examples, each with - A 512px image - A smaller 256px version of the same image - A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 16384 imagenet version from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) and then decoding the result. The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points. Let me know what you come up with :) Usage: ```python from datasets import load_dataset dataset = load_dataset('johnowhitaker/vqgan16k_reconstruction') dataset['train'][0]['image_256'] # Original image dataset['train'][0]['reconstruction_256'] # Reconstructed version ``` Approximate code used to prepare this data (vqgan model was changed for this version): https://colab.research.google.com/drive/1AXzlRMvAIE6krkpFwFnFr2c5SnOsygf-?usp=sharing (let me know if you hit issues) The VQGAN model used for this version: https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/ See also: https://huggingface.co/datasets/johnowhitaker/vqgan1024_reconstruction (same idea but vqgan with smaller vocab size of 1024)
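A rough way to quantify the artifacts mentioned above is the per-pixel reconstruction error between an original image and its VQGAN round trip. A minimal NumPy sketch; the metric choice and the toy arrays are assumptions, with field names following the usage snippet above:

```python
import numpy as np

def reconstruction_mse(original, reconstruction):
    """Mean squared error between two same-shape uint8 images, scaled to [0, 1]."""
    a = np.asarray(original, dtype=np.float32) / 255.0
    b = np.asarray(reconstruction, dtype=np.float32) / 255.0
    return float(np.mean((a - b) ** 2))

# Toy arrays standing in for dataset['train'][i]['image_256'] and
# dataset['train'][i]['reconstruction_256'] (np.asarray also accepts PIL images).
clean = np.zeros((256, 256, 3), dtype=np.uint8)
noisy = clean.copy()
noisy[:, :, 0] = 255  # maximally corrupt one of the three channels
err = reconstruction_mse(clean, noisy)  # one third of the values differ by 1.0
```

A metric like this could be used to sort examples by how badly VQGAN damaged them, e.g. to oversample hard cases when training a devqganify model.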
chrishuber
null
null
null
false
99
false
chrishuber/kaggle_mnli
2022-04-23T19:19:52.000Z
null
false
ebe02645e5511e32c87c79746a75dc2d45bae062
[]
[ "arxiv:1704.05426" ]
https://huggingface.co/datasets/chrishuber/kaggle_mnli/resolve/main/README.md
# Dataset Card for [Kaggle MNLI] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://www.kaggle.com/c/multinli-matched-open-evaluation ** - **Repository: chrishuber/roberta-retrained-mlni ** - **Paper: Inference Detection in NLP Using the MultiNLI and SNLI Datasets** - **Leaderboard: 8** - **Point of Contact: chrish@sfsu.edu** ### Dataset Summary [These are the datasets posted to Kaggle for an inference detection NLP competition. Moving them here to use with Pytorch.] ### Supported Tasks and Leaderboards Provides train and validation data for sentence pairs with inference labels. 
[https://www.kaggle.com/competitions/multinli-matched-open-evaluation/leaderboard] [https://www.kaggle.com/competitions/multinli-mismatched-open-evaluation/leaderboard] ### Languages [JSON, Python] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [Reposted from https://www.kaggle.com/c/multinli-matched-open-evaluation and https://www.kaggle.com/c/multinli-mismatched-open-evaluation] ### Source Data #### Initial Data Collection and Normalization [Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.] #### Who are the source language producers? [Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.] ### Annotations #### Annotation process [Crowdsourcing using MechanicalTurk.] #### Who are the annotators? [MechanicalTurk users.] ### Personal and Sensitive Information [None.] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [Kaggle] ### Licensing Information [More Information Needed] ### Citation Information [https://www.kaggle.com/c/multinli-matched-open-evaluation] [https://www.kaggle.com/c/multinli-mismatched-open-evaluation] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
IDKiro
null
null
null
false
1
false
IDKiro/RS-Haze
2022-04-24T03:13:23.000Z
null
false
097c993ccff5f61f4c0cfc30e09203e60f214e6a
[]
[ "license:gpl-3.0" ]
https://huggingface.co/datasets/IDKiro/RS-Haze/resolve/main/README.md
--- license: gpl-3.0 ---
d0r1h
null
null
null
false
3
false
d0r1h/Shlokam
2022-10-25T10:09:04.000Z
null
false
ac3f65840a512ce745231e9d6339c2bc83e61582
[]
[ "annotations_creators:found", "language_creators:found", "language:sn", "language:en", "license:cc-by-3.0", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original" ]
https://huggingface.co/datasets/d0r1h/Shlokam/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - sn - en license: cc-by-3.0 multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - original pretty_name: Shlokam --- ## Dataset Description - **Homepage:** None - **Repository:** None - **Paper:** None - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed]
McGill-NLP
null
@article{dziri2022faithdial, title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue}, author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva}, journal={arXiv preprint, arXiv:2204.10757}, year={2022}, url={https://arxiv.org/abs/2204.10757} }
FaithDial is a new benchmark for hallucination-free dialogues, created by manually editing hallucinated and uncooperative responses in Wizard of Wikipedia.
false
56
false
McGill-NLP/FaithDial
2022-10-25T05:06:34.000Z
null
false
47a8e1316d167084bc9dbbb5c02a1ccbaa0c32a9
[]
[ "arxiv:2204.10757", "license:mit", "annotations_creators:crowdsourced", "language:en", "multilinguality:monolingual", "size_categories:10K<n<100k", "task_categories:text-generation", "task_categories:conversational", "task_ids:dialogue-modeling", "task_ids:faithful-dialogue-modeling", "task_ids:...
https://huggingface.co/datasets/McGill-NLP/FaithDial/resolve/main/README.md
--- license: - mit annotations_creators: - crowdsourced language: - en multilinguality: - monolingual size_categories: - 10K<n<100k task_categories: - text-generation - conversational task_ids: - dialogue-modeling - faithful-dialogue-modeling - trustworthy-dialogue-modeling pretty_name: A Faithful Benchmark for Information-Seeking Dialogue --- ## Dataset Summary FaithDial is a faithful knowledge-grounded dialogue benchmark, composed of **50,761** turns spanning **5649** conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) (WoW). In our dialogue setting, we simulate interactions between two speakers: **an information seeker** and **a bot wizard**. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules: - **First**, it should be truthful by providing information that is attributable to the source knowledge *K*. - **Second**, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions. - **Third**, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*. ## Dataset Description - **Homepage:** [FaithDial](https://mcgill-nlp.github.io/FaithDial/) - **Repository:** [GitHub](https://github.com/McGill-NLP/FaithDial) - **Point of Contact:** [Nouha Dziri](mailto:dziri@ualberta.ca) ## Language English ## Data Instance An example of 'train' looks as follows: ```text [ { "utterances": [ ... // prior utterances, { "history": [ "Have you ever been to a concert? They're so fun!", "No I cannot as a bot. However, have you been to Madonna's? 
Her 10th concert was used to help her 13th album called \"Rebel Heart\".", "Yeah I've heard of it but never went or what it was for. Can you tell me more about it?" ], "speaker": "Wizard", "knowledge": "It began on September 9, 2015, in Montreal, Canada, at the Bell Centre and concluded on March 20, 2016, in Sydney, Australia at Allphones Arena.", "original_response": "It started in September of 2015 and ran all the way through March of 2016. Can you imagine being on the road that long?", "response": "Sure. The concert started in September 9th of 2015 at Montreal, Canada. It continued till 20th of March of 2016, where it ended at Sydney, Australia.", "BEGIN": [ "Hallucination", "Entailment" ], "VRM": [ "Disclosure", "Question" ] }, ... // more utterances ] }, ... // more dialogues ] ``` If the `original_response` is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos. ## Data Fields - `history`: `List[string]`. The dialogue history. - `knowledge`: `string`. The source knowledge on which the bot wizard should ground its response. - `speaker`: `string`. The current speaker. - `original_response`: `string`. The WoW original response before editing it. - `response`: `string`. The new Wizard response. - `BEGIN`: `List[string]`. The BEGIN labels for the Wizard response. - `VRM`: `List[string]`. The VRM labels for the wizard response. ## Data Splits - `Train`: 36809 turns - `Valid`: 6851 turns - `Test`: 7101 turns `Valid` includes both the `seen` and the `unseen` data splits from WoW. The same applies to `Test`. We also include those splits for FaithDial valid and test data. ## Annotations Following the guidelines for ethical crowdsourcing outlined in [Sheehan. 
2018](https://www.tandfonline.com/doi/abs/10.1080/03637751.2017.1342043), we hire Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples for our terminology. Moreover, we performed several staging rounds over the course of several months. ### Who are the annotators? To be eligible for the task, workers have to be located in the United States and Canada and have to successfully answer 20 questions as part of a qualification test. Before launching the main annotation task, we perform a small pilot round (60 HITS) to check the performance of the workers. We email workers who commit errors, providing them with examples on how to fix their mistakes in future HITS. ## Personal and Sensitive Information Seeker utterances in FaithDial may contain personal and sensitive information. ## Social Impact of Dataset In recent years, the conversational AI market has seen a proliferation of a variety of applications, powered by large pre-trained LMs, that span a broad range of domains, such as customer support, education, e-commerce, health, entertainment, etc. Ensuring that these systems are trustworthy is key to deploying systems safely at large scale in real-world applications, especially in high-stakes domains. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and make virtual assistants both safer and more reliable. ## Licensing Information MIT ## Citation Information ```bibtex @article{dziri2022faithdial, title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue}, author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva}, journal={arXiv preprint, arXiv:2204.10757}, year={2022}, url={https://arxiv.org/abs/2204.10757} } ```
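Since an empty `original_response` marks a turn that was already faithful in WoW, the amended (originally hallucinated) Wizard turns can be filtered with a one-liner. A minimal sketch over the turn structure shown in the Data Instance section; the toy records and their values are illustrative:

```python
def edited_turns(utterances):
    """Keep Wizard turns whose WoW response was amended by annotators
    (a non-empty `original_response` marks an edited turn)."""
    return [
        u for u in utterances
        if u.get("speaker") == "Wizard" and u.get("original_response")
    ]

# Toy records mirroring the fields documented above (values are made up).
turns = [
    {"speaker": "Wizard", "original_response": "", "response": "Already faithful."},
    {"speaker": "Wizard", "original_response": "Hallucinated claim.",
     "response": "Grounded rewrite."},
]
kept = edited_turns(turns)  # only the second toy turn survives
```

Pairing each turn's `original_response` with its edited `response` in this way yields (hallucinated, faithful) pairs, which is one natural training signal for faithfulness models.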
Fhrozen
null
null
null
false
2
false
Fhrozen/tau_srir_db
2022-05-06T15:43:42.000Z
null
false
2191bab9142048702b9dc42b96a632237f35c993
[]
[ "license:unknown", "annotations_creators:unknown", "language_creators:unknown", "size_categories:n<1K", "source_datasets:unknown", "task_categories:audio-classification", "task_ids:other-audio-slot-filling" ]
https://huggingface.co/datasets/Fhrozen/tau_srir_db/resolve/main/README.md
--- license: unknown annotations_creators: - unknown language_creators: - unknown size_categories: - n<1K source_datasets: - unknown task_categories: - audio-classification task_ids: - other-audio-slot-filling --- # TAU Spatial Room Impulse Response Database (TAU-SRIR DB) ## Important **This is a copy from the Zenodo Original one** ## Description [Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/) AUTHORS **Tampere University** - Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en)) - Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in)) - Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/)) **Data Collection 2019-2020** - Archontis Politis - Aapo Hakala - Ali Gohar **Data Collection 2017-2018** - Sharath Adavanne - Aapo Hakala - Eemi Fagerlund - Aino Koskimies The **TAU Spatial Room Impulse Response Database (TAU-SRIR DB)** database contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulation of spatial multichannel recordings for evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinct properties of the database compared to other databases of room impulse responses are: - Capturing in a high resolution multichannel format (32 channels) from which multiple more limited application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural). - Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving source scenarios. 
- Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods. - Multiple rooms, allowing evaluation of methods at various acoustic conditions, and training of methods with the aim of generalization on different rooms. The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement [637422 EVERYSOUND](https://cordis.europa.eu/project/id/637422). [![ERC](https://erc.europa.eu/sites/default/files/content/erc_banner-horizontal.jpg "ERC")](https://erc.europa.eu/) > **NOTE**: This database is a work-in-progress. We intend to publish additional rooms, additional formats, and potentially higher-fidelity versions of the captured responses in the near future, as new versions of the database in this repository. ## Report and reference A compact description of the dataset, recording setup, recording procedure, and extraction can be found in: >Politis., Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). **A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection**. In _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)_, Tokyo, Japan. available [here](https://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Politis_88.pdf). A more detailed report specifically focusing on the dataset collection and properties will follow. ## Aim The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. 
It is especially suitable for the following application scenarios: - monophonic and multichannel single- or multi-source speech in multi-room reverberant conditions, - monophonic and multichannel polyphonic sound events in multi-room reverberant conditions, - single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios, - single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios, - sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios. ## Specifications The SRIRs were captured using an [Eigenmike](https://mhacoustics.com/products) spherical microphone array. A [Genelec G Three loudspeaker](https://www.genelec.com/g-three) was used to play back a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and the far-field recording, independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24kHz for compactness. The currently published SRIR set was recorded at nine different indoor locations inside the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ with the room. Possible azimuths span the whole range of $\phi\in[-180,180)$, while the elevations span approximately a range between $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows: 1. Large open space in underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise. 2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms. 3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise. 4.
Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise. 5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise. 6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise. 7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise. 8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise. 9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise. The measurement trajectories were organized in groups, with each group being specified by a circular or linear trace at the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories two ranges were measured, a _close_ and a _far_ one, except room TC352, where the same range was measured twice, but with different furniture configuration and open or closed doors. For linear trajectories also two ranges were measured, _close_ and _far_, but with linear paths at either side of the array, resulting in 4 unique trajectory groups, with the exception of room SA203, where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups are always parallel to each other, in the same room. Each trajectory group had multiple measurement trajectories, following the same floor path, but with the source at different heights. The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately 1 degree from the microphone. This extraction scheme, instead of extracting SRIRs at equally spaced points along the path (e.g. every 20cm), was found more practical for synthesis purposes, making emulation of moving sources at an approximately constant angular speed easier. The following table summarizes the above properties for the currently available rooms:

| | Room name | Room type | Traj. type | # ranges | # trajectory groups | # heights/group | # trajectories (total) | # RIRs/DOAs |
|---|--------------------------|----------------------------|------------|-------------|-----------------------|---------------------|------------------------|-------------|
| 1 | Bomb shelter | Complex/semi-open | Circular | 2 | 2 | 9 | 18 | 6480 |
| 2 | Gym | Rectangular/large | Circular | 2 | 2 | 9 | 18 | 6480 |
| 3 | PB132 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 4 | PC226 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 5 | SA203 Lecture hall | Trapezoidal/large | Linear | 3 | 6 | 3 | 18 | 1594 |
| 6 | SC203 Classroom | Rectangular/medium | Linear | 2 | 4 | 5 | 20 | 1592 |
| 7 | SE203 Classroom | Rectangular/large | Linear | 2 | 4 | 4 | 16 | 1760 |
| 8 | TB103 Classroom | Trapezoidal/large | Linear | 2 | 4 | 3 | 12 | 1184 |
| 9 | TC352 Meeting room | Rectangular/small | Circular | 1 | 2 | 9 | 18 | 6480 |

More details on the trajectory geometries can be found in the database info file (`measinfo.mat`). ## Recording formats The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$. **For the first-order ambisonics (FOA):** \begin{eqnarray} H_1(\phi, \theta, f) &=& 1 \\ H_2(\phi, \theta, f) &=& \sin(\phi) * \cos(\theta) \\ H_3(\phi, \theta, f) &=& \sin(\theta) \\ H_4(\phi, \theta, f) &=& \cos(\phi) * \cos(\theta) \end{eqnarray} The FOA format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response.
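The ideal FOA steering vectors listed above are simple enough to evaluate directly. Below is a minimal sketch of the theoretical formulas only — not the measured encoding filters actually used to produce the data:

```python
import math

def foa_steering(azi_deg, ele_deg):
    """Ideal first-order Ambisonics response (ordered H1..H4 as in the
    formulas above) for a plane wave from the given azimuth/elevation."""
    phi = math.radians(azi_deg)
    theta = math.radians(ele_deg)
    return [
        1.0,                              # H1: omnidirectional
        math.sin(phi) * math.cos(theta),  # H2
        math.sin(theta),                  # H3
        math.cos(phi) * math.cos(theta),  # H4
    ]

# A source straight ahead (azimuth 0, elevation 0) excites only H1 and H4.
print(foa_steering(0.0, 0.0))  # [1.0, 0.0, 0.0, 1.0]
```

As noted below, this frequency-independent model is only accurate up to around 9 kHz for the actual encoded Eigenmike responses.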
Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9kHz with the specific microphone array, while the actual encoded responses start to deviate gradually at higher frequencies from the ideal ones provided above. Routines that can compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found [here](https://github.com/polarch/Spherical-Array-Processing). **For the tetrahedral microphone array (MIC):** The four microphones have the following positions, in spherical coordinates $(\phi, \theta, r)$: \begin{eqnarray} M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\ M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\ M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\ M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber \end{eqnarray} Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion: \begin{equation} H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m)) \end{equation} where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$m is the array radius, $c = 343$m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine of the angle between the microphone and the DOA, $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative with respect to the argument of the spherical Hankel function of the second kind. The expansion is limited to 30 terms, which provides negligible modeling error up to 20kHz.
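The term $\cos(\gamma_m)$ in the expansion is simply the dot product between the unit vector toward microphone $m$ and the unit vector toward the DOA. A small sketch of that piece (assuming the usual x-front, y-left, z-up convention, which is an assumption here, not stated in the specifications):

```python
import math

def unit_vec(azi_deg, ele_deg):
    """Unit vector for azimuth/elevation in degrees (x toward azi=0, ele=0)."""
    phi, theta = math.radians(azi_deg), math.radians(ele_deg)
    return (math.cos(theta) * math.cos(phi),
            math.cos(theta) * math.sin(phi),
            math.sin(theta))

def cos_gamma(mic_azi, mic_ele, doa_azi, doa_ele):
    """cos(gamma_m): cosine of the angle between microphone m and the DOA."""
    a, b = unit_vec(mic_azi, mic_ele), unit_vec(doa_azi, doa_ele)
    return sum(x * y for x, y in zip(a, b))

# Channel M1 sits at (45, 35) degrees; a DOA right on top of it gives cos(gamma) = 1.
print(round(cos_gamma(45, 35, 45, 35), 6))  # 1.0
```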
Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator). ## Reference directions-of-arrival For each extracted RIR across a measurement trajectory there is a direction-of-arrival (DOA) associated with it, which can be used as the reference direction for a sound source spatialized using this RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs, by windowing the direct-sound part and applying a broadband version of the MUSIC localization algorithm on the windowed multichannel signal. The DOAs are provided as Cartesian components [x, y, z] of unit-length vectors. ## Scene generator A set of routines is shared, here termed the scene generator, that can spatialize a bank of sound samples using the SRIRs and noise recordings of this library, to emulate scenes for the two target formats. The code is the same as the one used to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset, and has been ported to Python from the original version written in Matlab. The generator can be found [**here**](https://github.com/danielkrause/DCASE2022-data-generator), along with more details on its use. The generator at the moment is set to work with the [NIGENS](https://zenodo.org/record/2535878) sound event sample database, and the [FSD50K](https://zenodo.org/record/4060432) sound event database, but additional sample banks can be added with small modifications.
The dataset together with the generator has been used by the authors in the following public challenges: - [DCASE 2019 Challenge Task 3](https://dcase.community/challenge2019/task-sound-event-localization-and-detection), to generate the **TAU Spatial Sound Events 2019** dataset ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088)) - [DCASE 2020 Challenge Task 3](https://dcase.community/challenge2020/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2020**](https://doi.org/10.5281/zenodo.4064792) dataset - [DCASE2021 Challenge Task 3](https://dcase.community/challenge2021/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset - [DCASE2022 Challenge Task 3](https://dcase.community/challenge2022/task-sound-event-localization-and-detection), to generate additional [SELD synthetic mixtures for training the task baseline](https://doi.org/10.5281/zenodo.6406873) > **NOTE**: The current version of the generator is work-in-progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact [daniel.krause@tuni.fi](mailto:daniel.krause@tuni.fi), or [archontis.politis@tuni.fi](mailto:archontis.politis@tuni.fi). ## Dataset structure The dataset contains a folder of the SRIRs (`TAU-SRIR_DB`), with all the SRIRs per room in a single _mat_ file, e.g. `rirs_09_tb103.mat`. The specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains an `rirs` array of 4x3 structures, each with the fields `mic` and `foa`. Selecting e.g. the 2nd trajectory and 3rd height with `rirs(2,3)` returns `mic` and `foa` fields with an array of size `[7200x4x114]` on each. 
The array contains the SRIRs for the specific format, and it is arranged as `[samples x channels x DOAs]`, meaning that 300 msec long (7200 samples @ 24kHz) 4-channel RIRs are extracted at 114 positions along that specific trajectory. The file `rirdata.mat` contains some general information such as sample rate, format specifications, and most importantly the DOAs of every extracted SRIR. Those can be found in the `rirdata.room` field, which is itself an array of 9 structures, one per room. Checking for example `rirdata.room(8)` returns the name of the specific room (_tb103_), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. `rirdata.room(8).rirs(2,3).doa_xyz`, which returns an array of size `[114x3]`. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room `TB103`. The file `measinfo.mat` contains measurement and recording information for each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, start and end positions for the linear trajectories, or distances from center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure level of the measurement noise signal. Coordinates are defined with respect to the origin being at the base of the microphone. Based on the information included in `measinfo.mat`, one can plot a 3D arrangement of the trajectories around the microphone, though keep in mind that these would be the ideal intended circular or linear trajectories, while the actual DOAs obtained from acoustic analysis have some deviations around those ideal paths.
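Spatializing a dry mono sample with one of these SRIRs amounts to convolving the sample with each of the RIR's channels. A dependency-free sketch with toy data follows; in practice one would load the `.mat` files (e.g. with `scipy.io.loadmat`) and use the `[7200 x 4]` slices described above instead of the toy lists here:

```python
def convolve(x, h):
    """Direct-form linear convolution of two 1-D sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def spatialize(mono, rir_channels):
    """Convolve a mono signal with each channel of an SRIR -> multichannel output."""
    return [convolve(mono, h) for h in rir_channels]

mono = [1.0, 0.5, 0.25]                   # toy dry signal
rir = [[1.0, 0.0, 0.3], [0.0, 1.0, 0.0]]  # toy 2-channel RIR (rows = channels)
out = spatialize(mono, rir)
print(len(out), len(out[0]))  # 2 5
```

Emulating a moving source, as the scene generator does, then amounts to cross-fading between convolutions with the SRIRs of consecutive DOAs along a trajectory.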
Finally, the dataset contains a folder of spatial ambient noise recordings (`TAU-SNoise_DB`), with one subfolder per room having two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 mins to 30 mins. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to generate more ambience than the original recording time provides. Such a use case is demonstrated in the scene generator examples. ## Download The files `TAU-SRIR_DB.z01`, ..., `TAU-SRIR_DB.zip` contain the SRIRs and measurement info files. The files `TAU-SNoise_DB.z01`, ..., `TAU-SNoise_DB.zip` contain the ambient noise recordings. Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in a Linux or OSX terminal: Combine the split archive into a single archive: >zip -s 0 split.zip --out single.zip Extract the single archive using unzip: >unzip single.zip ## License The database is published under a custom **open non-commercial with attribution** license. It can be found in the `LICENSE.txt` file that accompanies the data.
NbAiLab
null
null
null
false
2
false
NbAiLab/old_hesitate
2022-05-10T09:35:45.000Z
null
false
5d6ff96175083092f0c6c834dab88bfb3f8f6710
[]
[]
https://huggingface.co/datasets/NbAiLab/old_hesitate/resolve/main/README.md
pietrolesci
null
null
null
false
1
false
pietrolesci/dialogue_nli
2022-04-25T08:39:10.000Z
null
false
f23677a6713b1558fe0e6ba3ec8db76ec8e49e98
[]
[]
https://huggingface.co/datasets/pietrolesci/dialogue_nli/resolve/main/README.md
## Overview Original dataset available [here](https://wellecks.github.io/dialogue_nli/). ## Dataset curation The original `label` column is renamed `original_label`. The original classes are renamed as follows ``` {"positive": "entailment", "negative": "contradiction", "neutral": "neutral"} ``` and encoded with the following mapping ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` and stored in the newly created column `label`. The following splits and the corresponding columns are present in the original files ``` train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} verified_test {'dtype', 'annotation3', 'sentence1', 'sentence2', 'annotation1', 'annotation2', 'original_label', 'label', 'triple2', 'triple1'} extra_test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} extra_dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} extra_train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} valid_havenot {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} valid_attributes {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} valid_likedislike {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'} ``` Note that I only keep the common columns, which means that I drop "annotation{1, 2, 3}" from `verified_test`. Note that there are some splits with the same instances, as found by matching on "original_label", "sentence1", "sentence2".
## Code to create dataset ```python import pandas as pd from pathlib import Path import json from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, Sequence # load data ds = {} for path in Path(".").rglob("<path to folder>/*.jsonl"): print(path, flush=True) with path.open("r") as fl: data = fl.read() try: d = json.loads(data) except json.JSONDecodeError as error: print(error) df = pd.DataFrame(d) # encode labels df["original_label"] = df["label"] df["label"] = df["label"].map({"positive": "entailment", "negative": "contradiction", "neutral": "neutral"}) df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds[path.name.split(".")[0]] = df # prettify names of data splits datasets = { k.replace("dialogue_nli_", "").replace("uu_", "").lower(): v for k, v in ds.items() } datasets.keys() #> dict_keys(['train', 'dev', 'test', 'verified_test', 'extra_test', 'extra_dev', 'extra_train', 'valid_havenot', 'valid_attributes', 'valid_likedislike']) # cast to datasets using only common columns features = Features({ "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), "sentence1": Value(dtype="string", id=None), "sentence2": Value(dtype="string", id=None), "triple1": Sequence(feature=Value(dtype="string", id=None), length=3), "triple2": Sequence(feature=Value(dtype="string", id=None), length=3), "dtype": Value(dtype="string", id=None), "id": Value(dtype="string", id=None), "original_label": Value(dtype="string", id=None), }) ds = {} for name, df in datasets.items(): if "id" not in df.columns: df["id"] = "" ds[name] = Dataset.from_pandas(df.loc[:, list(features.keys())], features=features) ds = DatasetDict(ds) ds.push_to_hub("dialogue_nli", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["original_label", "sentence1", "sentence2"],
how="inner", ).shape[0], ) #> train - dev: 58 #> train - test: 98 #> train - verified_test: 90 #> train - extra_test: 0 #> train - extra_dev: 0 #> train - extra_train: 0 #> train - valid_havenot: 0 #> train - valid_attributes: 0 #> train - valid_likedislike: 0 #> dev - test: 19 #> dev - verified_test: 19 #> dev - extra_test: 0 #> dev - extra_dev: 75 #> dev - extra_train: 75 #> dev - valid_havenot: 75 #> dev - valid_attributes: 75 #> dev - valid_likedislike: 75 #> test - verified_test: 12524 #> test - extra_test: 34 #> test - extra_dev: 0 #> test - extra_train: 0 #> test - valid_havenot: 0 #> test - valid_attributes: 0 #> test - valid_likedislike: 0 #> verified_test - extra_test: 29 #> verified_test - extra_dev: 0 #> verified_test - extra_train: 0 #> verified_test - valid_havenot: 0 #> verified_test - valid_attributes: 0 #> verified_test - valid_likedislike: 0 #> extra_test - extra_dev: 0 #> extra_test - extra_train: 0 #> extra_test - valid_havenot: 0 #> extra_test - valid_attributes: 0 #> extra_test - valid_likedislike: 0 #> extra_dev - extra_train: 250946 #> extra_dev - valid_havenot: 250946 #> extra_dev - valid_attributes: 250946 #> extra_dev - valid_likedislike: 250946 #> extra_train - valid_havenot: 250946 #> extra_train - valid_attributes: 250946 #> extra_train - valid_likedislike: 250946 #> valid_havenot - valid_attributes: 250946 #> valid_havenot - valid_likedislike: 250946 #> valid_attributes - valid_likedislike: 250946 ```
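The pairwise overlap check above can also be sketched without pandas, matching splits on exact `(sentence1, sentence2, label)` tuples (toy rows below, not the real splits; unlike an inner `pd.merge`, duplicated matches are not multiplied here):

```python
def overlap(split_a, split_b, keys=("sentence1", "sentence2", "label")):
    """Count rows of split_a whose key tuple also appears in split_b."""
    seen = {tuple(row[k] for k in keys) for row in split_b}
    return sum(tuple(row[k] for k in keys) in seen for row in split_a)

# toy rows standing in for dataset splits
train = [{"sentence1": "a", "sentence2": "b", "label": 0},
         {"sentence1": "c", "sentence2": "d", "label": 2}]
dev = [{"sentence1": "a", "sentence2": "b", "label": 0}]
print(overlap(train, dev))  # 1
```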
pietrolesci
null
null
null
false
2
false
pietrolesci/dnc
2022-04-25T08:59:06.000Z
null
false
1b34f1c8b073c6782b68dc3c5c10ef6356a284d3
[]
[]
https://huggingface.co/datasets/pietrolesci/dnc/resolve/main/README.md
## Overview Original dataset [here](https://github.com/decompositional-semantics-initiative/DNC). This dataset has been proposed in [Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation](https://www.aclweb.org/anthology/D18-1007/). ## Dataset curation This version of the dataset does not include the `type-of-inference` "KG" as its label set is `[1, 2, 3, 4, 5]` while here we focus on NLI-related label sets, i.e. `[entailed, not-entailed]`. For this reason, I named the dataset DNLI for _Diverse_ NLI, as in [Liu et al 2020](https://aclanthology.org/2020.conll-1.48/), instead of DNC. This version of the dataset contains columns from the `*_data.json` and the `*_metadata.json` files available in the repo. In the original repo, each data file has the following keys and values: - `context`: The context sentence for the NLI pair. The context is already tokenized. - `hypothesis`: The hypothesis sentence for the NLI pair. The hypothesis is already tokenized. - `label`: The label for the NLI pair - `label-set`: The set of possible labels for the specific NLI pair - `binary-label`: A `True` or `False` label. See the paper for details on how we convert the `label` into a binary label. - `split`: This can be `train`, `dev`, or `test`. - `type-of-inference`: A string indicating what type of inference is tested in this example. - `pair-id`: A unique integer id for the NLI pair. The `pair-id` is used to find the corresponding metadata for any given NLI pair while each metadata file has the following columns - `pair-id`: A unique integer id for the NLI pair. - `corpus`: The original corpus where this example came from. - `corpus-sent-id`: The id of the sentence (or example) in the original dataset that we recast. - `corpus-license`: The license for the data from the original dataset. - `creation-approach`: Determines the method used to recast this example. Options are `automatic`, `manual`, or `human-labeled`. 
- `misc`: A dictionary of other relevant information. This is an optional field. The files are merged on the `pair-id` key. I **do not** include the `misc` column as it is not essential for NLI. NOTE: the label mapping is **not** the customary 3-class one used for NLI tasks. The dataset uses a binary target, which I encoded with the following mapping `{"not-entailed": 0, "entailed": 1}`. NOTE: some instances are present in multiple splits (found by exact matching on "context", "hypothesis", and "label"). ## Code to create the dataset ```python import pandas as pd from datasets import Dataset, ClassLabel, Value, Features, DatasetDict, Sequence from pathlib import Path paths = { "train": "<path_to_folder>/DNC-master/train", "dev": "<path_to_folder>/DNC-master/dev", "test": "<path_to_folder>/DNC-master/test", } # read all data files dfs = [] for split, path in paths.items(): for f_name in Path(path).rglob("*_data.json"): df = pd.read_json(str(f_name)) df["file_split_data"] = split dfs.append(df) data = pd.concat(dfs, ignore_index=False, axis=0) # read all metadata files meta_dfs = [] for split, path in paths.items(): for f_name in Path(path).rglob("*_metadata.json"): df = pd.read_json(str(f_name)) meta_dfs.append(df) metadata = pd.concat(meta_dfs, ignore_index=False, axis=0) # merge dataset = pd.merge(data, metadata, on="pair-id", how="left") # check that the split column reflects file splits assert sum(dataset["split"] != dataset["file_split_data"]) == 0 dataset = dataset.drop(columns=["file_split_data"]) # fix `binary-label` column dataset.loc[~dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = False dataset.loc[dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = True # fix datatype dataset["corpus-sent-id"] = dataset["corpus-sent-id"].astype(str) # order columns as shown in the README.md columns = [ "context", "hypothesis", "label", "label-set", "binary-label", "split", "type-of-inference", "pair-id", "corpus",
"corpus-sent-id", "corpus-license", "creation-approach", "misc", ] dataset = dataset.loc[:, columns] # remove misc column dataset = dataset.drop(columns=["misc"]) # remove KG for NLI dataset.loc[(dataset["label"].isin([1, 2, 3, 4, 5])), "type-of-inference"].value_counts() # > the only split with label-set [1, 2, 3, 4, 5], so remove as we focus on NLI dataset = dataset.loc[~(dataset["type-of-inference"] == "KG")] # encode labels dataset["label"] = dataset["label"].map({"not-entailed": 0, "entailed": 1}) # fill NA in label-set dataset["label-set"] = dataset["label-set"].ffill() features = Features( { "context": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]), "label-set": Sequence(length=2, feature=Value(dtype="string")), "binary-label": Value(dtype="bool"), "split": Value(dtype="string"), "type-of-inference": Value(dtype="string"), "pair-id": Value(dtype="int64"), "corpus": Value(dtype="string"), "corpus-sent-id": Value(dtype="string"), "corpus-license": Value(dtype="string"), "creation-approach": Value(dtype="string"), } ) dataset_splits = {} for split in ("train", "dev", "test"): df_split = dataset.loc[dataset["split"] == split] dataset_splits[split] = Dataset.from_pandas(df_split, features=features) dataset_splits = DatasetDict(dataset_splits) dataset_splits.push_to_hub("pietrolesci/dnli", token="<your token>") # check overlap between splits from itertools import combinations for i, j in combinations(dataset_splits.keys(), 2): print( f"{i} - {j}: ", pd.merge( dataset_splits[i].to_pandas(), dataset_splits[j].to_pandas(), on=["context", "hypothesis", "label"], how="inner", ).shape[0], ) #> train - dev: 127 #> train - test: 55 #> dev - test: 54 ```
jordane95
null
@misc{bajaj2018ms, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang}, year={2018}, eprint={1611.09268}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
false
3
false
jordane95/msmarco-passage-query
2022-04-25T09:51:26.000Z
null
false
bf371176ef4483c941b8aebc6403dcb6c33368f0
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/jordane95/msmarco-passage-query/resolve/main/README.md
--- license: afl-3.0 ---
jordane95
null
@misc{bajaj2018ms, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang}, year={2018}, eprint={1611.09268}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
false
2
false
jordane95/msmarco-passage-query-corpus
2022-04-25T09:48:38.000Z
null
false
40dc85993a2af7b0a78a0daa32beb174cf000e84
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/jordane95/msmarco-passage-query-corpus/resolve/main/README.md
--- license: afl-3.0 ---
Meena
null
null
null
false
2
false
Meena/imdb_ratings_table
2022-04-25T09:25:49.000Z
null
false
bbf6138e30cff48af0b9fa46ed710f68400dde85
[]
[]
https://huggingface.co/datasets/Meena/imdb_ratings_table/resolve/main/README.md
This dataset contains IMDB ratings of movies in various languages. It also contains the number of votes each movie received.
pietrolesci
null
null
null
false
3
false
pietrolesci/stress_tests_nli
2022-04-25T09:32:28.000Z
null
false
fbd6fcc5c3b8dc79ad26eaced52d7f04c6fea6d7
[]
[]
https://huggingface.co/datasets/pietrolesci/stress_tests_nli/resolve/main/README.md
## Overview Original dataset page [here](https://abhilasharavichander.github.io/NLI_StressTest/) and dataset available [here](https://drive.google.com/open?id=1faGA5pHdu5Co8rFhnXn-6jbBYC2R1dhw). ## Dataset curation Added new column `label` with encoded labels with the following mapping ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` and the columns with parse information are dropped as they are not well formatted. Also, the name of the file from which each instance comes is added in the column `dtype`. ## Code to create the dataset ```python import pandas as pd from datasets import Dataset, ClassLabel, Value, Features, DatasetDict import json from pathlib import Path # load data ds = {} path = Path("<path to folder>") for i in path.rglob("*.jsonl"): print(i) name = str(i).split("/")[0].lower() dtype = str(i).split("/")[1].lower() # read data with i.open("r") as fl: df = pd.DataFrame([json.loads(line) for line in fl]) # select columns df = df.loc[:, ["sentence1", "sentence2", "gold_label"]] # add file name as column df["dtype"] = dtype # encode labels df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds[name] = df # cast to dataset features = Features( { "sentence1": Value(dtype="string"), "sentence2": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), "dtype": Value(dtype="string"), "gold_label": Value(dtype="string"), } ) ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()}) ds.push_to_hub("pietrolesci/stress_tests_nli", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["sentence1", "sentence2", "label"], how="inner", ).shape[0], ) #> numerical_reasoning - negation: 0 #> numerical_reasoning - length_mismatch: 0 #> numerical_reasoning - spelling_error: 0 #> numerical_reasoning - 
word_overlap: 0 #> numerical_reasoning - antonym: 0 #> negation - length_mismatch: 0 #> negation - spelling_error: 0 #> negation - word_overlap: 0 #> negation - antonym: 0 #> length_mismatch - spelling_error: 0 #> length_mismatch - word_overlap: 0 #> length_mismatch - antonym: 0 #> spelling_error - word_overlap: 0 #> spelling_error - antonym: 0 #> word_overlap - antonym: 0 ```
pietrolesci
null
null
null
false
1
false
pietrolesci/gen_debiased_nli
2022-04-25T09:49:52.000Z
null
false
8526f3a347c2d5760dc79a3dbe88134cc89c36b9
[]
[]
https://huggingface.co/datasets/pietrolesci/gen_debiased_nli/resolve/main/README.md
## Overview Original dataset available [here](https://github.com/jimmycode/gen-debiased-nli#training-with-our-datasets). ```latex @inproceedings{gen-debiased-nli-2022, title = "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets", author = "Wu, Yuxiang and Gardner, Matt and Stenetorp, Pontus and Dasigi, Pradeep", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics", month = may, year = "2022", publisher = "Association for Computational Linguistics", } ``` ## Dataset curation No curation. ## Code to create the dataset ```python import pandas as pd from datasets import Dataset, ClassLabel, Value, Features, DatasetDict import json from pathlib import Path # load data path = Path("./") ds = {} for i in path.rglob("*.jsonl"): print(i) name = str(i).split(".")[0].lower().replace("-", "_") with i.open("r") as fl: df = pd.DataFrame([json.loads(line) for line in fl]) ds[name] = df # cast to dataset features = Features( { "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), "type": Value(dtype="string"), } ) ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()}) ds.push_to_hub("pietrolesci/gen_debiased_nli", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> mnli_seq_z - snli_z_aug: 0 #> mnli_seq_z - mnli_par_z: 477149 #> mnli_seq_z - snli_seq_z: 0 #> mnli_seq_z - mnli_z_aug: 333840 #> mnli_seq_z - snli_par_z: 0 #> snli_z_aug - mnli_par_z: 0 #> snli_z_aug - snli_seq_z: 506624 #> snli_z_aug - mnli_z_aug: 0 #> snli_z_aug - snli_par_z: 504910 #> mnli_par_z - snli_seq_z: 0 #> mnli_par_z - mnli_z_aug: 334960 #> mnli_par_z - snli_par_z: 0 #> snli_seq_z - 
mnli_z_aug: 0 #> snli_seq_z - snli_par_z: 583107 #> mnli_z_aug - snli_par_z: 0 ```
pietrolesci
null
null
null
false
3
false
pietrolesci/gpt3_nli
2022-04-25T10:17:45.000Z
null
false
48d27a285f1919f3f7e6cd53b6a07fb13a238efb
[]
[]
https://huggingface.co/datasets/pietrolesci/gpt3_nli/resolve/main/README.md
## Overview Original dataset available [here](https://github.com/krandiash/gpt3-nli). Debiased dataset generated with GPT-3. ## Dataset curation All string columns are stripped. Labels are encoded with the following mapping ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` ## Code to create the dataset ```python import pandas as pd from datasets import Dataset, ClassLabel, Value, Features import json # load data with open("data/dataset.jsonl", "r") as fl: df = pd.DataFrame([json.loads(line) for line in fl]) df.columns = df.columns.str.strip() # fix dtypes df["guid"] = df["guid"].astype(int) for col in df.select_dtypes(object): df[col] = df[col].str.strip() # encode labels df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features( { "text_a": Value(dtype="string"), "text_b": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), "guid": Value(dtype="int64"), } ) ds = Dataset.from_pandas(df, features=features) ds.push_to_hub("pietrolesci/gpt3_nli", token="<token>") ```
BritishLibraryLabs
null
TODO
The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy.
false
3
false
BritishLibraryLabs/web_archive_classification
2022-10-25T10:12:01.000Z
null
false
9bcbd688f444afa216b24d96dad67521e401e842
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-labe...
https://huggingface.co/datasets/BritishLibraryLabs/web_archive_classification/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual pretty_name: UK Selective Web Archive Classification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification --- # Dataset Card for UK Selective Web Archive Classification Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archives web collection that relates to the UK. 
The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in .uk, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of UKWA's larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extracting a set of keywords that summarise the site, via the full-text index. For more information: http://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/ ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Public Domain Mark 1.0. ### Citation Information [Needs More Information]
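Since the classification data ships as TSV, a minimal parsing sketch is shown below. The column names and the sample row are assumptions for illustration only — the actual TSV schema is not documented in this card.

```python
import csv
import io

# Hypothetical sample row mirroring the two-tiered subject hierarchy;
# the real column layout may differ.
sample_tsv = "Arts & Humanities\tArchitecture\tExample Site\thttp://example.co.uk\n"

# Assumed column names, not the documented schema.
fieldnames = ["primary_category", "secondary_category", "title", "url"]
reader = csv.DictReader(io.StringIO(sample_tsv), fieldnames=fieldnames, delimiter="\t")
rows = list(reader)

print(rows[0]["primary_category"])  # top tier of the two-level hierarchy
```

For the real file, replace the in-memory string with an open file handle on the downloaded TSV.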
pietrolesci
null
null
null
false
3
false
pietrolesci/robust_nli
2022-04-25T11:45:07.000Z
null
false
429dde22805398bdd6cfece27284f53a44ed6e67
[]
[]
https://huggingface.co/datasets/pietrolesci/robust_nli/resolve/main/README.md
## Overview Original dataset is available in the original [Github repo](https://github.com/tyliupku/nli-debiasing-datasets). This dataset is a collection of NLI benchmarks constructed as described in the paper [An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference](https://aclanthology.org/2020.conll-1.48/) published at CoNLL 2020. ## Dataset curation No specific curation for this dataset. Label encoding follows exactly what is reported in the paper by the authors. Also, from the paper: > _all the following datasets are collected based on the public available resources proposed by their authors, thus the experimental results in this paper are comparable to the numbers reported in the original papers and the other papers that use these datasets_ Most of the datasets included follow the custom 3-class NLI convention `{"entailment": 0, "neutral": 1, "contradiction": 2}`. However, the following datasets have a particular label mapping - `IS-SD`: `{"non-entailment": 0, "entailment": 1}` - `LI_TS`: `{"non-contradiction": 0, "contradiction": 1}` ## Dataset structure This benchmark dataset includes 10 adversarial datasets. To provide more insights on how the adversarial datasets attack the models, the authors categorized them according to the bias(es) they test and they renamed them accordingly. More details in section 2 of the paper. 
A mapping with the original dataset names is provided below | | Name | Original Name | Original Paper | Original Curation | |---:|:-------|:-----------------------|:--------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | PI-CD | SNLI-Hard | [Gururangan et al. (2018)](https://aclanthology.org/N18-2017/) | SNLI test sets instances that cannot be correctly classified by a neural classifier (fastText) trained on only the hypothesis sentences. | | 1 | PI-SP | MNLI-Hard | [Liu et al. (2020)](https://aclanthology.org/2020.lrec-1.846/) | MNLI-mismatched dev sets instances that cannot be correctly classified by surface patterns that are highly correlated with the labels. | | 2 | IS-SD | HANS | [McCoy et al. (2019)](https://aclanthology.org/P19-1334/) | Dataset that tests lexical overlap, subsequence, and constituent heuristics between the hypothesis and premises sentences. | | 3 | IS-CS | SoSwap-AddAMod | [Nie et al. (2019)](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016867) | Pairs of sentences whose logical relations cannot be extracted from lexical information alone. Premise are taken from SNLI dev set and modified. The original paper assigns a Lexically Misleading Scores (LMS) to each instance. Here, only the subset with LMS > 0.7 is reported. | | 4 | LI-LI | Stress tests (antonym) | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) and [Glockner et al. (2018)](https://aclanthology.org/P18-2103/) | Merge of the 'antonym' category in Naik et al. (2018) (from MNLI matched and mismatched dev sets) and Glockner et al. (2018) (SNLI training set). 
| | 5 | LI-TS | Created by the authors | Created by the authors | Swap the two sentences in the original MultiNLI mismatched dev sets. If the gold label is 'contradiction', the corresponding label in the swapped instance remains unchanged, otherwise it becomes 'non-contradicted'. | | 6 | ST-WO | Word overlap | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Word overlap' category in Naik et al. (2018). | | 7 | ST-NE | Negation | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Negation' category in Naik et al. (2018). | | 8 | ST-LM | Length mismatch | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Length mismatch' category in Naik et al. (2018). | | 9 | ST-SE | Spelling errors | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Spelling errors' category in Naik et al. (2018). | ## Code to create the dataset ```python import pandas as pd from datasets import Dataset, ClassLabel, Value, Features, DatasetDict Tri_dataset = ["IS_CS", "LI_LI", "PI_CD", "PI_SP", "ST_LM", "ST_NE", "ST_SE", "ST_WO"] Ent_bin_dataset = ["IS_SD"] Con_bin_dataset = ["LI_TS"] # read data with open("<path to file>/robust_nli.txt", encoding="utf-8", mode="r") as fl: f = fl.read().strip().split("\n") f = [eval(i) for i in f] df = pd.DataFrame.from_dict(f) # rename to map common names df = df.rename(columns={"prem": "premise", "hypo": "hypothesis"}) # reorder columns df = df.loc[:, ["idx", "split", "premise", "hypothesis", "label"]] # create split-specific features Tri_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), } ) Ent_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]), } ) Con_features = Features( { "idx": Value(dtype="int64"), "premise": 
Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]), } ) # convert to datasets dataset_splits = {} for split in df["split"].unique(): print(split) df_split = df.loc[df["split"] == split].copy() if split in Tri_dataset: df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds = Dataset.from_pandas(df_split, features=Tri_features) elif split in Ent_bin_dataset: df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1}) ds = Dataset.from_pandas(df_split, features=Ent_features) elif split in Con_bin_dataset: df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1}) ds = Dataset.from_pandas(df_split, features=Con_features) else: print("ERROR:", split) dataset_splits[split] = ds datasets = DatasetDict(dataset_splits) datasets.push_to_hub("pietrolesci/robust_nli", token="<your token>") # check overlap between splits from itertools import combinations for i, j in combinations(datasets.keys(), 2): print( f"{i} - {j}: ", pd.merge( datasets[i].to_pandas(), datasets[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> PI_SP - ST_LM: 0 #> PI_SP - ST_NE: 0 #> PI_SP - IS_CS: 0 #> PI_SP - LI_TS: 1 #> PI_SP - LI_LI: 0 #> PI_SP - ST_SE: 0 #> PI_SP - PI_CD: 0 #> PI_SP - IS_SD: 0 #> PI_SP - ST_WO: 0 #> ST_LM - ST_NE: 0 #> ST_LM - IS_CS: 0 #> ST_LM - LI_TS: 0 #> ST_LM - LI_LI: 0 #> ST_LM - ST_SE: 0 #> ST_LM - PI_CD: 0 #> ST_LM - IS_SD: 0 #> ST_LM - ST_WO: 0 #> ST_NE - IS_CS: 0 #> ST_NE - LI_TS: 0 #> ST_NE - LI_LI: 0 #> ST_NE - ST_SE: 0 #> ST_NE - PI_CD: 0 #> ST_NE - IS_SD: 0 #> ST_NE - ST_WO: 0 #> IS_CS - LI_TS: 0 #> IS_CS - LI_LI: 0 #> IS_CS - ST_SE: 0 #> IS_CS - PI_CD: 0 #> IS_CS - IS_SD: 0 #> IS_CS - ST_WO: 0 #> LI_TS - LI_LI: 0 #> LI_TS - ST_SE: 0 #> LI_TS - PI_CD: 0 #> LI_TS - IS_SD: 0 #> LI_TS - ST_WO: 0 #> LI_LI - ST_SE: 0 #> LI_LI - PI_CD: 0 #> LI_LI - IS_SD: 0 #> 
LI_LI - ST_WO: 0 #> ST_SE - PI_CD: 0 #> ST_SE - IS_SD: 0 #> ST_SE - ST_WO: 0 #> PI_CD - IS_SD: 0 #> PI_CD - ST_WO: 0 #> IS_SD - ST_WO: 0 ```
pietrolesci
null
null
null
false
3
false
pietrolesci/robust_nli_li_ts
2022-04-25T11:49:51.000Z
null
false
8ede2d7bf4531a7b210c793fe7b9e483b871c8f5
[]
[]
https://huggingface.co/datasets/pietrolesci/robust_nli_li_ts/resolve/main/README.md
This is part of `robust_NLI`, but since there seems to be a bug when loading and downloading a `DatasetDict` containing datasets with different configurations, the datasets with differing configs were loaded as standalone datasets. Issue here: [https://github.com/huggingface/datasets/issues/4211](https://github.com/huggingface/datasets/issues/4211)
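For reference, this split uses the binary LI_TS label set described in the main `robust_NLI` card. A minimal sketch of its swap-and-relabel construction rule (the function name is mine, not from the original code):

```python
def swap_and_relabel(premise: str, hypothesis: str, gold_label: str):
    """Swap the two sentences of an NLI pair.

    Per the LI_TS construction: 'contradiction' is symmetric under swapping,
    so it is kept; every other gold label becomes 'non-contradiction'.
    """
    new_label = "contradiction" if gold_label == "contradiction" else "non-contradiction"
    return hypothesis, premise, new_label

print(swap_and_relabel("A man is sleeping.", "A man is awake.", "contradiction"))
```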
pietrolesci
null
null
null
false
2
false
pietrolesci/robust_nli_is_sd
2022-04-25T13:07:25.000Z
null
false
338d9797bb910381f7493343991c1055d425b9c4
[]
[]
https://huggingface.co/datasets/pietrolesci/robust_nli_is_sd/resolve/main/README.md
This is part of `robust_NLI`, but since there seems to be a bug when loading and downloading a `DatasetDict` containing datasets with different configurations, the datasets with differing configs were loaded as standalone datasets. Issue here: [https://github.com/huggingface/datasets/issues/4211](https://github.com/huggingface/datasets/issues/4211)
jamescalam
null
null
null
false
65
false
jamescalam/reddit-python
2022-04-25T12:41:35.000Z
null
false
c47716065f1f2076c39c806dd7007027342da502
[]
[]
https://huggingface.co/datasets/jamescalam/reddit-python/resolve/main/README.md
# Python Subreddit Dataset containing data scraped from the [Python subreddit](https://www.reddit.com/r/python).
loubnabnl
null
null
null
false
3
false
loubnabnl/tokenized-github-code-python
2022-04-28T00:13:55.000Z
null
false
2e7504f0d4a70d6bf0373a39767ecd2f85ae0d9f
[]
[]
https://huggingface.co/datasets/loubnabnl/tokenized-github-code-python/resolve/main/README.md
# Pretokenized GitHub Code Dataset ## Dataset Description This is a pretokenized version of the Python files of the [GitHub Code dataset](https://huggingface.co/datasets/lvwerra/github-code), that consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using BPE Tokenizer trained on code, available in this [repo](https://huggingface.co/lvwerra/codeparrot). Having a pretokenized dataset can speed up the training loop by not having to tokenize data at each batch call. We also include `ratio_char_token` which gives the ratio between the number of characters in a file and the number of tokens we get after tokenization, this ratio can be a good filter to detect outlier files. ### How to use it To avoid downloading the whole dataset, you can make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code: ```python from datasets import load_dataset ds = load_dataset("loubnabnl/tokenized-github-code-python", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: {'input_ids': [504, 1639, 492,...,199, 504, 1639], 'ratio_char_token': 3.560888252148997 } ```
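As a concrete (illustrative) use of `ratio_char_token`, a simple outlier filter might look like this — the thresholds are assumptions for demonstration, not values recommended by the dataset authors:

```python
def is_ratio_outlier(example, low=1.5, high=10.0):
    """Flag files whose characters-per-token ratio falls outside [low, high].

    Unusually low or high ratios can indicate auto-generated or otherwise
    atypical files; the bounds here are illustrative only.
    """
    return not (low <= example["ratio_char_token"] <= high)

# Usage with the streaming dataset (ds) shown above, e.g.:
# clean = (ex for ex in ds if not is_ratio_outlier(ex))
print(is_ratio_outlier({"ratio_char_token": 3.56}))  # False
```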
pietrolesci
null
null
null
false
4
false
pietrolesci/joci
2022-04-25T13:33:08.000Z
null
false
8371e5cf43c3564daa1314ecf6086b58fcbf2178
[]
[]
https://huggingface.co/datasets/pietrolesci/joci/resolve/main/README.md
## Overview Original dataset available [here](https://github.com/sheng-z/JOCI/tree/master/data). This dataset is the "full" JOCI dataset, which is the file named `joci.csv.zip`. # Dataset curation The following processing is applied, - `label` column renamed to `original_label` - creation of the `label` column using the following mapping, using common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/joci.py#L22-L27), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_joci.py#L7-L12)) ``` { 0: "contradiction", 1: "contradiction", 2: "neutral", 3: "neutral", 4: "neutral", 5: "entailment", } ``` - finally, converting this to the usual NLI classes, that is `{"entailment": 0, "neutral": 1, "contradiction": 2}` ## Code to create dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset # read data df = pd.read_csv("<path to folder>/joci.csv") # column name to lower df.columns = df.columns.str.lower() # rename label column df = df.rename(columns={"label": "original_label"}) # encode labels df["label"] = df["original_label"].map({ 0: "contradiction", 1: "contradiction", 2: "neutral", 3: "neutral", 4: "neutral", 5: "entailment", }) # encode labels df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "context": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), "original_label": Value(dtype="int32"), "context_from": Value(dtype="string"), "hypothesis_from": Value(dtype="string"), "subset": Value(dtype="string"), }) ds = Dataset.from_pandas(df, features=features) ds.push_to_hub("joci", token="<token>") ```
pietrolesci
null
null
null
false
2
false
pietrolesci/breaking_nli
2022-04-25T13:37:23.000Z
null
false
82b6583887562130331c99bba2c994b44eae310f
[]
[]
https://huggingface.co/datasets/pietrolesci/breaking_nli/resolve/main/README.md
## Overview Proposed by ```latex @InProceedings{glockner_acl18, author = {Glockner, Max and Shwartz, Vered and Goldberg, Yoav}, title = {Breaking NLI Systems with Sentences that Require Simple Lexical Inferences}, booktitle = {The 56th Annual Meeting of the Association for Computational Linguistics (ACL)}, month = {July}, year = {2018}, address = {Melbourne, Australia} } ``` Original dataset available [here](https://github.com/BIU-NLP/Breaking_NLI). ## Dataset curation Labels encoded with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}` and made available in the `label` column. ## Code to create the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, Sequence # load data with open("<path to folder>/dataset.jsonl", "r") as fl: data = fl.read().split("\n") df = pd.DataFrame([eval(i) for i in data if len(i) > 0]) # encode labels df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "sentence1": Value(dtype="string", id=None), "category": Value(dtype="string", id=None), "gold_label": Value(dtype="string", id=None), "annotator_labels": Sequence(feature=Value(dtype="string", id=None), length=3), "pairID": Value(dtype="int32", id=None), "sentence2": Value(dtype="string", id=None), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), }) ds = Dataset.from_pandas(df, features=features) ds.push_to_hub("breaking_nli", token="<token>", split="all") ```
pietrolesci
null
null
null
false
3
false
pietrolesci/copa_nli
2022-04-25T13:47:10.000Z
null
false
e39c4231c5c09a3ee1d3fd9e9bdfab466a6254f6
[]
[]
https://huggingface.co/datasets/pietrolesci/copa_nli/resolve/main/README.md
## Overview Original dataset available [here](https://people.ict.usc.edu/~gordon/copa.html). Current dataset extracted from [this repo](https://github.com/felipessalvatore/NLI_datasets). This is the "full" dataset. # Curation Same curation as the one applied in [this repo](https://github.com/felipessalvatore/NLI_datasets), that is from the original COPA format: |premise | choice1 | choice2 | label | |---|---|---|---| |My body cast a shadow over the grass | The sun was rising | The grass was cut | 0 | to the NLI format: | premise | hypothesis | label | |---|---|---| | My body cast a shadow over the grass | The sun was rising| entailment | | My body cast a shadow over the grass | The grass was cut | not_entailment | Also, the labels are encoded with the following mapping `{"not_entailment": 0, "entailment": 1}` ## Code to generate dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset from pathlib import Path # read data path = Path("./nli_datasets") datasets = {} for dataset_path in path.iterdir(): datasets[dataset_path.name] = {} for name in dataset_path.iterdir(): df = pd.read_csv(name) datasets[dataset_path.name][name.name.split(".")[0]] = df # merge all splits df = pd.concat(list(datasets["copa"].values())) # encode labels df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1}) # cast to dataset features = Features({ "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]), }) ds = Dataset.from_pandas(df, features=features) ds.push_to_hub("copa_nli", token="<token>") ```
cfilt
null
This is the dataset repository for the HiNER dataset, accepted for publication at LREC 2022. The dataset can help build sequence labelling models for the task of Named Entity Recognition for the Hindi language.
false
83
false
cfilt/HiNER-original
2022-07-30T12:26:20.000Z
hiner-original-1
false
f37d0b967d5016180d0948d991708d80834ba5b1
[]
[ "arxiv:2204.13743", "annotations_creators:expert-generated", "language_creators:expert-generated", "language:hi", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:token-classification", "task_ids:named-entity-recogniti...
https://huggingface.co/datasets/cfilt/HiNER-original/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - hi license: "cc-by-sa-4.0" multilinguality: - monolingual paperswithcode_id: hiner-original-1 pretty_name: HiNER - Large Hindi Named Entity Recognition dataset size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition --- <p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p> # Dataset Card for HiNER-original [![Twitter Follow](https://img.shields.io/twitter/follow/cfiltnlp?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/cfiltnlp) [![Twitter Follow](https://img.shields.io/twitter/follow/PeopleCentredAI?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/PeopleCentredAI) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/cfiltnlp/HiNER - **Repository:** 
https://github.com/cfiltnlp/HiNER - **Paper:** https://arxiv.org/abs/2204.13743 - **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-original - **Point of Contact:** Rudra Murthy V ### Dataset Summary This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy. **Note:** The dataset contains sentences from ILCI and other sources. The ILCI dataset requires a license from the Indian Language Consortium, due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset. ### Supported Tasks and Leaderboards Named Entity Recognition ### Languages Hindi ## Dataset Structure ### Data Instances {'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]} ### Data Fields - `id`: The ID value of the data point. - `tokens`: Raw tokens in the dataset. - `ner_tags`: the NER tags for this dataset. ### Data Splits | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | original | 76025 | 10861 | 21722| | collapsed | 76025 | 10861 | 21722| ## About This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation Conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743). ### Recent Updates * Version 0.0.5: HiNER initial release ## Usage You should have the 'datasets' package installed to be able to use the :rocket: HuggingFace datasets repository. 
Please use the following command and install via pip: ```code pip install datasets ``` To use the original dataset with all the tags, please use:<br/> ```python from datasets import load_dataset hiner = load_dataset('cfilt/HiNER-original') ``` To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/> ```python from datasets import load_dataset hiner = load_dataset('cfilt/HiNER-collapsed') ``` However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder. ## Model(s) Our best performing models are hosted on the HuggingFace models repository: 1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large) 2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large) ## Dataset Creation ### Curation Rationale HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to provide new resources for the Hindi language, which was under-served in Natural Language Processing. ### Source Data #### Initial Data Collection and Normalization HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. #### Who are the source language producers? Various Government of India webpages ### Annotations #### Annotation process This dataset was manually annotated by a single annotator over a long span of time. #### Who are the annotators? Pallab Bhattacharjee ### Personal and Sensitive Information We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. 
Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data. ### Discussion of Biases Any biases contained in the data released by the Indian government are bound to be present in our data. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Pallab Bhattacharjee ### Licensing Information CC-BY-SA 4.0 ### Citation Information ```latex @misc{https://doi.org/10.48550/arxiv.2204.13743, doi = {10.48550/ARXIV.2204.13743}, url = {https://arxiv.org/abs/2204.13743}, author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {HiNER: A Large Hindi Named Entity Recognition Dataset}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
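As a companion to the `Data Instances` example in the HiNER card above, the sketch below decodes an instance's integer `ner_tags` into readable (token, label) pairs. The label list used here is hypothetical and for illustration only; the authoritative id-to-label mapping should be read from the loaded dataset's `features` metadata.

```python
# Decode a HiNER-style instance into (token, label) pairs.
# The label list below is HYPOTHETICAL -- the real mapping should come from
# the loaded dataset, e.g.:
#   hiner['train'].features['ner_tags'].feature.names
example = {
    "id": "0",
    "tokens": ["प्राचीन", "समय", "में", "उड़ीसा", "को", "कलिंग",
               "के", "नाम", "से", "जाना", "जाता", "था", "।"],
    "ner_tags": [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0],
}

# Hypothetical label names; index 0 is the 'O' (outside) tag.
LABELS = ["O", "B-PERSON", "I-PERSON", "B-LOCATION", "I-LOCATION",
          "B-ORGANIZATION", "I-ORGANIZATION"]

def entity_mentions(instance, label_names):
    """Return the (token, label) pairs whose tag is not 'O'."""
    return [(tok, label_names[tag])
            for tok, tag in zip(instance["tokens"], instance["ner_tags"])
            if tag != 0]

# The two tagged tokens in the example are both locations under this mapping.
print(entity_mentions(example, LABELS))
```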
xap
null
null
null
false
2
false
xap/testing
2022-05-03T18:20:07.000Z
null
false
6cdfae511d10a6fc82a9ed35684b04041e97683d
[]
[ "license:cc" ]
https://huggingface.co/datasets/xap/testing/resolve/main/README.md
--- license: cc ---
rosimeirecosta
null
false
1
false
rosimeirecosta/c_corpus
2022-04-25T20:03:08.000Z
null
false
f88b0c931a28aac0824a988e60b76e5a83fd0da3
[]
[]
https://huggingface.co/datasets/rosimeirecosta/c_corpus/resolve/main/README.md
--- annotations_creators: - annotation languages: - pt-br multilinguality: - monolingual source_datasets: - original task_categories: - ner --- # Dataset Card for c_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary C corpus is a set of annotated data in Portuguese for the recognition of named entities, extending the UlyssesNER-Br corpus. ### Supported Tasks and Leaderboards The dataset can be used to train a model for Named Entity Recognition that aims to identify all named entities, such as person names and locations, in a text. 
### Languages Brazilian Portuguese ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
AmirulOm
null
null
null
false
1
false
AmirulOm/lottie-urls
2022-10-25T10:12:14.000Z
null
false
709fd56c19915e82eafc9bc39780e078daee5e00
[]
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "license:unknown", "size_categories:n<1K", "source_datasets:original", "task_categories:image-segmentation", "task_ids:instance-segmentation" ]
https://huggingface.co/datasets/AmirulOm/lottie-urls/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: [] license: - unknown multilinguality: [] pretty_name: lottie-uri size_categories: - n<1K source_datasets: - original task_categories: - image-segmentation task_ids: - instance-segmentation --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A list of LottieFiles URIs for research purposes ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language 
producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
Davincilee
null
null
null
false
2
false
Davincilee/closure_system_door_inner
2022-04-29T20:59:50.000Z
null
false
d9a925c71de5280a6397b8e433b506a031f95a53
[]
[ "license:lgpl-3.0" ]
https://huggingface.co/datasets/Davincilee/closure_system_door_inner/resolve/main/README.md
--- license: lgpl-3.0 ---
hrithikpiyush
null
null
null
false
11
false
hrithikpiyush/acl-arc
2022-04-26T11:40:41.000Z
null
false
8c7dd451752096e3932fcbbdc051d65be8dbd662
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/hrithikpiyush/acl-arc/resolve/main/README.md
--- license: apache-2.0 ---
eleldar
null
null
null
false
2
false
eleldar/different_sub_normal_datasets
2022-06-16T11:19:15.000Z
null
false
ac97fe2b8719890567bea1fbcf9a5b22594bf88b
[]
[]
https://huggingface.co/datasets/eleldar/different_sub_normal_datasets/resolve/main/README.md
Dataset for API: https://github.com/eleldar/Translation
BigScienceBiasEval
null
@inproceedings{neveol2022french, title={French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English}, author={N{\'e}v{\'e}ol, Aur{\'e}lie and Dupont, Yoann and Bezan{\c{c}}on, Julien and Fort, Kar{\"e}n}, booktitle={ACL 2022-60th Annual Meeting of the Association for Computational Linguistics}, year={2022} }
This is a revised version of CrowS-Pairs that measures stereotypes in language modelling in both English and French.
false
382
false
BigScienceBiasEval/crows_pairs_multilingual
2022-04-26T16:26:28.000Z
null
false
ac47d0f12d6905b94389e937e8f24fae21b9c66c
[]
[ "arxiv:2010.00133", "license:cc-by-sa-4.0" ]
https://huggingface.co/datasets/BigScienceBiasEval/crows_pairs_multilingual/resolve/main/README.md
--- license: cc-by-sa-4.0 --- Original from https://gitlab.inria.fr/french-crows-pairs/acl-2022-paper-data-and-code/-/tree/main/. # Data Statement for CrowS-Pairs-fr Data set name: Crows-Pairs-fr Citation (if available): Névéol A, Dupont Y, Bezançon J, Fort K. French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022 Data set developer(s): Aurélie Névéol, Yoann Dupont, Julien Bezançon, Karën Fort Data statement author(s): Aurélie Névéol, Yoann Dupont Others who contributed to this document: N/A License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0). ## A. CURATION RATIONALE > *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. 
An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to. The French part of the corpus was built by first translating the original 1,508 sentence pairs of the English corpus into French. We then adapted the crowdsourcing method described by [Nangia et al. (2020)](https://arxiv.org/pdf/2010.00133) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC [(Fiumara et al., 2020)](https://www.aclweb.org/anthology/2020.cllrd-1.1.pdf), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (https://languagearc.com/projects/19) to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way. ## B. LANGUAGE VARIETY/VARIETIES > *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin"). * BCP-47 language tags: fr-FR * Language variety description: French spoken by native French people from metropolitan France. ## C. CONTRIBUTOR DEMOGRAPHIC > *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include: N/A ## D. ANNOTATOR DEMOGRAPHIC > *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include: Participants in the collection project were recruited through calls for volunteers posted to social media and mailing lists in the French research community. ## E. SPEECH SITUATION N/A ## F. TEXT CHARACTERISTICS > *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified. Collected data is a collection of offensive stereotyped statements in French; they might be upsetting. Along these stereotyped statements are paired anti-stereotyped statements. ## G. RECORDING QUALITY N/A ## H. OTHER > *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset. ## I. 
PROVENANCE APPENDIX Examples were gathered using the LanguageArc site and by creating a dedicated project: https://languagearc.com/projects/19 ## About this document A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software. Data Statements are from the University of Washington. Contact: [datastatements@uw.edu](mailto:datastatements@uw.edu). This document template is licensed as [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/). This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the [2020 LREC workshop on Data Statements](https://sites.google.com/uw.edu/data-statements-for-nlp/), by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Derczynski.
albertxu
null
null
null
false
2
false
albertxu/CrosswordQA
2022-10-29T23:45:36.000Z
null
false
5e93d44a6d6fb1fe35c41df7af170a8618b23e70
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:1M<n<10M", "task_categories:question-answering", "task_ids:open-domain-qa" ]
https://huggingface.co/datasets/albertxu/CrosswordQA/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1M<n<10M task_categories: - question-answering task_ids: - open-domain-qa --- # Dataset Card for CrosswordQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Albert Xu](mailto:albertxu@usc.edu) and [Eshaan Pathak](mailto:eshaanpathak@berkeley.edu) ### Dataset Summary The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly. 
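Because the same clue can recur across puzzles with different answers, a common first step when working with clue-answer pairs like these is to group them into a clue-to-answers lookup table. The sketch below is a minimal illustration over records shaped like this dataset's instances; the first record mirrors the card's example and the second is hypothetical.

```python
from collections import defaultdict

# Records shaped like CrosswordQA instances. The first mirrors the card's
# `Data Instances` example; the second is HYPOTHETICAL for illustration.
records = [
    {"id": 0, "clue": "Clean-up target", "answer": "mess"},
    {"id": 1, "clue": "Clean-up target", "answer": "spill"},
]

def build_index(rows):
    """Map each case-normalized clue to the set of answers seen for it."""
    index = defaultdict(set)
    for row in rows:
        index[row["clue"].strip().lower()].add(row["answer"])
    return index

index = build_index(records)
print(sorted(index["clean-up target"]))  # ['mess', 'spill']
```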
### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances ``` { "id": 0, "clue": "Clean-up target", "answer": "mess" } ``` ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]