Column schema for the preview rows below (min/max are string lengths for string columns and value ranges for int64 columns):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | string | 2 | 115 |
| lastModified | string | 24 | 24 |
| tags | list | | |
| author | string | 2 | 42 |
| description | string | 0 | 6.67k |
| citation | string | 0 | 10.7k |
| likes | int64 | 0 | 3.66k |
| downloads | int64 | 0 | 8.89M |
| created | timestamp[us] | | |
| card | string | 11 | 977k |
| card_len | int64 | 11 | 977k |
| embeddings | list | | |
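Each record below follows this schema. As a minimal, self-contained sketch (plain dicts with abridged values copied from the rows below; `rows_with_tag` is an illustrative helper, not part of any API), the tag lists can be queried like this:

```python
# Two preview rows modeled as dicts matching the schema above (values abridged).
rows = [
    {
        "id": "bigscience-data/roots_ar_ted_talks_iwslt",
        "tags": ["language:ar", "license:cc-by-nc-nd-4.0", "region:us"],
        "author": "bigscience-data",
        "likes": 0,
        "downloads": 3,
        "card_len": 4095,
    },
    {
        "id": "bigscience-data/roots_id_indosum",
        "tags": ["language:id", "license:apache-2.0", "region:us"],
        "author": "bigscience-data",
        "likes": 1,
        "downloads": 3,
        "card_len": 11227,
    },
]

def rows_with_tag(rows, tag):
    """Return the ids of rows whose tag list contains `tag`."""
    return [r["id"] for r in rows if tag in r["tags"]]

print(rows_with_tag(rows, "language:id"))
# → ['bigscience-data/roots_id_indosum']
```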
bigscience-data/roots_ar_ted_talks_iwslt
2022-12-12T11:02:17.000Z
[ "language:ar", "license:cc-by-nc-nd-4.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:07:28
---
language: ar
license: cc-by-nc-nd-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_ted_talks_iwslt

# WIT Ted Talks
- Dataset uid: `ted_talks_iwslt`

### Description
The Web Inventory Talk is a collection of the original TED talks and their translated versions. Translations are available in more than 109 languages, though the distribution is not uniform.

### Homepage
https://github.com/huggingface/datasets/blob/master/datasets/ted_talks_iwslt/README.md

### Licensing
- open license
- cc-by-nc-4.0: Creative Commons Attribution Non Commercial 4.0 International

TED makes its collection of video recordings and transcripts of talks available under the Creative Commons BY-NC-ND license. WIT3 acknowledges the authorship of TED talks (BY condition) and does not redistribute transcripts for commercial purposes (NC). As regards the integrity of the work (ND), WIT3 only changes the format of the container while preserving the original contents. WIT3 aims to support research on human language processing as well as the diffusion of TED Talks!

### Speaker Locations
- Southern Europe - Italy

### Sizes
- 0.0305 % of total
- 0.0736 % of ar
- 0.2002 % of pt
- 0.0128 % of zh
- 0.2236 % of vi
- 0.0330 % of fr
- 0.0545 % of es
- 0.0122 % of en
- 0.3704 % of id
- 0.0373 % of indic-hi
- 0.0330 % of indic-ta
- 0.1393 % of indic-mr
- 0.0305 % of ca
- 0.1179 % of indic-ur
- 0.0147 % of indic-bn
- 0.0240 % of indic-ml
- 0.0244 % of indic-te
- 0.0503 % of indic-gu
- 0.0211 % of indic-kn
- 0.0274 % of eu
- 0.0023 % of indic-as
- 0.0001 % of indic-pa

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| ar | dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| pt | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| zh | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| vi | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| fr | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| es | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| en | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| id | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-hi | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-ta | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-mr | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| ca | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| indic-ur | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-bn | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-ml | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-te | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-gu | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-kn | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| eu | dedup_document, filter_remove_empty_docs |
| indic-as | dedup_document, filter_remove_empty_docs |
| indic-pa | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
4,095
[ [ -0.0361328125, -0.04669189453125, 0.026275634765625, 0.0027866363525390625, -0.0186004638671875, 0.00807952880859375, -0.01093292236328125, -0.022552490234375, 0.04473876953125, 0.01708984375, -0.0594482421875, -0.053924560546875, -0.0501708984375, 0.0180969...
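Each record carries one embedding vector per card (shown truncated above). A minimal sketch of comparing two such vectors with cosine similarity; the toy 4-dimensional values below stand in for the real, much longer embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for the truncated card embeddings above.
v1 = [-0.0361, -0.0466, 0.0262, 0.0027]
v2 = [-0.0492, -0.0335, 0.0327, 0.0127]

print(round(cosine_similarity(v1, v2), 4))
```

Identical vectors score 1.0; orthogonal vectors score 0.0.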
bigscience-data/roots_ar_sanad
2022-12-12T11:02:27.000Z
[ "language:ar", "license:cc-by-4.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:07:30
---
language: ar
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_sanad

# sanad
- Dataset uid: `sanad`

### Description

### Homepage

### Licensing

### Speaker Locations

### Sizes
- 0.1312 % of total
- 1.2094 % of ar

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| ar | dedup_document, dedup_template_soft, filter_remove_empty_docs, remove_html_spans_sanad, filter_small_docs_bytes_300 |
696
[ [ -0.04925537109375, -0.03350830078125, 0.03277587890625, 0.01271820068359375, -0.0243682861328125, -0.01137542724609375, 0.00020420551300048828, 0.01458740234375, 0.0416259765625, 0.042327880859375, -0.028717041015625, -0.06536865234375, -0.039398193359375, 0...
bigscience-data/roots_en_wikivoyage
2022-12-12T11:03:13.000Z
[ "language:en", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:08:46
---
language: en
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_wikivoyage

# wikivoyage_filtered
- Dataset uid: `wikivoyage_filtered`

### Description

### Homepage

### Licensing

### Speaker Locations

### Sizes
- 0.0334 % of total
- 0.1097 % of en
- 0.0432 % of fr
- 0.0863 % of es
- 0.0084 % of zh
- 0.0892 % of vi
- 0.0464 % of indic-bn
- 0.0443 % of pt
- 0.0130 % of indic-hi

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| en | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_en, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_1024 |
| fr | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_fr, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_1024 |
| es | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_es, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_1024 |
| zh | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_zhs, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_1024 |
| vi | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_vi, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
| indic-bn | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_indic-bn, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
| pt | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_pt, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
| indic-hi | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_indic-hi, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
2,580
[ [ -0.043182373046875, -0.036041259765625, 0.01788330078125, 0.0163116455078125, -0.0095672607421875, -0.005817413330078125, -0.01320648193359375, 0.00164031982421875, 0.042083740234375, 0.031494140625, -0.059234619140625, -0.06402587890625, -0.045135498046875, ...
bigscience-data/roots_id_indosum
2022-12-12T11:05:45.000Z
[ "language:id", "license:apache-2.0", "region:us" ]
bigscience-data
null
null
1
3
2022-05-18T09:14:24
---
language: id
license: apache-2.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indosum

# Indosum
- Dataset uid: `indosum`

### Description
IndoSum: A New Benchmark Dataset for Indonesian Text Summarization

### Homepage
https://github.com/kata-ai/indosum

### Licensing
- apache-2.0: Apache License 2.0

Apache License, Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: You must give any other recipients of the Work or Derivative Works a copy of this License; and You must cause any modified files to carry prominent notices stating that You changed the files; and You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. 
Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
### Speaker Locations
- South-eastern Asia - Indonesia

### Sizes
- 0.0035 % of total
- 1.3157 % of id

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| id | dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
11,227
[ [ -0.0164794921875, -0.02838134765625, 0.039825439453125, 0.05072021484375, -0.046142578125, -0.0070953369140625, -0.00994873046875, -0.0513916015625, 0.032958984375, 0.05206298828125, -0.033172607421875, -0.041168212890625, -0.05963134765625, 0.02578735351562...
bigscience-data/roots_id_wikimedia
2022-12-12T11:05:50.000Z
[ "language:id", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:14:39
---
language: id
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_wikimedia

# wikimedia_filtered
- Dataset uid: `wikimedia_filtered`

### Description

### Homepage

### Licensing

### Speaker Locations

### Sizes
- 0.0005 % of total
- 0.0835 % of id
- 0.0126 % of ca
- 0.0054 % of pt
- 0.0005 % of indic-hi

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| id | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_id, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
| ca | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_ca, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_1024 |
| pt | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_pt, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
| indic-hi | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, filter_remove_empty_docs, split_sentences_indic-hi, dedup_template_soft, replace_newline_with_space, filter_small_docs_bytes_300 |
1,567
[ [ -0.04620361328125, -0.0277862548828125, 0.0295257568359375, 0.026580810546875, -0.0220489501953125, -0.01016998291015625, -0.01441192626953125, 0.00933074951171875, 0.04833984375, 0.028839111328125, -0.07000732421875, -0.06640625, -0.04559326171875, 0.032501...
bigscience-data/roots_id_wikisource
2022-12-12T11:05:55.000Z
[ "language:id", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:14:39
---
language: id
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_wikisource

# wikisource_filtered
- Dataset uid: `wikisource_filtered`

### Description

### Homepage

### Licensing

### Speaker Locations

### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| fr | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| indic-bn | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-ta | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| ar | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-hi | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-te | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| es | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| indic-kn | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, remove_wiki_mojibake, filter_small_docs_bytes_300 |
| indic-ml | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-mr | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-gu | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-as | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs |
| pt | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-pa | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| vi | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-or | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs |
| ca | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| id | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| eu | filter_wiki_user_titles, filter_wiki_non_text_type, dedup_document, dedup_template_soft, filter_remove_empty_docs |
4,382
[ [ -0.049285888671875, -0.0423583984375, 0.022705078125, 0.01117706298828125, -0.007110595703125, -0.0001322031021118164, -0.01033782958984375, -0.01323699951171875, 0.04278564453125, 0.0208587646484375, -0.053436279296875, -0.060943603515625, -0.040771484375, ...
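The cards name their BigScience processing steps (`dedup_document`, `filter_remove_empty_docs`, `filter_small_docs_bytes_300`, and so on) without showing them. A hedged sketch of what such filters could look like on plain strings; the actual ROOTS implementations are not reproduced here and may differ:

```python
import hashlib

def dedup_document(docs):
    """Drop exact duplicate documents, keeping the first occurrence (sketch)."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

def filter_remove_empty_docs(docs):
    """Drop documents that are empty or whitespace-only (sketch)."""
    return [doc for doc in docs if doc.strip()]

def filter_small_docs_bytes(docs, min_bytes=300):
    """Drop documents shorter than `min_bytes` when UTF-8 encoded (sketch)."""
    return [doc for doc in docs if len(doc.encode("utf-8")) >= min_bytes]

# Pipeline order mirrors the card listings: dedup, drop empties, drop small docs.
docs = ["", "short doc", "x" * 400, "x" * 400]
kept = filter_small_docs_bytes(filter_remove_empty_docs(dedup_document(docs)))
print(len(kept))  # → 1 (only one 400-byte document survives)
```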
bigscience-data/roots_id_ted_talks_iwslt
2022-12-12T11:06:05.000Z
[ "language:id", "license:cc-by-nc-nd-4.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:14:40
---
language: id
license: cc-by-nc-nd-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
  I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_ted_talks_iwslt

# WIT Ted Talks
- Dataset uid: `ted_talks_iwslt`

### Description
The Web Inventory Talk is a collection of the original TED talks and their translated versions. Translations are available in more than 109 languages, though the distribution is not uniform.

### Homepage
https://github.com/huggingface/datasets/blob/master/datasets/ted_talks_iwslt/README.md

### Licensing
- open license
- cc-by-nc-4.0: Creative Commons Attribution Non Commercial 4.0 International

TED makes its collection of video recordings and transcripts of talks available under the Creative Commons BY-NC-ND license. WIT3 acknowledges the authorship of TED talks (BY condition) and does not redistribute transcripts for commercial purposes (NC). As regards the integrity of the work (ND), WIT3 only changes the format of the container while preserving the original contents. WIT3 aims to support research on human language processing as well as the diffusion of TED Talks!

### Speaker Locations
- Southern Europe - Italy

### Sizes
- 0.0305 % of total
- 0.0736 % of ar
- 0.2002 % of pt
- 0.0128 % of zh
- 0.2236 % of vi
- 0.0330 % of fr
- 0.0545 % of es
- 0.0122 % of en
- 0.3704 % of id
- 0.0373 % of indic-hi
- 0.0330 % of indic-ta
- 0.1393 % of indic-mr
- 0.0305 % of ca
- 0.1179 % of indic-ur
- 0.0147 % of indic-bn
- 0.0240 % of indic-ml
- 0.0244 % of indic-te
- 0.0503 % of indic-gu
- 0.0211 % of indic-kn
- 0.0274 % of eu
- 0.0023 % of indic-as
- 0.0001 % of indic-pa

### BigScience processing steps

| Language | Filters applied |
| --- | --- |
| ar | dedup_document, dedup_template_soft, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| pt | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| zh | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| vi | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| fr | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| es | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| en | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| id | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-hi | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-ta | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-mr | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| ca | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_1024 |
| indic-ur | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-bn | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-ml | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-te | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-gu | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| indic-kn | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
| eu | dedup_document, filter_remove_empty_docs |
| indic-as | dedup_document, filter_remove_empty_docs |
| indic-pa | dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300 |
4,095
[ [ -0.03533935546875, -0.046905517578125, 0.0265350341796875, 0.0030918121337890625, -0.01861572265625, 0.007274627685546875, -0.01145172119140625, -0.0225677490234375, 0.0450439453125, 0.016510009765625, -0.05950927734375, -0.054595947265625, -0.04931640625, 0...
bigscience-data/roots_id_wikiquote
2022-12-12T11:06:10.000Z
[ "language:id", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:14:41
--- language: id license: cc-by-sa-3.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_id_wikiquote # wikiquote_filtered - Dataset uid: `wikiquote_filtered` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 0.0462 % of total - 0.1697 % of en - 0.0326 % of fr - 0.0216 % of ar - 0.0066 % of zh - 0.0833 % of pt - 0.0357 % of es - 0.0783 % of indic-ta - 0.0361 % of indic-hi - 0.0518 % of ca - 0.0405 % of vi - 0.0834 % of indic-ml - 0.0542 % of indic-te - 0.1172 % of indic-gu - 0.0634 % of indic-kn - 0.0539 % of id - 0.0454 % of indic-ur - 0.0337 % of indic-mr - 0.0347 % of eu ### BigScience processing steps #### Filters applied to: en - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_en - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: fr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_fr - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: ar - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ar - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: zh - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_zhs - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: pt - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_pt - dedup_template_soft - 
replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: es - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_es - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: indic-ta - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ta - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-hi - dedup_document - filter_remove_empty_docs - split_sentences_indic-hi - dedup_template_soft - filter_small_docs_bytes_300 #### Filters applied to: ca - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ca - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: vi - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_vi - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-ml - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ml - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-te - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-te - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-gu - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-gu - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-kn - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - 
split_sentences_indic-kn - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: id - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_id - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-ur - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - dedup_template_soft - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-mr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-mr - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: eu - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_eu - dedup_template_soft - replace_newline_with_space
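The per-language filter lists in these ROOTS cards name sequential document-cleaning steps. As a rough illustration only — the function names below are simplified stand-ins, not the actual BigScience data-preparation implementation — such a pipeline can be composed like this:

```python
# Illustrative stand-ins for cleaning steps like those named above
# (dedup_document, filter_remove_empty_docs, filter_small_docs_bytes_300).
def dedup_document(docs):
    seen = set()
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            yield doc

def filter_remove_empty_docs(docs):
    return (doc for doc in docs if doc.strip())

def filter_small_docs_bytes(docs, min_bytes=300):
    # Drop documents below a minimum UTF-8 byte length.
    return (doc for doc in docs if len(doc.encode("utf-8")) >= min_bytes)

def run_pipeline(docs, steps):
    for step in steps:
        docs = step(docs)
    return list(docs)

cleaned = run_pipeline(
    ["short doc", "", "short doc", "long doc " * 60],
    [dedup_document, filter_remove_empty_docs, filter_small_docs_bytes],
)
# Only the long document survives the 300-byte threshold.
```

Each step is a generator transformation, so the whole chain streams over the corpus without materializing intermediate copies.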
4,987
[ [ -0.04193115234375, -0.051605224609375, 0.017364501953125, 0.01396942138671875, -0.00315093994140625, 0.007221221923828125, -0.01287078857421875, -0.01541900634765625, 0.047515869140625, 0.021087646484375, -0.048583984375, -0.059661865234375, -0.045562744140625, ...
bigscience-data/roots_id_wikibooks
2022-12-12T11:06:16.000Z
[ "language:id", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:14:41
--- language: id license: cc-by-sa-3.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_id_wikibooks # wikibooks_filtered - Dataset uid: `wikibooks_filtered` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 0.0897 % of total - 0.2591 % of en - 0.0965 % of fr - 0.1691 % of es - 0.2834 % of indic-hi - 0.2172 % of pt - 0.0149 % of zh - 0.0279 % of ar - 0.1374 % of vi - 0.5025 % of id - 0.3694 % of indic-ur - 0.5744 % of eu - 0.0769 % of ca - 0.0519 % of indic-ta - 0.1470 % of indic-mr - 0.0751 % of indic-te - 0.0156 % of indic-bn - 0.0476 % of indic-ml - 0.0087 % of indic-pa ### BigScience processing steps #### Filters applied to: en - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_en - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: fr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_fr - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: es - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_es - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: indic-hi - dedup_document - filter_remove_empty_docs - split_sentences_indic-hi - dedup_template_soft - filter_small_docs_bytes_300 #### Filters applied to: pt - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_pt - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: zh - 
filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_zhs - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: ar - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ar - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: vi - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_vi - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: id - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_id - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-ur - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - dedup_template_soft - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: eu - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_eu - dedup_template_soft - replace_newline_with_space #### Filters applied to: ca - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ca - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: indic-ta - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ta - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-mr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-mr - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: 
indic-te - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-te - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-bn - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-bn - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-ml - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ml - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: indic-pa - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-pa - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300
4,987
[ [ -0.0447998046875, -0.04803466796875, 0.015838623046875, 0.013427734375, 0.0005078315734863281, 0.0008835792541503906, -0.0118560791015625, -0.0159149169921875, 0.044647216796875, 0.019866943359375, -0.045623779296875, -0.057525634765625, -0.042022705078125, ...
bigscience-data/roots_vi_wikipedia
2022-12-12T11:16:52.000Z
[ "language:vi", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
0
3
2022-05-18T09:19:39
--- language: vi license: cc-by-sa-3.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_vi_wikipedia # wikipedia - Dataset uid: `wikipedia` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 3.2299 % of total - 4.2071 % of en - 5.6773 % of ar - 3.3416 % of fr - 5.2815 % of es - 12.4852 % of ca - 0.4288 % of zh - 0.4286 % of zh - 5.4743 % of indic-bn - 8.9062 % of indic-ta - 21.3313 % of indic-te - 4.4845 % of pt - 4.0493 % of indic-hi - 11.3163 % of indic-ml - 22.5300 % of indic-ur - 4.4902 % of vi - 16.9916 % of indic-kn - 24.7820 % of eu - 11.6241 % of indic-mr - 9.8749 % of id - 9.3489 % of indic-pa - 9.4767 % of indic-gu - 24.1132 % of indic-as - 5.3309 % of indic-or ### BigScience processing steps #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: ar - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: ca - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: zh #### Filters applied to: zh #### Filters applied to: indic-bn - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-ta - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-te - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - 
filter_small_docs_bytes_300 #### Filters applied to: pt - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-hi - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-ml - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-ur - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: vi - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-kn - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: eu - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs #### Filters applied to: indic-mr - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: id - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-pa - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-gu - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: indic-as - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs #### Filters applied to: indic-or - filter_wiki_user_titles - dedup_document - filter_remove_empty_docs
3,635
[ [ -0.04437255859375, -0.0382080078125, 0.022003173828125, 0.01157379150390625, -0.01363372802734375, -0.005748748779296875, -0.01500701904296875, -0.00930023193359375, 0.045166015625, 0.021209716796875, -0.053253173828125, -0.05877685546875, -0.045501708984375, ...
bigscience-data/roots_zh_wikinews
2022-12-12T11:17:30.000Z
[ "language:zh", "license:cc-by-sa-3.0", "region:us" ]
bigscience-data
null
null
2
3
2022-05-18T09:20:09
--- language: zh license: cc-by-sa-3.0 extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_zh_wikinews # wikinews_filtered - Dataset uid: `wikinews_filtered` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 0.0307 % of total - 0.0701 % of ar - 0.3036 % of pt - 0.0271 % of en - 0.0405 % of fr - 0.2119 % of indic-ta - 0.0081 % of zh - 0.0510 % of es - 0.0725 % of ca ### BigScience processing steps #### Filters applied to: ar - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ar - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: pt - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_pt - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: en - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_en - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: fr - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - dedup_template_soft - filter_remove_empty_docs - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: indic-ta - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_indic-ta - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_300 #### Filters applied to: zh - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_zhs - dedup_template_soft - 
replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: es - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_es - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024 #### Filters applied to: ca - filter_wiki_user_titles - filter_wiki_non_text_type - dedup_document - filter_remove_empty_docs - split_sentences_ca - dedup_template_soft - replace_newline_with_space - filter_small_docs_bytes_1024
2,536
[ [ -0.0391845703125, -0.0400390625, 0.0242767333984375, 0.01511383056640625, -0.0089569091796875, -0.008056640625, -0.01302337646484375, 0.0022563934326171875, 0.046295166015625, 0.0308990478515625, -0.055877685546875, -0.06658935546875, -0.045684814453125, 0.0...
spoiled/ecqa_classify_94
2022-05-18T13:53:37.000Z
[ "region:us" ]
spoiled
null
null
0
3
2022-05-18T12:34:54
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Rexhaif/xnli-en-ru
2022-05-28T20:53:05.000Z
[ "region:us" ]
Rexhaif
null
null
0
3
2022-05-22T23:28:45
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Aniemore/REPV
2022-07-01T16:41:13.000Z
[ "task_categories:audio-classification", "task_ids:audio-emotion-recognition", "annotations_creators:crowdsourced", "language_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ru", "license:...
Aniemore
null
null
2
3
2022-05-26T22:15:17
--- annotations_creators: - crowdsourced language_creators: - expert-generated - crowdsourced language: - ru license: - mit multilinguality: - monolingual pretty_name: Russian Emotional Phonetic Voices size_categories: - 1K<n<10K source_datasets: - original task_categories: - audio-classification task_ids: - audio-emotion-recognition --- # Citations ``` @misc{Aniemore, author = {Артем Аментес, Илья Лубенец, Никита Давидчук}, title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека}, year = {2022}, publisher = {Hugging Face}, journal = {Hugging Face Hub}, howpublished = {\url{https://huggingface.com/aniemore/Aniemore}}, email = {hello@socialcode.ru} } ```
737
[ [ -0.0200958251953125, -0.044525146484375, 0.0352783203125, 0.02734375, 0.0022945404052734375, 0.008941650390625, 0.0218505859375, -0.029998779296875, 0.06341552734375, -0.0089874267578125, -0.032073974609375, -0.0433349609375, -0.0435791015625, 0.031921386718...
PoolC/1-fold-clone-detection-600k-5fold
2022-06-01T06:49:02.000Z
[ "region:us" ]
PoolC
null
null
2
3
2022-06-01T06:36:28
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
tomekkorbak/python-github-code
2022-06-12T22:20:27.000Z
[ "region:us" ]
tomekkorbak
null
null
1
3
2022-06-01T14:54:29
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
BeIR/climate-fever-generated-queries
2022-10-23T06:09:20.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
0
3
2022-06-06T22:07:02
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
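The `.jsonl` and `.tsv` file conventions specified earlier in this card can be parsed with the Python standard library alone. A minimal sketch, with sample rows invented for illustration:

```python
import csv
import io
import json

# Sample rows following the card's conventions: jsonlines corpus,
# tab-separated qrels with a header row.
corpus_jsonl = '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}\n'
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n"

def load_jsonl(stream):
    # Index documents (or queries) by their unique _id.
    return {row["_id"]: row
            for row in (json.loads(line) for line in stream if line.strip())}

def load_qrels(stream):
    # Nested mapping: query-id -> {corpus-id: relevance score}.
    reader = csv.DictReader(stream, delimiter="\t")
    qrels = {}
    for row in reader:
        qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
    return qrels

corpus = load_jsonl(io.StringIO(corpus_jsonl))
qrels = load_qrels(io.StringIO(qrels_tsv))
```

In practice the same functions work on file handles opened over the downloaded `corpus.jsonl`, `queries.jsonl`, and qrels `.tsv` files.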
13,988
[ [ -0.0396728515625, -0.03985595703125, 0.01094818115234375, 0.0036602020263671875, 0.00423431396484375, 0.00009590387344360352, -0.0081939697265625, -0.0188751220703125, 0.021697998046875, 0.00595855712890625, -0.034332275390625, -0.0545654296875, -0.0263824462890...
BeIR/dbpedia-entity-generated-queries
2022-10-23T06:09:39.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
1
3
2022-06-06T22:21:33
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. For example (dataset and configuration names are illustrative, following the corpus/queries layout described below):

```python
from datasets import load_dataset

# Load one BEIR dataset, e.g. SciFact; each dataset ships a "corpus"
# and a "queries" configuration matching the layout described below.
corpus = load_dataset("BeIR/scifact", "corpus")
queries = load_dataset("BeIR/scifact", "queries")
```

### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best-performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of a BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query-document relevance judgements, made up of: - `query-id`: a `string` feature representing the query id - `corpus-id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
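The corpus, queries and qrels files described above can be read with the Python standard library alone; a minimal sketch, assuming the qrels header names its columns `query-id`, `corpus-id` and `score` (file paths are illustrative):

```python
import csv
import json

def load_jsonl(path):
    """Read a .jsonl file into a dict keyed by each record's `_id` field."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            records[obj["_id"]] = obj
    return records

def load_qrels(path):
    """Read a tab-separated qrels file into {query-id: {corpus-id: score}}."""
    qrels = {}
    with open(path, encoding="utf-8") as f:
        # The first row is a header naming the columns (see format above).
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
    return qrels
```

For the toy example above, `load_qrels` would yield a mapping like `{"q1": {"doc1": 1}}`.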
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
13,988
[ [ -0.0396728515625, -0.03985595703125, 0.01094818115234375, 0.0036602020263671875, 0.00423431396484375, 0.00009590387344360352, -0.0081939697265625, -0.0188751220703125, 0.021697998046875, 0.00595855712890625, -0.034332275390625, -0.0545654296875, -0.0263824462890...
Unso/ted2020_pairs_en_x
2022-06-10T07:33:19.000Z
[ "region:us" ]
Unso
null
null
0
3
2022-06-09T07:29:42
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
florentgbelidji/autotrain-data-quality-customer-reviews
2022-10-25T10:29:24.000Z
[ "task_categories:text-classification", "language:en", "region:us" ]
florentgbelidji
null
null
0
3
2022-06-09T09:35:36
--- language: - en task_categories: - text-classification --- # AutoTrain Dataset for project: quality-customer-reviews ## Dataset Description This dataset has been automatically processed by AutoTrain for project quality-customer-reviews. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": " Love this truck, I think it is light years better than the competition. I have driven or owned all [...]", "target": 1 }, { "text": " I purchased this to haul our 4 horse trailer since the standard iterations of the domestic vehicles[...]", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=5, names=['good', 'great', 'ok', 'poor', 'terrible'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ------------ | ------------------- |
| train | 9166 |
| valid | 2295 |
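The integer `target` values in the samples index into the `ClassLabel` names listed above; a quick sketch of decoding them (the sample is abbreviated from the instances shown):

```python
# ClassLabel names in index order, as given in the dataset fields above
names = ["good", "great", "ok", "poor", "terrible"]

sample = {"text": " Love this truck, I think it is light years better [...]", "target": 1}
label = names[sample["target"]]  # -> "great"
```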
1,185
[ [ -0.042694091796875, 0.0011606216430664062, 0.01134490966796875, 0.0290069580078125, -0.021636962890625, 0.0198822021484375, -0.0034008026123046875, -0.031402587890625, -0.00000959634780883789, 0.02099609375, -0.03363037109375, -0.047821044921875, -0.032043457031...
AlekseyKorshuk/fantasy-books
2022-06-10T04:36:42.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
2
3
2022-06-10T04:36:14
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
AlekseyKorshuk/drama-books
2022-06-11T13:26:37.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
1
3
2022-06-11T13:26:35
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
angie-chen55/javascript-github-code
2022-06-15T08:08:09.000Z
[ "region:us" ]
angie-chen55
null
null
0
3
2022-06-15T07:36:45
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
omarxadel/MaWPS-ar
2022-07-12T15:31:07.000Z
[ "task_categories:text2text-generation", "task_ids:explanation-generation", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "language:en", "language:ar", "license:mit", "region:us" ]
omarxadel
null
null
0
3
2022-06-15T20:39:14
--- annotations_creators: - crowdsourced language: - en - ar language_creators: - found license: - mit multilinguality: - multilingual pretty_name: MAWPS_ar size_categories: - 1K<n<10K source_datasets: [] task_categories: - text2text-generation task_ids: - explanation-generation --- # Dataset Card for MAWPS_ar ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary MAWPS: A Math Word Problem Repository ### Supported Tasks Math Word Problem Solving ### Languages Supports Arabic and English ## Dataset Structure ### Data Fields - `text_en`: a `string` feature. - `text_ar`: a `string` feature. - `eqn`: a `string` feature. 
### Data Splits

|train|validation|test|
|----:|---------:|---:|
| 3636| 1040| 520|

## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [Rik Koncel-Kedziorski**, Subhro Roy**, Aida Amini, Nate Kushman and Hannaneh Hajishirzi.](https://aclanthology.org/N16-1136.pdf) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Contributions Special thanks to Associate Professor Marwan Torki and all my colleagues in CC491N (NLP) class for helping me translate this dataset.
4,012
[ [ -0.04901123046875, -0.036529541015625, 0.00812530517578125, 0.017578125, -0.0140228271484375, 0.00035643577575683594, -0.01404571533203125, -0.031890869140625, 0.0460205078125, 0.040191650390625, -0.06365966796875, -0.06658935546875, -0.059326171875, 0.00612...
BeIR/hotpotqa-generated-queries
2022-10-23T06:15:30.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
0
3
2022-06-17T13:20:35
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. For example (dataset and configuration names are illustrative, following the corpus/queries layout described below):

```python
from datasets import load_dataset

# Load one BEIR dataset, e.g. SciFact; each dataset ships a "corpus"
# and a "queries" configuration matching the layout described below.
corpus = load_dataset("BeIR/scifact", "corpus")
queries = load_dataset("BeIR/scifact", "queries")
```

### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. 
The current best-performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of a BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." 
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query-document relevance judgements, made up of: - `query-id`: a `string` feature representing the query id - `corpus-id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
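The corpus, queries and qrels files described above can be read with the Python standard library alone; a minimal sketch, assuming the qrels header names its columns `query-id`, `corpus-id` and `score` (file paths are illustrative):

```python
import csv
import json

def load_jsonl(path):
    """Read a .jsonl file into a dict keyed by each record's `_id` field."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            records[obj["_id"]] = obj
    return records

def load_qrels(path):
    """Read a tab-separated qrels file into {query-id: {corpus-id: score}}."""
    qrels = {}
    with open(path, encoding="utf-8") as f:
        # The first row is a header naming the columns (see format above).
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
    return qrels
```

For the toy example above, `load_qrels` would yield a mapping like `{"q1": {"doc1": 1}}`.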
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
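The md5 column in the table above is meant for verifying downloaded zips. A minimal, library-agnostic sketch of such a check (the helper names are ours; the digests are copied from the table):

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the md5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests copied from the table above.
EXPECTED = {
    "scifact.zip": "5f7d1de60b170fc8027bb7898e2efca1",
    "arguana.zip": "8ad3e3c2a5867cdced806d6503f29b99",
}

def verify(path: str, name: str) -> bool:
    """Return True if the file at `path` matches the listed checksum for `name`."""
    return md5_of_file(path) == EXPECTED[name]
```

The BEIR toolkit's own download helpers perform an equivalent check internally; this sketch is only useful when fetching the zips manually.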
13,988
[ [ -0.0396728515625, -0.03985595703125, 0.01094818115234375, 0.00363922119140625, 0.0042266845703125, 0.00008571147918701172, -0.0081939697265625, -0.018890380859375, 0.0216827392578125, 0.00595855712890625, -0.034332275390625, -0.054534912109375, -0.02639770507812...
valurank/Adult-content-dataset
2023-01-19T02:40:10.000Z
[ "task_categories:text-classification", "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
valurank
null
null
3
3
2022-06-17T21:54:46
---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
---

# Dataset Card for Adult_Content_Detection

## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)

## Dataset Description

850 article descriptions classified into two categories: Adult and Non_Adult.

## Languages

The text in the dataset is in English.

## Dataset Structure

The dataset consists of two columns, Description and Category. The Description column contains the overview of the article, and the Category column contains the class each article belongs to.

## Source Data

The dataset was scraped from different platforms.
803
[ [ -0.03192138671875, -0.0260162353515625, -0.0144500732421875, 0.019927978515625, -0.038543701171875, 0.01401519775390625, -0.0053558349609375, -0.02655029296875, 0.015869140625, 0.048614501953125, -0.04608154296875, -0.0775146484375, -0.032196044921875, 0.028...
huggingartists/headie-one
2022-10-25T10:32:29.000Z
[ "language:en", "huggingartists", "lyrics", "region:us" ]
huggingartists
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
1
3
2022-06-20T15:09:53
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/headie-one" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.679898 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: 
url(&#39;https://images.genius.com/f803e312226f5034989742ff1fb4b583.1000x1000x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/headie-one"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Headie One</div> <a href="https://genius.com/artists/headie-one"> <div style="text-align: center; font-size: 14px;">@headie-one</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/headie-one). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/headie-one") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits

| train | validation | test |
|------:|-----------:|-----:|
| 224 | - | - |

'Train' can easily be divided into 'train', 'validation' & 'test' with a few lines of code:

```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/headie-one")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

train, validation, test = np.split(
    datasets['train']['text'],
    [
        int(len(datasets['train']['text']) * train_percentage),
        int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)}),
    }
)
```

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk} year=2022 } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
7,180
[ [ -0.04931640625, -0.0400390625, 0.004161834716796875, 0.02386474609375, -0.0188140869140625, -0.0030040740966796875, -0.02191162109375, -0.032501220703125, 0.0662841796875, 0.0294647216796875, -0.0711669921875, -0.062042236328125, -0.043701171875, 0.008575439...
Nexdata/Indian_English_Speech_Data_by_Mobile_Phone
2023-08-28T08:39:25.000Z
[ "region:us" ]
Nexdata
null
null
0
3
2022-06-22T07:47:10
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Indian_English_Speech_Data_by_Mobile_Phone ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/940?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Indian English audio data captured by mobile phones, 1,012 hours in total, recorded by 2,100 Indian native speakers. The recorded text is designed by linguistic experts, covering generic, interactive, on-board, home and other categories. The text has been proofread manually with high accuracy; this data set can be used for automatic speech recognition, machine translation, and voiceprint recognition. 
For more details, please refer to the link: https://www.nexdata.ai/datasets/940?source=Huggingface

### Supported Tasks and Leaderboards

automatic-speech-recognition, audio-speaker-identification: the dataset can be used to train a model for Automatic Speech Recognition (ASR).

### Languages

Indian English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing

### Citation Information

[More Information Needed]

### Contributions
3,290
[ [ -0.028900146484375, -0.037841796875, -0.0122222900390625, 0.0302581787109375, -0.01346588134765625, 0.00995635986328125, -0.0231475830078125, -0.0325927734375, 0.041168212890625, 0.035919189453125, -0.0487060546875, -0.05682373046875, -0.046295166015625, 0.0...
polinaeterna/audiofolder_example
2022-09-22T17:15:18.000Z
[ "region:us" ]
polinaeterna
null
null
0
3
2022-06-22T12:07:33
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
eugenetanjc/speech_accent_1000
2022-06-23T13:58:26.000Z
[ "region:us" ]
eugenetanjc
null
null
0
3
2022-06-23T13:57:02
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
PedroDKE/LibriS2S
2023-03-23T13:28:39.000Z
[ "task_categories:text-to-speech", "task_categories:automatic-speech-recognition", "task_categories:translation", "multilinguality:multilingual", "size_categories:10K<n<100K", "language:en", "language:de", "license:cc-by-nc-sa-4.0", "LibriS2S", "LibrivoxDeEn", "Speech-to-Speech translation", "L...
PedroDKE
null
null
1
3
2022-06-23T14:39:57
---
annotations_creators: []
language:
- en
- de
language_creators: []
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: LibriS2S German-English Speech and Text pairs
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- LibriS2S
- LibrivoxDeEn
- Speech-to-Speech translation
- LREC2022
task_categories:
- text-to-speech
- automatic-speech-recognition
- translation
task_ids: []
---

# LibriS2S

This repo contains scripts and alignment data that build further upon [librivoxDeEn](https://www.cl.uni-heidelberg.de/statnlpgroup/librivoxdeen/), creating a dataset of (German audio, German transcription, English audio, English transcription) quadruplets that can be used for speech-to-speech translation research. Because of this, the alignments are released under the same [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).

These alignments were collected by downloading the English audiobooks and using [aeneas](https://github.com/readbeyond/aeneas) to align the book chapters to the transcripts. For more information, read the original [paper](https://arxiv.org/abs/2204.10593) (presented at LREC 2022).

### The data

The English/German audio files are available in the EN/DE folders respectively and can be downloaded from [this onedrive](https://onedrive.live.com/embed?cid=DCE49ACC2BDA7D8C&resid=DCE49ACC2BDA7D8C%2115663&authkey=ANmUz8gRUoyxmjk). In case there are any problems with the download, feel free to open an issue here or on [GitHub](https://github.com/PedroDKE/LibriS2S).

The repo structure is as follows:
- Alignments: contains all the alignments for each book and chapter
- DE: contains the German audio for each chapter, per book
- EN: contains the English audio for each chapter, per book
- Example: contains example files for the scraping and alignment steps used to build this dataset
- LibrivoxDeEn_alignments: contains the base alignments from the LibrivoxDeEn dataset

In case you feel a part of the data is missing, feel free to open an issue! The full zip file is about 52 GB in size.

### Scraping a book from Librivox

To download all chapters from a Librivox URL, the following command can be used:

```
python scrape_audio_from_librivox.py \
    --url https://librivox.org/undine-by-friedrich-de-la-motte-fouque/ \
    --save_dir ./examples
```

### Align a book from Librivox with the text from LibrivoxDeEn

To align the previously downloaded book with the txt files and tsv tables provided by LibrivoxDeEn, the following command, based on the example provided with this repo, can be used:

```
python align_text_and_audio.py \
    --text_dir ./example/en_text/ \
    --audio_path ./example/audio_chapters/ \
    --aeneas_path ./example/aeneas/ \
    --en_audio_export_path ./example/sentence_level_audio/ \
    --total_alignment_path ./example/bi-lingual-alignment/ \
    --librivoxdeen_alignment ./example/undine_data.tsv \
    --aeneas_head_max 120 \
    --aeneas_tail_min 5
```

**Note:** the example folder in this repo already contains the first two chapters from [Undine](https://librivox.org/undine-by-friedrich-de-la-motte-fouque/) scraped from Librivox, together with their transcripts and the tsv table retrieved from LibrivoxDeEn (modified to only contain the first 2 chapters).
Additional data to align can be scraped with the same script shown previously, combined with the data provided by LibriVoxDeEn. This repo also includes the full alignments for the following 8 books, with these LibrivoxDeEn ids: [9](https://librivox.org/the-picture-of-dorian-gray-1891-version-by-oscar-wilde/), [10](https://librivox.org/pandoras-box-by-frank-wedekind/), [13](https://librivox.org/survivors-of-the-chancellor-by-jules-verne/), [18](https://librivox.org/undine-by-friedrich-de-la-motte-fouque/), [23](https://librivox.org/around-the-world-in-80-days-by-jules-verne/), [108](https://librivox.org/elective-affinities-by-johann-wolfgang-von-goethe/), [110](https://librivox.org/candide-by-voltaire-3/), [120](https://librivox.org/the-metamorphosis-by-franz-kafka/).

Other books such as [11](https://librivox.org/the-castle-of-otranto-by-horace-walpole/), [36](https://librivox.org/the-rider-on-the-white-horse-by-theodor-storm/), [67](https://librivox.org/frankenstein-or-the-modern-prometheus-1818-by-mary-wollstonecraft-shelley/) and [54](https://librivox.org/white-nights-other-stories-by-fyodor-dostoyevsky/) are also part of the LibrivoxDeEn dataset, but their chapters do not correspond in a 1:1 manner (for example, the German version of book 67 has 27 chapters while the English version has 29), so they need to be re-aligned before the alignment script in this repo will work. These alignments are therefore also provided, but they might differ if you scrape the books yourself, since the re-alignment may come out differently for you.

### Metrics on the alignments given in this repo

Using the alignments given in this repo, some metrics were collected and are displayed below. For this table and the next figure, the manually aligned books, although provided in the zip, were not accounted for; the full table can be found in the original paper.
| | German | English |
| :---: | :-: | :-: |
| number of files | 18868 | 18868 |
| total time (hh:mm:ss) | 39:11:08 | 40:52:31 |
| speakers | 41 | 22 |

Note: speakers were counted for each book separately, so some speakers may be counted more than once.

The number of hours for each book aligned in this repo:<br>
<img src="https://user-images.githubusercontent.com/43861296/122250648-1f5f7f80-ceca-11eb-84fd-344a2261bf47.png" width="500">

When using this work, please cite the original paper and the LibrivoxDeEn authors:

```
@inproceedings{jeuris-niehues-2022-libris2s,
    title = "{L}ibri{S}2{S}: A {G}erman-{E}nglish Speech-to-Speech Translation Corpus",
    author = "Jeuris, Pedro and Niehues, Jan",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.98",
    pages = "928--935",
    abstract = "Recently, we have seen an increasing interest in the area of speech-to-text translation. This has led to astonishing improvements in this area. In contrast, the activities in the area of speech-to-speech translation is still limited, although it is essential to overcome the language barrier. We believe that one of the limiting factors is the availability of appropriate training data. We address this issue by creating LibriS2S, to our knowledge the first publicly available speech-to-speech training corpus between German and English. For this corpus, we used independently created audio for German and English leading to an unbiased pronunciation of the text in both languages. This allows the creation of a new text-to-speech and speech-to-speech translation model that directly learns to generate the speech signal based on the pronunciation of the source language.
Using this created corpus, we propose Text-to-Speech models based on the example of the recently proposed FastSpeech 2 model that integrates source language information. We do this by adapting the model to take information such as the pitch, energy or transcript from the source speech as additional input.", } ``` ``` @article{beilharz19, title = {LibriVoxDeEn: A Corpus for German-to-English Speech Translation and Speech Recognition}, author = {Beilharz, Benjamin and Sun, Xin and Karimova, Sariya and Riezler, Stefan}, journal = {Proceedings of the Language Resources and Evaluation Conference}, journal-abbrev = {LREC}, year = {2020}, city = {Marseille, France}, url = {https://arxiv.org/pdf/1910.07924.pdf} } ```
7,871
[ [ -0.0141448974609375, -0.005542755126953125, 0.0269775390625, -0.01198577880859375, -0.0173187255859375, -0.0184478759765625, -0.0032558441162109375, -0.0287933349609375, 0.0157623291015625, 0.04144287109375, -0.040252685546875, -0.04180908203125, -0.015235900878...
rkstgr/mtg-jamendo
2022-07-22T12:56:25.000Z
[ "size_categories:10K<n<100K", "source_datasets:original", "license:apache-2.0", "region:us" ]
rkstgr
Repackaging of the MTG Jamendo dataset. We present the MTG-Jamendo Dataset, a new open dataset for music auto-tagging. It is built using music available at Jamendo under Creative Commons licenses and tags provided by content creators. The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories.
@conference {bogdanov2019mtg, author = "Bogdanov, Dmitry and Won, Minz and Tovstogan, Philip and Porter, Alastair and Serra, Xavier", title = "The MTG-Jamendo Dataset for Automatic Music Tagging", booktitle = "Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019)", year = "2019", address = "Long Beach, CA, United States", url = "http://hdl.handle.net/10230/42015" }
2
3
2022-06-24T13:51:38
--- license: - apache-2.0 size_categories: - 10K<n<100K source_datasets: - original pretty_name: MTG Jamendo --- # Dataset Card for MTG Jamendo Dataset ## Dataset Description - **Repository:** [MTG Jamendo dataset repository](https://github.com/MTG/mtg-jamendo-dataset) ### Dataset Summary MTG-Jamendo Dataset, a new open dataset for music auto-tagging. It is built using music available at Jamendo under Creative Commons licenses and tags provided by content uploaders. The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories. We provide elaborated data splits for researchers and report the performance of a simple baseline approach on five different sets of tags: genre, instrument, mood/theme, top-50, and overall. ## Dataset structure ### Data Fields - `id`: an integer containing the id of the track - `artist_id`: an integer containing the id of the artist - `album_id`: an integer containing the id of the album - `duration_in_sec`: duration of the track as a float - `genres`: list of strings, describing genres the track is assigned to - `instruments`: list of strings for the main instruments of the track - `moods`: list of strings, describing the moods the track is assigned to - `audio`: audio of the track ### Data Splits This dataset has 2 balanced splits: _train_ (90%) and _validation_ (10%) ### Licensing Information This dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @conference {bogdanov2019mtg, author = "Bogdanov, Dmitry and Won, Minz and Tovstogan, Philip and Porter, Alastair and Serra, Xavier", title = "The MTG-Jamendo Dataset for Automatic Music Tagging", booktitle = "Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019)", year = "2019", address = "Long Beach, CA, United States", url = "http://hdl.handle.net/10230/42015" } ```
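The per-category tag fields listed above (genres, instruments, moods) are what tag subsets such as "top-50" are derived from. A small sketch of aggregating them into tag counts — the records and names here are hypothetical, shaped after the card's Data Fields; a real script would iterate over the loaded dataset instead:

```python
from collections import Counter

# Hypothetical records shaped like the card's Data Fields
# (id, genres, instruments, moods); real tracks also carry audio.
tracks = [
    {"id": 1, "genres": ["rock"], "instruments": ["guitar"], "moods": ["energetic"]},
    {"id": 2, "genres": ["rock", "pop"], "instruments": ["piano"], "moods": ["happy"]},
    {"id": 3, "genres": ["jazz"], "instruments": ["piano", "drums"], "moods": ["calm"]},
]

def tag_counts(records, categories=("genres", "instruments", "moods")):
    """Count how often each (category, tag) pair occurs across all tracks."""
    counts = Counter()
    for record in records:
        for category in categories:
            for tag in record.get(category, []):
                counts[(category, tag)] += 1
    return counts

counts = tag_counts(tracks)
top = counts.most_common(2)  # analogous to selecting a "top-N" tag subset
```

Selecting `most_common(50)` over the full 55,000-track corpus would yield a top-50 tag subset analogous to the one the card mentions.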
1,998
[ [ -0.06610107421875, -0.0116119384765625, 0.0017271041870117188, 0.0203704833984375, -0.031005859375, 0.004718780517578125, -0.0305023193359375, -0.0276031494140625, 0.023773193359375, 0.0190887451171875, -0.06561279296875, -0.0810546875, -0.034393310546875, 0...
rjac/kaggle-entity-annotated-corpus-ner-dataset-oversampled
2022-06-26T01:48:24.000Z
[ "region:us" ]
rjac
null
null
0
3
2022-06-24T20:32:51
This dataset is the same as [rjac/kaggle-entity-annotated-corpus-ner-dataset](https://huggingface.co/datasets/rjac/kaggle-entity-annotated-corpus-ner-dataset), with oversampled instances of the 'ART', 'EVE' and 'NAT' entities (25K across all three classes).
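A rough sketch of this kind of minority-class oversampling — the record shape, BIO tag format, and target count below are assumptions for illustration; the actual procedure used for this dataset may differ:

```python
import random
from collections import Counter

def oversample_rare_entities(sentences, rare_tags={"ART", "EVE", "NAT"}, target=6, seed=0):
    """Duplicate sentences containing rare entity tags until each rare tag
    occurs at least `target` times. Tags are assumed BIO-style, e.g. 'B-ART'."""
    rng = random.Random(seed)
    out = list(sentences)

    def count_rare(data):
        counts = Counter()
        for sentence in data:
            for tag in sentence["tags"]:
                base = tag.split("-")[-1]
                if base in rare_tags:
                    counts[base] += 1
        return counts

    counts = count_rare(out)
    for rare in rare_tags:
        # Pool of sentences that contain the rare tag at least once.
        pool = [s for s in sentences if any(t.endswith(rare) for t in s["tags"])]
        while pool and counts[rare] < target:
            chosen = rng.choice(pool)
            out.append(chosen)
            # Update counts for every rare tag in the duplicated sentence.
            for tag in chosen["tags"]:
                base = tag.split("-")[-1]
                if base in rare_tags:
                    counts[base] += 1
    return out
```

Simple duplication like this inflates rare-class counts without altering the underlying text, which is the usual caveat of naive oversampling.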
249
[ [ -0.046356201171875, -0.052886962890625, 0.006671905517578125, -0.00685882568359375, 0.0205535888671875, 0.019744873046875, -0.0070343017578125, -0.048370361328125, 0.0570068359375, 0.07318115234375, -0.045318603515625, -0.0268096923828125, -0.030120849609375, ...
autoevaluate/xsum-sample
2022-08-22T13:02:13.000Z
[ "region:us" ]
autoevaluate
null
null
0
3
2022-06-25T08:58:40
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
sophieb/dynamically_generated_hate_speech_dataset
2022-06-25T18:02:18.000Z
[ "region:us" ]
sophieb
null
null
0
3
2022-06-25T17:48:05
# Dataset card for the dynamically generated hate speech detection dataset

## Dataset summary

This dataset was dynamically generated for training and improving hate speech detection models. A group of trained annotators generated and labeled challenging examples so that hate speech models could be tricked and consequently improved. The dataset contains about 40,000 examples, of which 54% are labeled as hate speech. It also provides the target of the hate speech, including vulnerable, marginalized, and discriminated groups. Overall, this is a balanced dataset, which sets it apart from the hate speech datasets already available on the web.

This dataset was presented in the article [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://aclanthology.org/2021.acl-long.132.pdf), published in 2021. The article describes the process for generating and annotating the data, as well as how the generated data was used to train and improve hate speech detection models. The full author list is: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook), Zeerak Waseem (University of Sheffield), and Douwe Kiela (Facebook).
1,226
[ [ -0.036773681640625, -0.054962158203125, 0.00276947021484375, -0.03009033203125, -0.0026226043701171875, 0.0149383544921875, -0.016326904296875, -0.050048828125, 0.004848480224609375, 0.0263519287109375, -0.05645751953125, -0.0465087890625, -0.03936767578125, ...
ConvLab/metalwoz
2022-11-25T09:11:36.000Z
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "region:us" ]
ConvLab
null
null
1
3
2022-06-28T01:51:55
---
language:
- en
license: []
multilinguality:
- monolingual
pretty_name: MetaLWOZ
size_categories:
- 10K<n<100K
task_categories:
- conversational
---

# Dataset Card for MetaLWOZ

- **Repository:** https://www.microsoft.com/en-us/research/project/metalwoz/
- **Paper:** https://www.microsoft.com/en-us/research/publication/results-of-the-multi-domain-task-completion-dialog-challenge/
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu(zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('metalwoz')
ontology = load_ontology('metalwoz')
database = load_database('metalwoz')
```

For more usage, please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

This large dataset was created by crowdsourcing 37,884 goal-oriented dialogs, covering 227 tasks in 47 domains. Domains include bus schedules, apartment search, alarm setting, banking, and event reservation. Each dialog was grounded in a scenario with roles, pairing a person acting as the bot and a person acting as the user (this is the "Wizard of Oz" reference: people behind the curtain act as the machine). Each pair was given a domain and a task, and instructed to converse for 10 turns to satisfy the user's queries. For example, if a user asked if a bus stop was operational, the bot would respond that the bus stop had been moved two blocks north, which starts a conversation that addresses the user's actual need.

- **How to get the transformed data from original data:**
  - Download [metalwoz-v1.zip](https://www.microsoft.com/en-us/download/58389) and [metalwoz-test-v1.zip](https://www.microsoft.com/en-us/download/100639).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:** - `CITY_INFO`, `HOME_BOT`, `NAME_SUGGESTER`, and `TIME_ZONE` are randomly selected as the validation domains. - Remove the first utterance by the system since it is "Hello how may I help you?" in most cases. - Add goal description according to the original task description: user_role+user_prompt+system_role+system_prompt. - **Annotations:** - domain, goal ### Supported Tasks and Leaderboards RG, User simulator ### Languages English ### Data Splits | split | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) | |------------|-------------|--------------|-----------|--------------|---------------|-------------------------|------------------------|--------------------------------|-----------------------------------| | train | 34261 | 357092 | 10.42 | 7.48 | 1 | - | - | - | - | | validation | 3623 | 37060 | 10.23 | 6.59 | 1 | - | - | - | - | | test | 2319 | 23882 | 10.3 | 7.96 | 1 | - | - | - | - | | all | 40203 | 418034 | 10.4 | 7.43 | 1 | - | - | - | - | 51 domains: ['AGREEMENT_BOT', 'ALARM_SET', 'APARTMENT_FINDER', 'APPOINTMENT_REMINDER', 'AUTO_SORT', 'BANK_BOT', 'BUS_SCHEDULE_BOT', 'CATALOGUE_BOT', 'CHECK_STATUS', 'CITY_INFO', 'CONTACT_MANAGER', 'DECIDER_BOT', 'EDIT_PLAYLIST', 'EVENT_RESERVE', 'GAME_RULES', 'GEOGRAPHY', 'GUINESS_CHECK', 'HOME_BOT', 'HOW_TO_BASIC', 'INSURANCE', 'LIBRARY_REQUEST', 'LOOK_UP_INFO', 'MAKE_RESTAURANT_RESERVATIONS', 'MOVIE_LISTINGS', 'MUSIC_SUGGESTER', 'NAME_SUGGESTER', 'ORDER_PIZZA', 'PET_ADVICE', 'PHONE_PLAN_BOT', 'PHONE_SETTINGS', 'PLAY_TIMES', 'POLICY_BOT', 'PRESENT_IDEAS', 'PROMPT_GENERATOR', 'QUOTE_OF_THE_DAY_BOT', 'RESTAURANT_PICKER', 'SCAM_LOOKUP', 'SHOPPING', 'SKI_BOT', 'SPORTS_INFO', 'STORE_DETAILS', 'TIME_ZONE', 'UPDATE_CALENDAR', 'UPDATE_CONTACT', 'WEATHER_CHECK', 'WEDDING_PLANNER', 'WHAT_IS_IT', 'BOOKING_FLIGHT', 'HOTEL_RESERVE', 'TOURISM', 'VACATION_IDEAS'] - **cat 
slot match**: how many values of categorical slots are in the possible values of ontology in percentage. - **non-cat slot span**: how many values of non-categorical slots have span annotation in percentage. ### Citation ``` @inproceedings{li2020results, author = {Li, Jinchao and Peng, Baolin and Lee, Sungjin and Gao, Jianfeng and Takanobu, Ryuichi and Zhu, Qi and Minlie Huang and Schulz, Hannes and Atkinson, Adam and Adada, Mahmoud}, title = {Results of the Multi-Domain Task-Completion Dialog Challenge}, booktitle = {Proceedings of the 34th AAAI Conference on Artificial Intelligence, Eighth Dialog System Technology Challenge Workshop}, year = {2020}, month = {February}, url = {https://www.microsoft.com/en-us/research/publication/results-of-the-multi-domain-task-completion-dialog-challenge/}, } ``` ### Licensing Information [Microsoft Research Data License Agreement](https://msropendata-web-api.azurewebsites.net/licenses/2f933be3-284d-500b-7ea3-2aa2fd0f1bb2/view)
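The goal-description construction mentioned in the transformation notes (user_role+user_prompt+system_role+system_prompt) can be sketched as below; the dict keys are illustrative assumptions, not necessarily the raw MetaLWOZ field names:

```python
def build_goal_description(task: dict) -> str:
    # Concatenate the four task-description fields in the order the card
    # describes; the key names below are assumed for illustration only.
    parts = (task["user_role"], task["user_prompt"],
             task["system_role"], task["system_prompt"])
    return " ".join(p.strip() for p in parts if p)

task = {
    "user_role": "Passenger",
    "user_prompt": "Ask when the next bus arrives.",
    "system_role": "Bus schedule bot",
    "system_prompt": "Answer questions about bus schedules.",
}
print(build_goal_description(task))
```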
5,569
[ [ -0.04327392578125, -0.0562744140625, 0.0200042724609375, 0.0004565715789794922, 0.018463134765625, 0.02008056640625, -0.00785064697265625, -0.0195465087890625, 0.01263427734375, 0.046295166015625, -0.0745849609375, -0.046356201171875, -0.04266357421875, 0.02...
ConvLab/dailydialog
2022-11-25T09:06:49.000Z
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:1710.03957", "region:us" ]
ConvLab
null
null
0
3
2022-06-28T02:07:17
--- language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual pretty_name: DailyDialog size_categories: - 10K<n<100K task_categories: - conversational --- # Dataset Card for DailyDialog - **Repository:** http://yanran.li/dailydialog - **Paper:** https://arxiv.org/pdf/1710.03957.pdf - **Leaderboard:** None - **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com) To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via: ``` from convlab.util import load_dataset, load_ontology, load_database dataset = load_dataset('dailydialog') ontology = load_ontology('dailydialog') database = load_database('dailydialog') ``` For more usage, please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets). ### Dataset Summary DailyDialog is a high-quality multi-turn dialog dataset. It is appealing in several respects: the language is human-written and less noisy, the dialogues reflect the way people communicate in daily life, and they cover a wide range of everyday topics. The dataset is also manually labeled with communication intention and emotion information. - **How to get the transformed data from original data:** - Download [ijcnlp_dailydialog.zip](http://yanran.li/files/ijcnlp_dailydialog.zip). - Run `python preprocess.py` in the current directory. - **Main changes of the transformation:** - Use `topic` annotation as `domain`. If duplicated dialogs are annotated with different topics, use the most frequent one. - Use `intent` annotation as `binary` dialogue act. - Retain emotion annotation in the `emotion` field of each turn. - Use nltk to remove space before punctuation: `utt = ' '.join([detokenizer.detokenize(word_tokenize(s)) for s in sent_tokenize(utt)])`. - Replace `" ’ "` with `"'"`: `utt = utt.replace(' ’ ', "'")`. 
- Add space after full-stop - **Annotations:** - intent, emotion ### Supported Tasks and Leaderboards NLU, NLG ### Languages English ### Data Splits | split | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) | |------------|-------------|--------------|-----------|--------------|---------------|-------------------------|------------------------|--------------------------------|-----------------------------------| | train | 11118 | 87170 | 7.84 | 11.22 | 1 | - | - | - | - | | validation | 1000 | 8069 | 8.07 | 11.16 | 1 | - | - | - | - | | test | 1000 | 7740 | 7.74 | 11.36 | 1 | - | - | - | - | | all | 13118 | 102979 | 7.85 | 11.22 | 1 | - | - | - | - | 10 domains: ['Ordinary Life', 'School Life', 'Culture & Education', 'Attitude & Emotion', 'Relationship', 'Tourism', 'Health', 'Work', 'Politics', 'Finance'] - **cat slot match**: how many values of categorical slots are in the possible values of ontology in percentage. - **non-cat slot span**: how many values of non-categorical slots have span annotation in percentage. ### Citation ``` @InProceedings{li2017dailydialog, author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi}, title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset}, booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)}, year = {2017} } ``` ### Licensing Information [**CC BY-NC-SA 4.0**](https://creativecommons.org/licenses/by-nc-sa/4.0/)
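The text-normalization steps in the transformation above rely on nltk; a lightweight, dependency-free approximation (regexes stand in for the nltk detokenizer, so edge cases may differ) looks like this:

```python
import re

def normalize(utt: str) -> str:
    # Remove space before punctuation (approximates the nltk detokenizer step).
    utt = re.sub(r"\s+([.,!?;:])", r"\1", utt)
    # Replace the spaced right single quote with an apostrophe.
    utt = utt.replace(" ’ ", "'")
    # Add a space after a full stop that is glued to the next word.
    utt = re.sub(r"\.(?=[A-Za-z])", ". ", utt)
    return utt

print(normalize("I ’ m fine , thanks .How about you ?"))
# → I'm fine, thanks. How about you?
```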
4,159
[ [ -0.03204345703125, -0.060699462890625, 0.0146636962890625, 0.0223541259765625, -0.0031185150146484375, -0.0023021697998046875, -0.01129150390625, -0.0228118896484375, 0.0218353271484375, 0.048553466796875, -0.072509765625, -0.055206298828125, -0.038848876953125,...
ctu-aic/enfever_nli
2022-06-29T13:05:10.000Z
[ "region:us" ]
ctu-aic
EnfeverNLI is an NLI version of the FEVER dataset
todo
1
3
2022-06-29T12:55:00
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
launch/ampere
2022-11-09T01:57:52.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
launch
null
null
0
3
2022-07-01T02:29:23
--- annotations_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual task_categories: - text-classification task_ids: [] pretty_name: AMPERE --- # Dataset Card for AMPERE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Structure](#dataset-structure) - [Dataset Creation](#dataset-creation) ## Dataset Description This dataset is released together with our NAACL 2019 Paper "[`Argument Mining for Understanding Peer Reviews`](https://aclanthology.org/N19-1219/)". If you find our work useful, please cite: ``` @inproceedings{hua-etal-2019-argument, title = "Argument Mining for Understanding Peer Reviews", author = "Hua, Xinyu and Nikolov, Mitko and Badugu, Nikhil and Wang, Lu", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N19-1219", doi = "10.18653/v1/N19-1219", pages = "2131--2137", } ``` This dataset includes 400 scientific peer reviews collected from ICLR 2018 hosted at the Openreview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types: - **evaluation**: a proposition that is not objectively verifiable and does not require any action to be performed, such as qualitative judgement and interpretation of the paper, e.g. "The paper shows nice results on a number of small tasks." - **request**: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as recommendation and suggestion for new experiments, e.g. "I would really like to see how the method performs without this hack." 
- **fact**: a proposition that is verifiable with objective evidence, such as a mathematical conclusion or common knowledge of the field, e.g. "This work proposes a dynamic weight update scheme." - **quote**: a quote from the paper or another source, e.g. "The author wrote 'where r is lower bound of feature norm'." - **reference**: a proposition that refers to objective evidence, such as a URL link or citation, e.g. "see MuseGAN (Dong et al), MidiNet (Yang et al), etc." - **non-arg**: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. "Aha, now I understand." ## Dataset Structure The dataset is partitioned into train/val/test sets. Each set is uploaded in jsonl format. Each line contains the following elements: - `doc_id` (str): a unique id for the review document - `text` (list[str]): a list of segmented propositions - `labels` (list[str]): a list of labels corresponding to the propositions An example looks as follows. ``` { "doc_id": "H1WORsdlG", "text": [ "This paper addresses the important problem of understanding mathematically how GANs work.", "The approach taken here is to look at GAN through the lense of the scattering transform.", "Unfortunately the manuscrit submitted is very poorly written.", "Introduction and flow of thoughts is really hard to follow.", "In method sections, the text jumps from one concept to the next without proper definitions.", "Sorry I stopped reading on page 3.", "I suggest to rewrite this work before sending it to review.", "Among many things: - For citations use citep and not citet to have () at the right places.", "- Why does it seems -> Why does it seem etc.", ], "labels": [ 'fact', 'fact', 'evaluation', 'evaluation', 'evaluation', 'evaluation', 'request', 'request', 'request', ] } ``` ## Dataset Creation Human annotators are asked to first read the above definitions and controversial cases carefully. 
The dataset to be annotated consists of 400 reviews partitioned in 20 batches. Each annotator will follow the following steps for annotation: - Step 1: Open a review file with a text editor. The unannotated review file contains only one line, please separate it into multiple lines with each line corresponding to one single proposition. Repeat the above actions on all 400 reviews. - Step 2: Based on the segmented units, label the type for each proposition. Start labeling at the end of each file with the marker "## Labels:". Indicate the line number of the proposition first, then annotate the type, e.g. "1. evaluation" for the first proposition. Repeat the above actions on all 400 reviews. A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type.
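Given the jsonl structure described above, a small loader that checks the proposition/label alignment and tallies label frequencies might look like this (a sketch, not the authors' tooling):

```python
import json
from collections import Counter

def load_ampere(lines):
    """Parse AMPERE jsonl lines and verify one label per proposition."""
    reviews = []
    for line in lines:
        review = json.loads(line)
        # Each proposition in `text` must have exactly one label in `labels`.
        assert len(review["text"]) == len(review["labels"]), review["doc_id"]
        reviews.append(review)
    return reviews

def label_distribution(reviews):
    # Count proposition types across all reviews.
    return Counter(label for r in reviews for label in r["labels"])

sample = [json.dumps({
    "doc_id": "H1WORsdlG",
    "text": ["This work proposes a dynamic weight update scheme.",
             "I would really like to see an ablation."],
    "labels": ["fact", "request"],
})]
print(label_distribution(load_ampere(sample)))  # Counter({'fact': 1, 'request': 1})
```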
5,000
[ [ -0.045745849609375, -0.046661376953125, 0.0285797119140625, 0.013885498046875, -0.0264892578125, -0.006565093994140625, -0.01074981689453125, -0.0238494873046875, 0.031402587890625, 0.04046630859375, -0.025848388671875, -0.04534912109375, -0.035675048828125, ...
Tritkoman/autotrain-data-Rusynpannonianpure
2022-10-25T10:39:40.000Z
[ "task_categories:translation", "language:en", "language:es", "region:us" ]
Tritkoman
null
null
0
3
2022-07-01T05:20:02
--- language: - en - es task_categories: - translation --- # AutoTrain Dataset for project: Rusynpannonianpure ## Dataset Description This dataset has been automatically processed by AutoTrain for project Rusynpannonianpure. ### Languages The BCP-47 code for the dataset's language is en2es. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "source": "\"I came to the region to meet with the leaders of the parties and discuss the progress in normalizin[...]", "target": "\"\u042f \u043f\u0440\u0438\u0448\u043e\u043b \u0434\u043e \u0440\u0435\u0491\u0438\u043e\u043d\u0443 \u043f\u0440\u0438\u0440\u0438\u0445\u0442\u0430\u0446 \u0448\u043b\u0457\u0434\u0443\u044e\u0446\u0438 \u0441\u0445\u043e\u0434 \u043b\u0438\u0434\u0435\u0440\u043e\u0445 \u0438 \u0431\u0435\u0448\u0435\u0434\u043e\u0432\u0430\u0446 \u043e \u043d\u0430\u043f\u0440\u0435\u0434\u043e\u0432\u0430\u043d\u044e \u0443 \u043d\u043e\u0440\u043c\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u0457 \u043e\u0434\u043d\u043e\u0448\u0435[...]" }, { "source": "\"We had a very good discussion yesterday evening about the situation and it is normal to look for a [...]", "target": "\"\u041c\u0430\u043b\u0438 \u0437\u043c\u0435 \u0454\u0434\u043d\u0443 \u043e\u0437\u0431\u0438\u043b\u044c\u043d\u0443 \u0440\u043e\u0437\u0433\u0432\u0430\u0440\u043a\u0443 \u0432\u0447\u0435\u0440\u0430 \u0432\u0435\u0447\u0430\u0440 \u043e \u0441\u0438\u0442\u0443\u0430\u0446\u0438\u0457 \u0438 \u043d\u043e\u0440\u043c\u0430\u043b\u043d\u043e \u0436\u0435 \u043f\u043e\u0442\u0440\u0435\u0431\u043d\u0435 \u0433\u043b\u0454\u0434\u0430\u0446 \u0440\u0438\u0448\u0435\u043d\u0454 \u043f\u0440\u0435\u0437 \u0434[...]" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "source": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. 
The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 3 | | valid | 1 |
2,174
[ [ -0.029937744140625, -0.0225372314453125, 0.0265655517578125, 0.009796142578125, -0.01800537109375, -0.00482940673828125, -0.006244659423828125, -0.015869140625, 0.0250701904296875, 0.02520751953125, -0.046112060546875, -0.054962158203125, -0.051239013671875, ...
joelniklaus/covid19_emergency_event
2022-09-22T13:44:15.000Z
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:found", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:fr", "language:hu"...
joelniklaus
null
null
0
3
2022-07-01T11:26:15
--- annotations_creators: - found - other language_creators: - found language: - en - fr - hu - it - nb - nl - pl license: - cc0-1.0 multilinguality: - multilingual pretty_name: EXCEPTIUS Corpus size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification --- # Dataset Card for EXCEPTIUS Corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://exceptius.com/ - **Repository:** https://github.com/tommasoc80/COVID19_emergency_event - **Paper:** Tziafas, G., de Saint-Phalle, E., de Vries, W., Egger, C., & Caselli, T. (2021). A Multilingual Approach to Identify and Classify Exceptional Measures against {COVID}-19. Proceedings of the Natural Legal Language Processing Workshop 2021, 46–62. 
https://doi.org/10.18653/v1/2021.nllp-1.5 - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch) ### Dataset Summary This dataset presents a new corpus of legislative documents from 8 European countries (Belgium, France, Hungary, Italy, Netherlands, Norway, Poland, UK) in 7 languages (Dutch, English, French, Hungarian, Italian, Norwegian Bokmål, Polish) manually annotated for exceptional measures against COVID-19. The annotation was done on the sentence level. ### Supported Tasks and Leaderboards The dataset can be used for multi-label text classification tasks. ### Languages Dutch, English, French, Hungarian, Italian, Norwegian Bokmål, Polish ## Dataset Structure ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). ### Data Fields The jsonl files have the following basic columns: - `language`: The language of the sentence (set based on the country) - `country`: The country of the sentence - `text`: Sentence that has been annotated The documents have been annotated with 8 labels, each label representing a specific measure against COVID-19. Each label is represented by one boolean field in the jsonl file. The labels, i.e. the specific measure classes, are: - `event1`: State of Emergency - `event2`: Restrictions of fundamental rights and civil liberties - `event3`: Restrictions of daily liberties - `event4`: Closures / lockdown - `event5`: Suspension of international cooperation and commitments - `event6`: Police mobilization - `event7`: Army mobilization - `event8`: Government oversight - `all_events`: an aggregate column containing all applicable events combined ### Data Splits All annotated sentences combined have the following split: - train: 3312 (80%) - dev: 418 (10%) - test: 418 (10%) The splits have been performed on each country and have later been merged. Therefore, each split contains sentences from each country. 
The following label distribution shows the number of occurrences per label per split. `total occurrences` sums up the previous rows (total number of events per split). `split size` is the number of sentences per split. | Event | train | validation | test | |:----------------------|----------:|-----------:|----------:| | event1 | 383 | 54 | 47 | | event2 | 253 | 39 | 42 | | event3 | 412 | 70 | 62 | | event4 | 617 | 75 | 93 | | event5 | 52 | 4 | 6 | | event6 | 15 | 2 | 1 | | event7 | 45 | 4 | 5 | | event8 | 146 | 21 | 19 | | **total occurrences** | **1923** | **269** | **275** | | **split size** | **3312** | **418** | **418** | ## Dataset Creation ### Curation Rationale *"Investigate the potential of multilingual pretrained language models in order to facilitate the analysis, exploration, and comparison of legal texts on COVID-19 exceptional measures"* (Tziafas et al., 2021) ### Source Data #### Initial Data Collection and Normalization *“The corpus collection process has been overseen by four political science experts working in partnership with national legal experts. All documents were retrieved from official governmental websites that publish legal acts. The identification of the relevant documents has been done by means of 4 keywords (i.e., “COVID”, “COVID-19”, “Coronavirus” and “Health emergency”). For each language, the corresponding language specific keywords were used. In this initial phase, we focus on a sample of 19 EEA countries plus UK and Switzerland on measures adopted at the national level. To do so, we identify publicly available links to relevant documents. We could not find corresponding documents for two countries of the EEA (i.e., Bulgaria and Greece). All documents have been collected either by manually downloading them or by automatic scraping. For countries with more than one official language (e.g., Switzerland), legal acts were collected in all available languages.”* (Tziafas et al., 2021) #### Who are the source language producers? 
Politicians and legal experts have been involved in producing the language material. ### Annotations #### Annotation process *"A subset of 281 documents in eight languages has been selected for manual annotation. The annotation of the exceptional measures applies at sentence-level. The sample is based on the French, Polish, Dutch, English, Hungarian, Belgian, Italian, and Norwegian sub-corpora. Annotators were allowed to assign as many subclasses as they consider relevant to each sentence, but with a total of eight main classes of exceptional measures. Sentences can potentially entail multiple exceptional classes, making this a multi-label annotation task. The annotation process results in eight binary annotations per sentence, with 0 if the specific class is not identified within the sentence and 1 if it is. The annotation has been conducted by three experts in political science working under the supervision of the project’s Scientific Board. Since the annotators are not fluent in all languages and due to the impossibility of recruiting expert native speakers, some documents need to be translated into English to be manually annotated. No inter-annotator agreement study has been conducted in this initial phase. We intend to remedy this limitation in the project’s next development cycle. However, during the annotation phase, annotators met on a weekly basis to discuss ambiguous cases and the guidelines. Annotators are encouraged to propose new classes or subclasses. For a new (sub)class to be accepted, the measure should have been independently identified by the majority of the annotators. In this phase, no new classes were proposed."* (Tziafas et al., 2021) #### Who are the annotators? 
*"The annotation has been conducted by three experts in political science working under the supervision of the project’s Scientific Board."* (Tziafas et al., 2021) ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. ## Additional Information ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch); [Github](https://github.com/kapllan)). 
### Licensing Information Creative Commons Zero v1.0 Universal ### Citation Information ``` @inproceedings{tziafas-etal-2021-multilingual, title = "A Multilingual Approach to Identify and Classify Exceptional Measures against {COVID}-19", author = "Tziafas, Georgios and de Saint-Phalle, Eugenie and de Vries, Wietse and Egger, Clara and Caselli, Tommaso", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.nllp-1.5", pages = "46--62", } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
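The per-label counts in the Data Splits section can be reproduced from the jsonl files with a few lines; field names follow the Data Fields section above (a sketch, assuming the eight event columns are stored as JSON booleans):

```python
import json
from collections import Counter

EVENT_FIELDS = [f"event{i}" for i in range(1, 9)]

def event_counts(jsonl_lines):
    # Tally how often each of the eight measure classes is marked true.
    counts = Counter({f: 0 for f in EVENT_FIELDS})
    for line in jsonl_lines:
        row = json.loads(line)
        for field in EVENT_FIELDS:
            if row.get(field):
                counts[field] += 1
    return counts

sample = [
    json.dumps({"language": "en", "country": "uk",
                "text": "A nationwide lockdown is imposed.", "event4": True}),
    json.dumps({"language": "fr", "country": "france",
                "text": "L'état d'urgence est déclaré.",
                "event1": True, "event4": True}),
]
counts = event_counts(sample)
print(counts["event1"], counts["event4"])  # 1 2
```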
10,501
[ [ -0.0318603515625, -0.043609619140625, 0.0185699462890625, 0.0174560546875, -0.0168304443359375, -0.00046253204345703125, -0.0318603515625, -0.0286407470703125, 0.037506103515625, 0.03466796875, -0.019500732421875, -0.06536865234375, -0.05718994140625, 0.0266...
FIdo-AI/ua-squad
2022-07-09T20:55:51.000Z
[ "region:us" ]
FIdo-AI
null
null
3
3
2022-07-03T15:28:24
--- annotations_creators: - crowdsourced language: - uk language_creators: - crowdsourced license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: squad pretty_name: '' size_categories: - 100K<n<1M source_datasets: - extended|squad_v2 task_categories: - question-answering task_ids: - open-domain-qa - extractive-qa train-eval-index: - col_mapping: answers: answer_start: answer_start text: text context: context question: question config: squad_v2 metrics: - name: SQuAD v2 type: squad_v2 splits: eval_split: validation train_split: train task: question-answering task_id: extractive_question_answering --- # Dataset Card for ua-squad ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/fido-ai/ua-datasets - **Repository:** https://huggingface.co/datasets/FIdo-AI/ua-squad - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Ukrainian translation of the Stanford Question Answering Dataset (SQuAD) 2.0 ### 
Supported Tasks and Leaderboards question-answering ### Languages Ukrainian ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
3,117
[ [ -0.038726806640625, -0.04473876953125, 0.01335906982421875, 0.013946533203125, -0.0019588470458984375, 0.0160675048828125, -0.01528167724609375, -0.03350830078125, 0.036376953125, 0.037750244140625, -0.07672119140625, -0.05303955078125, -0.0404052734375, 0.0...
lyakaap/laion-mini-ja
2022-07-05T02:30:45.000Z
[ "region:us" ]
lyakaap
null
null
1
3
2022-07-04T23:18:55
#samples=5007831 ``` from datasets import load_dataset dataset = load_dataset('lyakaap/laion2B-japanese-subset', split='train') dataset = dataset.remove_columns(['LANGUAGE', 'NSFW', 'LICENSE', 'SAMPLE_ID']) dataset = dataset.filter(lambda x: x['HEIGHT'] <= 384 and x['WIDTH'] <= 384) dataset = dataset.filter(lambda x: x['HEIGHT'] >= 128 and x['WIDTH'] >= 128) dataset = dataset.filter(lambda x: x['similarity'] >= 0.31) dataset.push_to_hub('lyakaap/laion-mini-ja', token='XXX') ```
449
[ [ -0.037322998046875, -0.028778076171875, 0.0125274658203125, 0.01515960693359375, -0.032501220703125, -0.0080718994140625, -0.0117034912109375, -0.00856781005859375, 0.033599853515625, 0.044525146484375, -0.0572509765625, -0.04583740234375, -0.032958984375, 0...
Paul/hatecheck-polish
2022-07-05T10:26:41.000Z
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-4.0", "arxiv:2206.09917", "regi...
Paul
null
null
1
3
2022-07-05T10:24:24
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - pl license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Polish HateCheck size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** paul@rewire.online ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. 
**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
3,489
[ [ -0.046661376953125, -0.05206298828125, -0.0040130615234375, 0.006702423095703125, -0.008392333984375, 0.00782012939453125, -0.0022068023681640625, -0.037109375, 0.0290679931640625, 0.023834228515625, -0.055206298828125, -0.056182861328125, -0.040863037109375, ...
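The column layout documented in the MHC card above (`functionality`, `test_case`, `label_gold`) lends itself to per-functionality scoring, which is the diagnostic use the card describes. A minimal sketch, assuming a small inline CSV sample in place of the real file; the `predict` stub is a hypothetical placeholder for an actual hate speech classifier:

```python
import csv
import io
from collections import defaultdict

# Inline stand-in for a few MHC rows; the real data ships as CSV with these columns.
SAMPLE = """mhc_case_id,functionality,test_case,label_gold
polish-1,target_obj_nh,example text 1,non-hateful
polish-2,derog_neg_emote_h,example text 2,hateful
polish-3,derog_neg_emote_h,example text 3,hateful
"""

def predict(text):
    # Hypothetical classifier stub: a real model would go here.
    return "hateful" if "2" in text or "3" in text else "non-hateful"

def per_functionality_accuracy(csv_text):
    """Score predictions against label_gold, grouped by functionality."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        func = row["functionality"]
        totals[func] += 1
        if predict(row["test_case"]) == row["label_gold"]:
            hits[func] += 1
    return {f: hits[f] / totals[f] for f in totals}

acc = per_functionality_accuracy(SAMPLE)
```

Grouping by `functionality` rather than reporting one aggregate score is what gives the targeted diagnostic insight the card mentions.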
Paul/hatecheck-mandarin
2022-07-05T10:32:33.000Z
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:zh", "license:cc-by-4.0", "arxiv:2206.09917", "regi...
Paul
null
null
1
3
2022-07-05T10:31:28
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - zh license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Mandarin HateCheck size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** paul@rewire.online ## Dataset Structure The CSV format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
3,491
[ [ -0.046661376953125, -0.05206298828125, -0.0040130615234375, 0.006702423095703125, -0.008392333984375, 0.00782012939453125, -0.0022068023681640625, -0.037109375, 0.0290679931640625, 0.023834228515625, -0.055206298828125, -0.056182861328125, -0.040863037109375, ...
Paul/hatecheck-italian
2022-07-05T10:35:17.000Z
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:it", "license:cc-by-4.0", "arxiv:2206.09917", "regi...
Paul
null
null
1
3
2022-07-05T10:33:01
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - it license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Italian HateCheck size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** paul@rewire.online ## Dataset Structure The CSV format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
3,490
[ [ -0.046661376953125, -0.05206298828125, -0.0040130615234375, 0.006702423095703125, -0.008392333984375, 0.00782012939453125, -0.0022068023681640625, -0.037109375, 0.0290679931640625, 0.023834228515625, -0.055206298828125, -0.056182861328125, -0.040863037109375, ...
Paul/hatecheck-hindi
2022-07-05T10:36:37.000Z
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:hi", "license:cc-by-4.0", "arxiv:2206.09917", "regi...
Paul
null
null
0
3
2022-07-05T10:35:40
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - hi license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Hindi HateCheck size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** paul@rewire.online ## Dataset Structure The CSV format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
3,488
[ [ -0.046661376953125, -0.05206298828125, -0.0040130615234375, 0.006702423095703125, -0.008392333984375, 0.00782012939453125, -0.0022068023681640625, -0.037109375, 0.0290679931640625, 0.023834228515625, -0.055206298828125, -0.056182861328125, -0.040863037109375, ...
CShorten/Last-Week-on-ML-ArXiv
2022-07-12T21:03:47.000Z
[ "region:us" ]
CShorten
null
null
0
3
2022-07-07T12:01:47
Please check here to see when the dataset was last updated. <br /> <h1> Last Updated July 12th, 2022 </h1>
106
[ [ -0.015380859375, -0.0300750732421875, 0.0252227783203125, 0.039764404296875, -0.01486968994140625, -0.0123443603515625, 0.0081634521484375, -0.029052734375, 0.018280029296875, 0.04901123046875, -0.06787109375, -0.06781005859375, -0.027587890625, -0.003784179...
biglam/atypical_animacy
2022-07-22T17:29:12.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:c...
biglam
Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library (available via https://doi.org/10.21250/db14, British Library Labs, 2014). This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.
@article{DBLP:journals/corr/abs-2005-11140, author = {Mariona Coll Ardanuy and Federico Nanni and Kaspar Beelen and Kasra Hosseini and Ruth Ahnert and Jon Lawrence and Katherine McDonough and Giorgia Tolfo and Daniel C. S. Wilson and Barbara McGillivray}, title = {Living Machines: {A} study of atypical animacy}, journal = {CoRR}, volume = {abs/2005.11140}, year = {2020}, url = {https://arxiv.org/abs/2005.11140}, eprinttype = {arXiv}, eprint = {2005.11140}, timestamp = {Sat, 23 Jan 2021 01:12:25 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-11140.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
3
3
2022-07-11T21:33:07
--- annotations_creators: - expert-generated language: - en language_creators: - machine-generated license: - cc0-1.0 multilinguality: - monolingual paperswithcode_id: null pretty_name: Atypical Animacy size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification - intent-classification --- # Dataset Card for atypical_animacy ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://bl.iro.bl.uk/concern/datasets/323177af-6081-4e93-8aaf-7932ca4a390a?locale=en - **Repository:** https://github.com/Living-with-machines/AtypicalAnimacy - **Paper:** https://arxiv.org/abs/2005.11140 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Mariona Coll Ardanuy](mailto:mcollardanuy@turing.ac.uk), [Daniel CS Wilson](mailto:dwilson@turing.ac.uk) ### Dataset Summary Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. 
This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence. ### Supported Tasks and Leaderboards - `text-classification` - This dataset can be used to determine if a mention of an entity in a document was humanlike or not - `entity-recognition` - The dataset can be used to fine-tune large models for NER, albeit for a very specific use case ### Languages The text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code is `en` ## Dataset Structure The dataset has a single configuration ### Data Instances An example data point ``` {'id': '002757962_01_184_16', 'sentence': '100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.', 'context': 'Fig. 100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue. The effect of this on a long boiler is to cause springing and leakage of the seams from the heat being applied to one side of the boiler only.', 'target': 'boiler', 'animacy': 0.0, 'humanness': 1.0, 'offsets': [20, 26], 'date': '1893'} ``` ### Data Fields - id: sentence identifier according to internal Living with Machines BL books indexing. - sentence: sentence where target expression occurs. - context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right. - target: target expression - animacy: animacy of the target expression - humanness: humanness of the target expression ### Data Splits Train | 598 ## Dataset Creation The dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors, > "we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love.
We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue" ### Curation Rationale From the paper: > The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found (folktales, in particular, are richer in typically inanimate entities that become animate), these account for a very small proportion of the data. We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts. ### Source Data #### Initial Data Collection and Normalization The dataset was generated by manually annotating books that have been digitized by the British Library. #### Who are the source language producers? The data was originally produced by British authors in the 19th century. The books were then digitized, which introduces some noise due to the OCR method. The annotators are from The Alan Turing Institute, The British Library, University of Cambridge, University of Exeter and Queen Mary University of London. ### Annotations #### Annotation process Annotation was carried out in two parts. For the initial annotation process, from the paper: > "For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible.
The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. " For the final annotations, from the paper: > A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization. #### Who are the annotators? Annotations were carried out by the following people - Giorgia Tolfo - Ruth Ahnert - Kaspar Beelen - Mariona Coll Ardanuy - Jon Lawrence - Katherine McDonough - Federico Nanni - Daniel CS Wilson ### Personal and Sensitive Information This dataset does not have any personal information since they are digitizations of books from the 19th century. Some passages might be sensitive, but it is not explicitly mentioned in the paper. 
## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The curators for this dataset are: - Kaspar Beelen - Mariona Coll Ardanuy - Federico Nanni - Giorgia Tolfo ### Licensing Information CC0 1.0 Universal Public Domain ### Citation Information ``` @article{DBLP:journals/corr/abs-2005-11140, author = {Mariona Coll Ardanuy and Federico Nanni and Kaspar Beelen and Kasra Hosseini and Ruth Ahnert and Jon Lawrence and Katherine McDonough and Giorgia Tolfo and Daniel C. S. Wilson and Barbara McGillivray}, title = {Living Machines: {A} study of atypical animacy}, journal = {CoRR}, volume = {abs/2005.11140}, year = {2020}, url = {https://arxiv.org/abs/2005.11140}, eprinttype = {arXiv}, eprint = {2005.11140}, timestamp = {Sat, 23 Jan 2021 01:12:25 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-11140.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
8,863
[ [ -0.0266876220703125, -0.043182373046875, 0.0195770263671875, -0.0107574462890625, -0.0216522216796875, 0.0026073455810546875, -0.00341033935546875, -0.045379638671875, 0.050048828125, 0.04278564453125, -0.050933837890625, -0.036834716796875, -0.043975830078125, ...
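The `offsets` field in the atypical-animacy example above is a character span into `sentence`; slicing the sentence at those offsets recovers the `target` expression. A quick check, with the values copied verbatim from the data instance shown in the card:

```python
# Values copied from the example data point in the card above.
record = {
    "sentence": "100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.",
    "target": "boiler",
    "offsets": [20, 26],
}

def target_span(rec):
    """Slice the target expression out of the sentence using its character offsets."""
    start, end = rec["offsets"]
    return rec["sentence"][start:end]

span = target_span(record)  # matches rec["target"]
```

Note that the offsets index into `sentence`, not `context` (which prepends "Fig. " and so shifts every position by five characters).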
arbml/ashaar
2022-09-03T18:05:56.000Z
[ "region:us" ]
arbml
null
null
0
3
2022-07-12T14:42:57
# Ashaar Introducing Ashaar, the largest dataset for Arabic poetry. # General statistics | metric | value | |-----------------|-----------| | number of poems | 254,630 | | number of baits | 3,857,429 | | number of poets | 7,167 | # License This dataset is released under fair use for research and development only. Poets retain the sole right to take down any access to their work. The authors of the source websites also have the right to take down any material that does not conform with that. This work should not be used for any commercial purposes.
570
[ [ -0.022247314453125, 0.0092926025390625, -0.00316619873046875, 0.027313232421875, -0.039215087890625, -0.007236480712890625, -0.0063323974609375, -0.033905029296875, 0.021697998046875, 0.03350830078125, -0.007354736328125, -0.0640869140625, -0.0740966796875, ...
ilmariky/WikiQA-100-fi
2022-10-25T15:47:21.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:n<1k", "language:fi", "license:gpl-3.0", "question-generation", "region:us" ]
ilmariky
null
null
0
3
2022-07-12T18:51:02
--- language: - fi license: - gpl-3.0 multilinguality: - monolingual size_categories: - n<1k task_categories: - question-answering task_ids: - extractive-qa pretty_name: WikiQA-100-fi tags: - question-generation train-eval-index: - config: plain_text task: question-answering task_id: extractive_question_answering splits: train_split: train eval_split: validation col_mapping: question: question context: context answers: text: text answer_start: answer_start --- # Dataset Card for "WikiQA-100-fi" ### Dataset Summary WikiQA-100-fi dataset contains 100 questions related to Finnish Wikipedia articles. The dataset is in the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, and there are 10 questions for each category identified by the authors of SQuAD. Unlike SQuAD2.0, WikiQA-100-fi contains only answerable questions. The dataset is tiny compared to actual QA test sets, but it still gives an impression of the models' performance on purely native text data collected by a native speaker. The dataset was originally created as an evaluation set for models that had been mostly fine-tuned with automatically translated QA data. More information about the dataset and models created with it can be found [here](https://helda.helsinki.fi/handle/10138/344973). ## Dataset Structure ### Data Instances Example data: ``` { "title": "Folksonomia", "paragraphs": [ { "qas": [ { "question": "Minkälaista sisältöä käyttäjät voivat luokitella folksonomian avulla?", "id": "6t4ufel624", "answers": [ { "text": "www-sivuja, valokuvia ja linkkejä", "answer_start": 155 } ], "is_impossible": false } ], "context": "Folksonomia (engl. folksonomy) on yhteisöllisesti tuotettu, avoin luokittelujärjestelmä, jonka avulla internet-käyttäjät voivat luokitella sisältöä, kuten www-sivuja, valokuvia ja linkkejä. Etymologisesti folksonomia on peräisin sanojen \"folk\" (suom. väki) ja \"taxonomy\" (suom. taksonomia) leikkimielisestä yhdistelmästä." 
} ] } ``` ### Data Fields #### plain_text - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name | test| |----------|----:| |plain_text| 100| ### Citation Information ``` @MastersThesis{3241c198b3f147faacbc6d8b64ed9419, author = "Kylli{\"a}inen, {Ilmari}", title = "Neural Factoid Question Answering and Question Generation for Finnish", language = "en", address = "Helsinki, Finland", school = "University of Helsinki", year = "2022", month = "jun", day = "15", url = "https://helda.helsinki.fi/handle/10138/344973" } ```
3,067
[ [ -0.053466796875, -0.06463623046875, 0.0190582275390625, 0.000022530555725097656, -0.00901031494140625, -0.01195526123046875, -0.006557464599609375, -0.01319122314453125, 0.04168701171875, 0.0214385986328125, -0.0689697265625, -0.040740966796875, -0.0101318359375...
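In the SQuAD format used by WikiQA-100-fi above, `answer_start` is a character offset into `context`, so the answer text can be recovered by slicing. A minimal sketch using the example paragraph from the card (context truncated after the answer for brevity):

```python
# Paragraph copied from the example in the card above.
paragraph = {
    "qas": [{
        "id": "6t4ufel624",
        "answers": [{"text": "www-sivuja, valokuvia ja linkkejä", "answer_start": 155}],
    }],
    "context": (
        "Folksonomia (engl. folksonomy) on yhteisöllisesti tuotettu, avoin "
        "luokittelujärjestelmä, jonka avulla internet-käyttäjät voivat luokitella "
        "sisältöä, kuten www-sivuja, valokuvia ja linkkejä."
    ),
}

def extract_answer(para, qa_index=0, answer_index=0):
    """Recover the answer span from the context via its character offset."""
    ans = para["qas"][qa_index]["answers"][answer_index]
    start = ans["answer_start"]
    return para["context"][start:start + len(ans["text"])]

recovered = extract_answer(paragraph)
```

This round-trip check (slice equals the stored answer text) is a common sanity test when loading SQuAD-style files, since off-by-one offsets are a frequent source of training bugs.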
MariaIsabel/FR_NFR_Spanish_requirements_classification
2022-07-22T07:19:16.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:es", "license:cc-by-4.0", "region:us" ]
MariaIsabel
null
null
0
3
2022-07-15T12:01:21
--- annotations_creators: - other language: - es language_creators: - other license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Spanish requirements labeled in functional and non-functional classes. size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification --- ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Published version of the dataset used for the paper 'Towards an automatic requirements classification in a new Spanish dataset' ### Languages Spanish ## Dataset Structure ### Data Fields Project: Project's identifier from which the requirements were obtained. Requirement: Description of the software requirement. Final label: Label of the requirement: F (functional requirement) and NF (non-functional requirement). ## Dataset Creation ### Initial Data Collection and Normalization This dataset was created from a collection of functional and non-functional requirements extracted from 13 final degree and 2 master’s projects carried out at the University of A Coruna. It consists of 300 functional and 89 non-functional requirements. ## Additional Information ### Citation Information https://doi.org/10.5281/zenodo.6556541
1,320
[ [ -0.029754638671875, -0.00934600830078125, -0.006439208984375, 0.0252532958984375, 0.006481170654296875, -0.0036773681640625, 0.00737762451171875, -0.034576416015625, 0.0250244140625, 0.04779052734375, -0.046722412109375, -0.062347412109375, -0.030426025390625, ...
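The three fields documented in the requirements card above (`Project`, `Requirement`, `Final label`) map directly onto a two-class text-classification setup. A minimal sketch with a hypothetical inline sample standing in for the real file (the actual dataset has 300 F and 89 NF rows):

```python
import csv
import io
from collections import Counter

# Inline stand-in rows using the column names documented in the card;
# the requirement texts here are invented examples, not real dataset rows.
SAMPLE = """Project,Requirement,Final label
P01,El sistema debe permitir el registro de usuarios,F
P01,El sistema debe responder en menos de 2 segundos,NF
P02,El sistema debe exportar informes en PDF,F
"""

# Count class frequencies: useful for spotting the F/NF imbalance noted in the card.
counts = Counter(row["Final label"] for row in csv.DictReader(io.StringIO(SAMPLE)))
```

With the real data the same one-liner would report the 300/89 split, a roughly 3:1 imbalance worth accounting for (e.g. via class weights) when training a classifier.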
rajistics/indian_food_images
2022-08-04T17:58:49.000Z
[ "task_categories:image-classification", "region:us" ]
rajistics
null
null
0
3
2022-07-15T14:40:09
--- task_categories: - image-classification --- Source of dataset: [Kaggle](https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification) This dataset contains images of food in 20 different classes, several of which are Indian dishes. All the images were extracted from Google. There are few images per class, so data augmentation and transfer learning are best suited here. Classes of the model: "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa"
660
[ [ -0.0173797607421875, -0.045013427734375, -0.00616455078125, -0.00687408447265625, 0.01009368896484375, 0.0023937225341796875, 0.01070404052734375, -0.031707763671875, 0.001434326171875, 0.0287017822265625, -0.01322174072265625, -0.035858154296875, -0.064453125, ...
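The 20 class names listed in the food-classification card above are the only label metadata the dataset provides, so an explicit label mapping is needed before training. A minimal sketch; the id assignment (positional order as listed in the card) is our own choice, not prescribed by the dataset:

```python
# The 20 class names copied from the card, in the order they are listed.
CLASSES = [
    "burger", "butter_naan", "chai", "chapati", "chole_bhature",
    "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi",
    "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos",
    "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa",
]

# Forward and inverse mappings between class names and integer ids.
label2id = {name: i for i, name in enumerate(CLASSES)}
id2label = {i: name for name, i in label2id.items()}
```

Image-classification training setups (including transfer-learning ones, as the card recommends) typically want exactly this `label2id`/`id2label` pair to wire class names to logits.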
Maxmioti/GDRP-fines
2022-07-17T10:03:34.000Z
[ "license:other", "region:us" ]
Maxmioti
null
null
0
3
2022-07-17T09:57:46
--- license: other --- Open-source dataset from a Kaggle competition: https://www.kaggle.com/datasets/andreibuliga1/gdpr-fines-20182020-updated-23012021 GDPR-fines is a dataset with summaries of GDPR cases against companies that were fined between 2018 and 2021. You will find the case summary plus the articles violated in each case (the 3 most important ones, plus an "Others" label regrouping the rest of the articles). Raw text and lemmatized text are available, plus multi-labels.
445
[ [ -0.031280517578125, -0.0628662109375, 0.031646728515625, 0.00899505615234375, -0.0095062255859375, -0.016998291015625, 0.00531768798828125, -0.037139892578125, 0.01369476318359375, 0.0687255859375, -0.05426025390625, -0.0310516357421875, -0.03485107421875, 0...
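The card describes a multi-label setup: the three most important violated articles plus an "Others" bucket. A sketch of multi-hot encoding such labels; the article names below are placeholders, not the dataset's actual label set.

```python
# Placeholder label set: three article labels plus the "Others" bucket.
labels = ["Article-1", "Article-2", "Article-3", "Others"]

def multi_hot(violated):
    """Encode one case's list of violated articles as a 0/1 vector over `labels`."""
    return [1 if lab in violated else 0 for lab in labels]

vec = multi_hot(["Article-1", "Others"])
```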
biglam/lancaster_newsbooks
2022-08-18T16:03:54.000Z
[ "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "newsbooks", "1654", "lancaster", "oxford text", "region:us" ]
biglam
This corpus consists of two collections of seventeenth-century English "newsbooks". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions was in both cases funded by the British Academy. The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. This is version 1.0 of the corpus, released April 2007; it supersedes earlier versions circulated informally. For more information about the corpus, see www.ling.lancs.ac.uk/newsbooks
@misc{20.500.12024/2531, title = {The Lancaster Newsbooks Corpus}, author = {Thomason, George, d. 1666}, url = {http://hdl.handle.net/20.500.12024/2531}, note = {Oxford Text Archive}, copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.}, year = {2005} }
1
3
2022-07-19T19:48:58
--- annotations_creators: - no-annotation paperswithcode_id: null language: - en language_creators: - expert-generated license: - cc-by-sa-3.0 multilinguality: - monolingual pretty_name: Lancaster Newsbooks size_categories: - n<1K source_datasets: - original tags: - newsbooks - '1654' - lancaster - oxford text task_categories: [] task_ids: [] --- # Dataset Card for lancaster_newsbooks ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** Tony McEnery ### Dataset Summary This corpus consists of two collections of seventeenth-century English "newsbooks". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions was in both cases funded by the British Academy. 
The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. This is version 1.0 of the corpus, released April 2007; it supersedes earlier versions circulated informally. For more information about the corpus, see www.ling.lancs.ac.uk/newsbooks ### Supported Tasks and Leaderboards `text-classification`: This dataset can be used to augment existing datasets to find stylistic differences between texts from different time periods. ### Languages The language in this dataset is English from 1654. The associated BCP-47 code is `en-GB`. ## Dataset Structure ### Data Instances ``` { 'id': 'PerfAcc170', 'text': "Another late fight in Scotland, betwixt Col. Morgan and the Highlanders; with the number that were slain and taken Prisoners. The removing of Lieut. Col. John Lilburn from the Tower of London. The readiness of our Fleet for new action, though Peace be agreed on with Holland and Denmark. The taking of several more Prizes at sea. An Order of the Commissioners for the Trial and Approbation of public Preachers. Several proceedings of His Highness the Lord Protector and his Council, and another Ordinance touching the adjourning of the Term. 
Together with variety of choice Intelligence from several Foreign parts. From Wednesday APRIL 5 TO Wednesday April 12. 1654. Many Addresses were made to his Highness the Lord Protector, in the name of the City and County of York, and other places, wherein they acknowledge the great blessing of God to this Nation, that they have so great, so good and able a Protector. This day the Sessions began in the Old Bailey, and one of those that committed the late Robbery on Black-Heath, being called to his Trial, he refused to plead; but more hereafter. This evening about 9 of the Clock, the Dutch Ambassadors signed and sealed the Ratification of the Articles of Peace so long spoken of; so did likewise the Commissioners appointed to treat with them by his Highness the Lord Protector. Paris April 11, 1654. The Cardinal de Retz being removed from Vincennes by the Marshal de la Mesteray, is now safe arrived at Nantes, and put into the Castle. The Court Emissaries give out that he is not to be long there, but in a few days to be set at liberty, only that his Majesty desireth satisfaction upon some certain points, although the main drift is to make him surrender his place of Archbishop of this City. 
The Commissioners of Languedoc cannot yet prevail in anything upon their Complaints, but are like the Commissioners of Catalonia, who hitherto have prevailed no further than to receive many fair words, but nothing effectual, the main work now in hand, is to find monies speedily for the setting forth of the Army, that they may be in the field as soon as may be, and to that end the Partisans are not wanting to find out new ways for exacting of monies, preferring large sums to be put into the King's Coffers, the difficulty lieth only in the effecting of it, by reason that the Country is in most places so exhausted of monies, that they are scarce able to live: The design for the King's Coronation is now on foot again, and if I am rightly informed, it will be done about the middle of May next, which being done, his Majesty shall go upon the borders and down to Picardy to forward his Army in their Action, so much the rather, by reason that the Prince of Conde, whom we hear was last week at Valenciennes, and then taking a view of his Army, is returned to Bruxels, there to confer with the Archduke Leopoldus for to obtain money and other necessaries for the march of his Army, that so they may fall to action as soon as the weather and season will give them leave, his Lady and son are still at Rocroy, where they are expecting some alteration to their present condition. The Earl of Harcourt hath not yet received any answer from the Court upon those proposals which he lately sent to the Court. We have news, that the Duke Francis hath at last accepted the command of his Brother the Duke of Lorrain's Army, and is expected there in a few days, which our Cardinal doth very well relish. The forces that were in the Country of Liege are now marching homewards, and are to be quartered in Lorrain. 
The great preparation for an Armado to go from Marseilles and Touloon, is much at a stand, only there are lately 5 men of War gone to Sea, and 3 more are to follow, but upon no design than to rob and plunder upon the sea, sparing scarce any they encounter, whether they be friends or foes. This day his Highness the Lord Protector and his Council, passed an Ordinance for adjourning of Easter Term, from and after the first Return thereof, called Quindena Pasch, until the first Return of Trinity Term, called Crastino Trinatatis. Dalkieth, April 3. Cap. Sherwin Commander of the Primrose, and Cap. Smith Commander of the Duchess, in their return from Orkney, took a Dutch vessel laden with French and Spanish Wines, linen Cloth, and other good commodities, bound for the West Indies; they sent her into Aberdeen. Some young Lairds and others purposing to glean a party of horse in Lothian, and repair to the enemy, are taken, and brought hither prisoners. Aberdeen, April 1. The Earl of Athol is come to Glencarn with about 700 horse and foot, Seaford and some new raised forces are daily expected to join with them. Glencarn with his whole force, consisting of 2000 horse and foot, is at Dingwel, two miles from Brahan, not undeserving the name of an Island, so that we hope to engage them there. In order whereunto Lieut. Col. Mitchell is marched towards Inverness with 9 companies of Foot, and Col. Morgan hath followed him with 5 troops of Col Rich his Regiment, and 4 troops of Dragoons; he intends to take Col. Tomlinson's Regiment, which is in his way, and to draw 5 companies of Foot out of Inverness. From Cows in the Isle of Wight, April 6. A private man of War hath, about two days since, taken and brought in hither two French vessels, one of which is laden with Salt, the other hath but little except ballast; Our Fleet is for the most part near St. Helens point and the rest as the Spits head, being in all near 100 sail, gallant ships, and bravely accommodated. 
One of our Frigates hath taken a Holland ship, and carried her to Portsmouth; she hath in her 8 Bales of Paper, and some small quantity of Indico. Many ships that were here, went away yesterday morning towards the Downs; and several Merchants' ships are at present here in this road, being detained by contrary winds; they expect some favourable Easterly gales, that so they may proceed on their intended voyages. Deal, April 7. A man of War of ours is this morning gone for Holland, to get the Ratification of the Peace made with them, and an Express from the Dutch Ambassador, touching the Agreement. Most part of the ships which remained in this Road, are gone up into the River of Thames; here is only some few left that are bound to the Southward. A Fleet consisting of about 40 or 50 sail of ships, great and small, passed by this place, which we suppose to be the Dunkirk fleet bound for London. Because many will not give credit to the Agreement of Peace between the Commonwealths of England and Holland, (though their Unbelief proceeds from several causes, some prejudicately fearing the worst, and others wishing and desiring rather than the Fountain of Blood may still be open) We can, and do assure you, That the Articles (as we said before) were signed and sealed by the Commissioners on both sides, on Wednesday night last, and within 14 days are to be signed and sealed by the Lord Protector, and the States of Holland, and then to publicly proclaimed and published, both in England and Holland in one day. The Agreement with Denmark is also taken in upon the Articles: And for satisfaction of the loss which our English Merchants sustained by that King's command, whose demands amount to about 150000l. it is referred to four Merchants, two whereof to be English, and the other two Dutch; which four Merchants shall have absolute power to determine those demands within the space of twenty days; the place where they are to sit, is Guildhall. 
As touching the business of Amboyna, it is referred to eight Commissioners, who have six months time to agree thereon, and in case they agree not, then Umpires are nominated to determine that business. Let those that delight themselves in blood, have blood to drink, for they are worthy. From Legorn, March 23. thus. This week in the sight of this City was a sore fight between two ships at Sea, the one Dutchman of War of 32 guns, and the other an English ship called the Expedition, who came from Zant with Currans; the fight lasted 6 hours, but night having parted them, both ships sunk; most of the men were saved, but nothing else, though the fight was near the shore. It is advertised from Cullen, That the Treaty between that Elector and the Spanish Commissioners, is brought to perfection, and signed, which is, That both French and Spanish shall have free passage through the Country of Liege, not committing any acts of hostility upon each other; and the Spaniards in point of satisfaction for the losses received from them and the Lorrainers, shall pay to the said Elector 200000 Rixdollars out of the Duke of Lorrain's estate, and for security of performance, the Lordship of Kerpen, and another in Gulick shall be put into his hands until full payment. From Poland thus. The General of the Cossacks hath delivered up three very considerable places to the Muscovite, and caused himself to be re baptized after the Muscovia manner, which is so ill resented by all sorts of people in that Country, that the Commanders sent to the King of Poland, That if he pleased to send them a general pardon for what they had done, and the rest of the Army, they will return with the major part of the Army into his Majesty's service; which hath so incensed the General, that having caused them to be apprehended he hath made each of them shorter by the head, which hath caused much heart burning among the people. 
Whereas many abuses and corruptions are crept into the ordinary course and administration of Justice, both in Law and Equity, the reformation whereof hath not yet been attained; Out of a tender care and desire that so necessary and good a work may at length be brought to effect, it is held convenient that so necessary and good a work may at length be brought to effect, it is held convenient that so necessary and good a work may at length be brought to effect, it is held convenient and necessary to adjourn part of the next Term of Easter; be if therefore Ordained by his Highness the Lord Protector, by and with the consent of his Council, That part of the said Term of Easter now next coming be adjourned, that is to say, from and after the first Return, called Quindena Pasch, unto the last Return of the said Easter Term, called Crastino Ascensionis; And all and every person or persons, which have cause, or commandment to appear in any of the Courts at Westminster, in or at any day or time, from and after the said Return, called Quindena Pasch, may tarry at their dwellings, or where their business shall lie, without resorting to any of the said Courts for that Cause, until the said last Return, called Crastino Ascensionis, without danger or forfeiture, penalty or contempt to be in that behalf. And be it also ordained by the Authority aforesaid, That Writs of Adjournment shall be directed to the Justices of the said Courts, and Barons of the Exchequer, giving them authority to adjourn the said part of the said Term of Easter, as aforesaid, that is to say, from and after the said first Return called Quindena Pasch, until the said last Return of the said Term, called Crastino Ascensionis, as before is said, and the said adjournment shall be made, as aforesaid. 
And be it further Ordained, That all Matters, Causes and Suits, depending in any of the said Courts, shall have continuance, and the parties shall have day, from the day of the said Adjournment, until the said Return of Crastino Ascensionis, as is aforesaid; and the Lord's Commissioners of the Great Seal are required to issue forth Writs accordingly. And be it further Ordained, That a former Ordinance of the sixth day of this instant April, for the Adjourning of part of the said Term, until the first Return of Trinity Term next, called Crastino Trinitatis, be from henceforth Repealed and void. And it is lastly Ordained by the Authority aforesaid, That the Sheriffs of London and Middlesex, and all other Sheriffs both in England and Wales, do forthwith proclaim and publish this Ordinance in the chief Market Towns and usual places within their several and respective Counties. Lieutenant Colonel John Lilburn being said to have again attempted something against the State, is removed from the Tower to be prisoner in some more remote place. The titular King of Scots is still at Paris, and of late something more merry than ordinary. The Deputies for Languedoc telling him, that if there were a Peace concluded with England, it would be well for all the Protestants in France; He made answer that he was glad of it, for it would then be the better for himself. This day was the Gaol delivery; three were hanged, one whereof died most desperately, and going up the Cart, drank a health to the Devil's Majesty: One was pressed last Saturday, and being afterwards heard to groan, was carried down to the Press-yard again to have the execution dispatched. 
The Commissioners for Approbation of public Ministers, sate at Whitehall, and divers Certificates were presented unto them in behalf of several particular persons, for approbation; and in regard that none hereafter should out of carelessness of partiality set their hands to a Certificate for any person that hereafter should out of carelessness or partiality let their hands to a Certificate for any person that hereafter may be found unworthy to be admitted, and so become prejudicial to the Church of Christ, and frustrate the intentions of our Governors which made this Ordinance; the said Commissioners do earnestly beseech all whom it may concern (in the bowels of Christ) as they tender the honour of the great God himself, whose servants we all are, the prejudice of the souls of his people purchased by the blood of his Son, the advancement and propagation of his Gospel, through all the parts of this Land and Nation, whereunto we belong, so to lend assistance both of their fervent prayers, and due informations, that thereby the work may be carried on more prosperously, and the Commissioners more encouraged to attend it. Signed in the name, and at the request of the Commissioners for Approbation of public Preachers. By Francis Rouse, Io. Arrowsmith. William Goss. Stephen Marshall. The last Letters from Edinburgh speak of another Engagement betwixt Col. Morgan, and the Enemy; but they tell us not the particulars, only they say, that the Enemy is once more dispersed, and driven further up into the mountains, with the loss of about 200 men. The peace with Holland being concluded (as you heard before) our Merchants are lading of goods on shipboard, as fast as Lighters can be gotten to carry them where the ships ride at anchor. We likewise hear of the like preparations in Holland for transporting of goods of several sorts hither. 
And now all the rest of Europe are at a stand, or at leastwise stand gazing upon us, and begin to cast about with themselves, what action may be great and considerable enough for to be undertaken next by those great Fleets, which are as ready for action as any opportunity can be to offer itself. How they will be disposed of Time will discover. London, Printed by E. Alsop 1654.", 'title': 'A Perfect Account, Issue 170'} ``` ### Data Fields ``` { "id": Unique identifier for that data point("string"), "text": Text in that datapoint("string"), "title": The title of the news article("string") } ``` ### Data Splits Train: 303 ## Dataset Creation ### Curation Rationale The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. ### Source Data #### Initial Data Collection and Normalization This corpus was created by the Department of Linguistics and English Language, Lancaster University. #### Who are the source language producers? The original data was human-generated from existing newsbooks. ### Annotations #### Annotation process [N/A] #### Who are the annotators? 
[N/A] ### Personal and Sensitive Information None, since this dataset is from 1654 ## Considerations for Using the Data ### Social Impact of Dataset This dataset provides an insight into the news and social systems from 17th century England ### Discussion of Biases The dataset is from the 17th century and some articles might reflect social biases of the time in terms of sexuality, gender, race, etc. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators This corpus was created by the Department of Linguistics and English Language, Lancaster University. Project leader: Tony McEnery Corpus editor: Andrew Hardie ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License ### Citation Information @misc{20.500.12024/2531, title = {The Lancaster Newsbooks Corpus}, author = {Thomason, George, d. 1666}, url = {http://hdl.handle.net/20.500.12024/2531}, note = {Oxford Text Archive}, copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.}, year = {2005} }
21,212
[ [ -0.043548583984375, -0.0282745361328125, 0.033660888671875, 0.02008056640625, -0.015106201171875, -0.0177154541015625, 0.0021686553955078125, -0.0511474609375, 0.0845947265625, 0.059539794921875, -0.0199127197265625, -0.0318603515625, -0.03997802734375, -0.0...
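A sketch of working with the `{id, text, title}` schema from the Data Fields section, for example to partition the corpus back into its two collections by title. The second record below is an invented stand-in, not an actual issue.

```python
# One abbreviated record matching the card's example and one invented stand-in.
records = [
    {"id": "PerfAcc170", "text": "Another late fight in Scotland...", "title": "A Perfect Account, Issue 170"},
    {"id": "FumigosusX", "text": "...", "title": "Mercurius Fumigosus, Issue X"},
]

# Split by collection using the title field.
fumigosus = [r for r in records if "Fumigosus" in r["title"]]
newsbooks_1654 = [r for r in records if "Fumigosus" not in r["title"]]
```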
sadrasabouri/ShahNegar
2022-10-21T11:54:05.000Z
[ "task_categories:image-to-text", "task_categories:text-to-image", "task_ids:image-captioning", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "r...
sadrasabouri
null
null
4
3
2022-07-20T05:13:00
--- annotations_creators: - machine-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - image-to-text - text-to-image task_ids: - image-captioning pretty_name: ShahNegar --- # ShahNegar (A Plotted version of The Shahnameh) This dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka [craiyon](https://www.craiyon.com/)). You can use this dataset using the code below: ```python from datasets import load_dataset dataset = load_dataset("sadrasabouri/ShahNegar") ``` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com) ### Dataset Summary This dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. 
Images corresponding to the same paragraphs have the same `id` field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. In the end, we reached more than 30K 256 × 256 images. ### Supported Tasks and Leaderboards The main purpose of making this dataset open source is its artistic value, but it can also be used for the below tasks: + text-to-image + image-to-text (image captioning) ### Languages The Shahnameh was generally written in Farsi (Persian), but the translated version we used for this dataset - [satoor](https://www.sattor.com/english/Shahnameh.pdf) - was completely in English, with no alignments for the corresponding Farsi poem. We are planning to add another field to dataset entries, the corresponding Farsi poem, as soon as possible. ## Dataset Structure ### Data Fields Here is an instance of our dataset: ```json { "image": <PIL Image Bytes>, "id": 0, "text": "He took up his abode in the mountains, and clad himself and his people in tiger-skins, and from him sprang all kindly nurture and the arts of clothing, till then unknown." } ``` + `image`: the image for the given text. + `id`: the id for the text (**Not for the image**). + `text`: the English text for the image. ### Data Splits This dataset has only one split (`train`). ## Dataset Creation The translated version of the Shahnameh was generally derived from the [satoor](https://www.sattor.com/english/Shahnameh.pdf) website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and gave each sentence to the DALL-E mini model through its online API. It generated nine images for each sentence. After annotation, we came up with more than 30,000 images. ### Annotations #### Annotation process Through the process of image generation, we noticed a bias in the DALL-E models towards the word `iran`. 
The model was biased so that each sentence containing this word would produce pictures of Iranian political figures, which were usually totally irrelevant. The annotation process mainly focused on dealing with these pictures: we removed images that seemed harmful to those figures and/or irrelevant to the context. #### Who are the annotators? Mahsa Namdar and Sadra Sabouri were the annotators of this dataset. ### Personal and Sensitive Information Since the textual data is easily downloadable and the images were generated by an image generation model, there shouldn't be any personal information in this dataset. If you do find something harmful or violating anyone's personal information, please let us know and we will take proper action as soon as possible. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is mainly released for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our main contribution. The dataset is not only useful for this purpose but can also serve as a dataset for image-to-text and text-to-image tasks. ### Discussion of Biases The dataset's possible biases would come from the biases of DALL-E mini; checking the dataset entries is actually a good way to find biases in that model. One worth mentioning in this work is the model's bias for the word `iran`, which nearly always comes up with images of political figures of this country. ### Other Known Limitations There are constant debates in the literature about the limitations of machine-generated datasets. Some believe that since today's models are not perfect - and neither is their output - it wouldn't be a good idea to use these artificially generated datasets as input to new models. They suggest that by doing so we are actually capping our accuracy at that of the model which produced the primary dataset.
## Additional Information ### Dataset Curators + Emad Fatemizadeh: The general idea of generating a graphical version of Farsi poems was first introduced by him. + Sadra Sabouri: He found a translated version of the Shahnameh, extracted and tokenized poems from it, and used the online DALL-E mini API to generate images from the poems. + Mahsa Namdar: She carried out the annotation post-processing of the data. ### Licensing Information MIT ### Citation Information [More Information Needed] ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
6,521
[ [ -0.0274505615234375, -0.044921875, 0.0256805419921875, 0.00716400146484375, -0.03558349609375, -0.01403045654296875, -0.0301971435546875, -0.03759765625, 0.00946044921875, 0.057708740234375, -0.04638671875, -0.06304931640625, -0.040313720703125, 0.0252990722...
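Because images generated from the same paragraph share an `id` (up to nine per paragraph), a flat stream of entries can be regrouped by paragraph. A sketch with illustrative entries, image bytes omitted:

```python
from collections import defaultdict

# Illustrative entries following the card's schema; `image` bytes omitted.
entries = [
    {"id": 0, "text": "He took up his abode in the mountains...", "image": None},
    {"id": 0, "text": "He took up his abode in the mountains...", "image": None},
    {"id": 1, "text": "...", "image": None},
]

# Group all generated images belonging to the same paragraph.
by_paragraph = defaultdict(list)
for e in entries:
    by_paragraph[e["id"]].append(e)
```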
tokeron/Piyyut
2023-04-08T10:36:57.000Z
[ "task_categories:text-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:heb", "license:afl-3.0", "metaphor-detection", "region:us" ]
tokeron
null
null
0
3
2022-07-20T09:01:23
--- license: afl-3.0 language: - heb multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification tags: - metaphor-detection viewer: true ---
223
[ [ -0.00897979736328125, -0.01299285888671875, 0.045623779296875, 0.03460693359375, -0.01352691650390625, 0.0164794921875, 0.0252227783203125, 0.00353240966796875, 0.040496826171875, 0.051788330078125, -0.045501708984375, -0.016632080078125, -0.049652099609375, ...
ydmeira/segment-pokemon
2022-07-23T10:28:38.000Z
[ "region:us" ]
ydmeira
null
null
1
3
2022-07-23T10:25:00
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
bigscience/evaluation-results
2023-05-28T00:13:53.000Z
[ "task_categories:other", "size_categories:100M<n<1B", "region:us" ]
bigscience
@misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} }
@article{scao2022bloom, title={BLOOM: A 176B-Parameter Open-Access Multilingual Language Model}, author={Scao, Teven Le and Fan, Angela and Akiki, Christopher and Pavlick, Ellie and Ili{\'c}, Suzana and Hesslow, Daniel and Castagn{\'e}, Roman and Luccioni, Alexandra Sasha and Yvon, Fran{\c{c}}ois and Gall{\'e}, Matthias and others}, journal={arXiv preprint arXiv:2211.05100}, year={2022} }
8
3
2022-08-01T18:35:58
--- pretty_name: evaluation-results size_categories: - 100M<n<1B task_categories: - other --- # BigScience BLOOM Evaluation Results This repository contains evaluation results & original predictions of BLOOM & friends. ## Usage You can load numeric results via: ```python from datasets import load_dataset ds = load_dataset("bigscience/evaluation-results", "bloom") ``` If it takes too long, it may be faster to clone the repository and load the data from disk: ```python !git clone https://huggingface.co/datasets/bigscience/evaluation-results ds = load_dataset("evaluation-results", "bloom") ``` For example generations (.jsonl files), you need to manually browse the repository. ## Structure For `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation_frameworks the structure is: `model_name > evaluation_framework > checkpoint_type > dataset_name > data` ## Evaluation Procedure - `bigsciencelmevalharness` files were created using the below: - https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291 - https://github.com/bigscience-workshop/lm-evaluation-harness - `lmevalharness` files were created using the below: - https://github.com/bigscience-workshop/Megatron-DeepSpeed - https://github.com/EleutherAI/lm-evaluation-harness - `codeeval` files were created using the HumanEval code dataset with the below: - https://github.com/loubnabnl/bloom-code-evaluation
1,424
[ [ -0.025115966796875, -0.027740478515625, 0.049835205078125, 0.031463623046875, 0.0032329559326171875, 0.006626129150390625, -0.01053619384765625, -0.023040771484375, 0.0252227783203125, 0.02789306640625, -0.052520751953125, -0.037872314453125, -0.0310516357421875...
VanessaSchenkel/handmade-dataset
2022-08-06T22:11:34.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:n<1K", "source_datasets:original", "language:en", "language:pt", "license:afl-3.0", "region:us" ]
VanessaSchenkel
null
null
1
3
2022-08-06T22:02:15
--- annotations_creators: - found language: - en - pt language_creators: - found license: - afl-3.0 multilinguality: - translation pretty_name: VanessaSchenkel/handmade-dataset size_categories: - n<1K source_datasets: - original tags: [] task_categories: - translation task_ids: [] --- Dataset with sentences about professions; half of the translations use feminine forms and half masculine forms. How to use it: ``` from datasets import load_dataset remote_dataset = load_dataset("VanessaSchenkel/handmade-dataset", field="data") remote_dataset ``` Output: ``` DatasetDict({ train: Dataset({ features: ['id', 'translation'], num_rows: 388 }) }) ``` Example: ``` remote_dataset["train"][5] ``` Output: ``` {'id': '5', 'translation': {'english': 'the postman finished her work .', 'portuguese': 'A carteira terminou seu trabalho .'}} ```
876
[ [ 0.005218505859375, -0.0345458984375, 0.0013303756713867188, 0.017059326171875, -0.0287933349609375, -0.024871826171875, -0.031036376953125, 0.0013532638549804688, 0.011749267578125, 0.053436279296875, -0.0714111328125, -0.0584716796875, -0.0435791015625, 0.0...
Chr0my/public_flickr_photos_license_1
2022-08-08T20:39:40.000Z
[ "license:cc-by-nc-sa-3.0", "region:us" ]
Chr0my
null
null
2
3
2022-08-08T20:27:28
--- license: cc-by-nc-sa-3.0 --- 119,893,266 photos from Flickr (https://www.flickr.com/creativecommons/by-nc-sa-2.0/) --- all photos are under license id = 1, name = Attribution-NonCommercial-ShareAlike License, url = https://creativecommons.org/licenses/by-nc-sa/2.0/
262
[ [ -0.045013427734375, 0.0017824172973632812, 0.05731201171875, 0.048828125, -0.05615234375, -0.024993896484375, 0.040283203125, -0.04376220703125, 0.00740814208984375, 0.02685546875, -0.033172607421875, -0.02099609375, -0.044830322265625, 0.0252685546875, ...
anandu/eurostat_demo
2022-08-13T19:38:55.000Z
[ "region:us" ]
anandu
null
null
0
3
2022-08-13T19:38:32
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.057220458984375, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.00507354736328125, 0.0513916015625, 0.0169830322265625, -0.052032470703125, -0.014984130859375, -0.060455322265625, 0.037...
Gabriel/pubmed_swe
2022-10-29T11:54:25.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/pubmed", "language:sv", "license:other", "conditional-text-generation", "region:us" ]
Gabriel
null
null
0
3
2022-08-14T14:06:26
--- language: - sv license: - other size_categories: - 10K<n<100K source_datasets: - https://github.com/huggingface/datasets/tree/master/datasets/pubmed task_categories: - summarization - text2text-generation task_ids: [] tags: - conditional-text-generation --- # Dataset Card for Swedish pubmed Dataset The Swedish pubmed dataset is a machine-translated version of the English original, created to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/pubmed ### Data Fields - `document`: a string containing the body of the paper - `summary`: a string containing the abstract of the paper ### Data Splits The Swedish pubmed dataset follows the same splits as the original English version and has a single split: _train_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 90,000 |
999
[ [ -0.0230865478515625, -0.006198883056640625, 0.024658203125, 0.00038743019104003906, -0.06207275390625, 0.0000546574592590332, -0.0014715194702148438, -0.0379638671875, 0.0333251953125, 0.04840087890625, -0.04425048828125, -0.057769775390625, -0.05181884765625, ...
hugginglearners/data-science-job-salaries
2022-08-17T18:42:40.000Z
[ "license:cc0-1.0", "region:us" ]
hugginglearners
null
null
2
3
2022-08-15T00:00:27
--- license: - cc0-1.0 kaggle_id: ruchi798/data-science-job-salaries --- # Dataset Card for Data Science Job Salaries ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/ruchi798/data-science-job-salaries - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Content | Column | Description | |--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | work_year | The year the salary was paid. 
| | experience_level | The experience level in the job during the year, with the following possible values: EN Entry-level / Junior, MI Mid-level / Intermediate, SE Senior-level / Expert, EX Executive-level / Director | | employment_type | The type of employment for the role: PT Part-time, FT Full-time, CT Contract, FL Freelance | | job_title | The role worked in during the year. | | salary | The total gross salary amount paid. | | salary_currency | The currency of the salary paid, as an ISO 4217 currency code. | | salary_in_usd | The salary in USD (FX rate divided by avg. USD rate for the respective year via fxdata.foorilla.com). | | employee_residence | The employee's primary country of residence during the work year, as an ISO 3166 country code. | | remote_ratio | The overall amount of work done remotely; possible values: 0 no remote work (less than 20%), 50 partially remote, 100 fully remote (more than 80%) | | company_location | The country of the employer's main office or contracting branch, as an ISO 3166 country code. | | company_size | The average number of people that worked for the company during the year: S fewer than 50 employees (small), M 50 to 250 employees (medium), L more than 250 employees (large) | ### Acknowledgements I'd like to thank ai-jobs.net Salaries for aggregating this data! ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@ruchi798](https://kaggle.com/ruchi798) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
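The column schema above is enough to sketch a typical query against this dataset, for example averaging `salary_in_usd` over fully remote senior roles. The rows below are hypothetical illustrations of the documented schema, not actual entries from the Kaggle data.

```python
# Hypothetical rows following the documented column schema; values are illustrative only.
rows = [
    {"work_year": 2021, "experience_level": "SE", "employment_type": "FT",
     "job_title": "Data Scientist", "salary_in_usd": 150000, "remote_ratio": 100,
     "company_location": "US", "company_size": "L"},
    {"work_year": 2021, "experience_level": "MI", "employment_type": "FT",
     "job_title": "Data Analyst", "salary_in_usd": 80000, "remote_ratio": 50,
     "company_location": "DE", "company_size": "M"},
    {"work_year": 2022, "experience_level": "SE", "employment_type": "CT",
     "job_title": "ML Engineer", "salary_in_usd": 170000, "remote_ratio": 100,
     "company_location": "GB", "company_size": "S"},
]

def mean_salary_usd(rows, experience_level="SE", remote_ratio=100):
    """Average salary_in_usd over rows matching the given experience level and remote ratio."""
    matching = [r["salary_in_usd"] for r in rows
                if r["experience_level"] == experience_level
                and r["remote_ratio"] == remote_ratio]
    return sum(matching) / len(matching) if matching else None

print(mean_salary_usd(rows))  # 160000.0
```

The same filter-then-aggregate pattern applies directly once the real rows are loaded.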
5,488
[ [ -0.0096588134765625, -0.0121307373046875, 0.0087432861328125, 0.0113067626953125, -0.02215576171875, -0.005153656005859375, -0.002834320068359375, -0.04156494140625, 0.046661376953125, 0.033203125, -0.0601806640625, -0.054656982421875, -0.048126220703125, 0....
Hobson/surname-nationality
2022-12-29T23:14:09.000Z
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:named-entity-recognition", "size_categories:List[str]", "source_datasets:List[str]", "license:mit", "multilingual", "RNN", "name", "tagging", "nlp", "transliterated", "character-level", "text-tagging",...
Hobson
null
null
2
3
2022-08-15T03:52:58
--- license: mit size_categories: List[str] source_datasets: List[str] task_categories: - token-classification - text-classification task_ids: - named-entity-recognition pretty_name: Popular Surname Nationality Mapping tags: - multilingual - RNN - name - tagging - nlp - transliterated - character-level - text-tagging - bias - classification - language model - surname - ethnicity - multilabel classification - natural language --- # Popular Surname Nationality Mapping Sample of popular surnames for 30+ countries labeled with nationality (language)
555
[ [ -0.02264404296875, 0.001987457275390625, -0.00601959228515625, 0.047607421875, -0.0198822021484375, 0.01934814453125, -0.012939453125, -0.0046844482421875, 0.033355712890625, 0.065185546875, -0.03570556640625, -0.0797119140625, -0.052154541015625, 0.00005906...
Luka-Wang/COCO
2022-08-18T07:36:16.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
Luka-Wang
null
null
0
3
2022-08-17T13:40:37
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction --- # Dataset Card for [COCO] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### 
Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-scuwyh2000](https://github.com/scuwyh2000) for adding this dataset.
3,029
[ [ -0.032318115234375, -0.0396728515625, 0.005992889404296875, 0.03289794921875, -0.01641845703125, 0.0252532958984375, -0.025115966796875, -0.034576416015625, 0.04827880859375, 0.04681396484375, -0.059722900390625, -0.08355712890625, -0.05029296875, 0.00732040...
acul3/KoPI-CC_News
2023-03-03T07:48:00.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "source_datasets:original", "language:id", "license:cc", "region:us" ]
acul3
null
null
1
3
2022-08-17T14:50:15
--- annotations_creators: - no-annotation language_creators: - found language: - id license: cc source_datasets: - original task_categories: - text-generation task_ids: - language-modeling paperswithcode_id: oscar --- ### Dataset Summary KoPI (Korpus Perayapan Indonesia)-CC_News is an Indonesian-only extract of the CC-NEWS Common Crawl snapshots from 2016 to July 2022. Each snapshot was extracted using warcio and trafilatura and filtered using fastText; more details coming soon.
459
[ [ -0.02020263671875, -0.032318115234375, 0.0221099853515625, 0.0163116455078125, -0.053314208984375, -0.01087188720703125, -0.00235748291015625, -0.019134521484375, 0.035125732421875, 0.06488037109375, -0.06640625, -0.0535888671875, -0.034149169921875, 0.01126...
BigBang/galaxyzoo-decals
2022-08-29T18:03:24.000Z
[ "license:cc-by-4.0", "region:us" ]
BigBang
null
null
0
3
2022-08-19T15:50:22
--- license: cc-by-4.0 --- # Galaxy Zoo DECaLS: Detailed Visual Morphology Measurements from Volunteers and Deep Learning for 314,000 Galaxies - https://github.com/mwalmsley/zoobot - https://zenodo.org/record/4573248 # Dataset Schema This schema describes the columns in the GZ DECaLS catalogues: `gz_decals_auto_posteriors`, `gz_decals_volunteers_1_and_2`, and `gz_decals_volunteers_5`. In all catalogues, galaxies are identified by their `iauname`. Galaxies are unique within a catalogue. `gz_decals_auto_posteriors` contains all galaxies with appropriate imaging and photometry in DECaLS DR5, while `gz_decals_volunteers_1_and_2` and `gz_decals_volunteers_5` contain subsets classified by volunteers in the respective campaigns. The columns reporting morphology measurements are named like `{some-question}_{an-answer}`. For example, for the first question, both volunteer catalogues include the following: | Column | Description | | ----------- | ----------- | | smooth-or-featured_total | Total number of volunteers who answered the "Smooth or Featured" question | | smooth-or-featured_smooth | Count of volunteers who responded "Smooth" to the "Smooth or Featured" question | | smooth-or-featured_featured-or-disk | Count of volunteers who responded "Featured or Disk", similarly | | smooth-or-featured_artifact | Count of volunteers who responded "Artifact", similarly | | smooth-or-featured_smooth_fraction | Fraction of volunteers who responded "Smooth" to the "Smooth or Featured" question, out of all responses (i.e. smooth count / total) | | smooth-or-featured_featured-or-disk_fraction | Fraction of volunteers who responded "Featured or Disk", similarly | | smooth-or-featured_artifact_fraction | Fraction of volunteers who responded "Artifact", similarly | The questions and answers are slightly different for `gz_decals_volunteers_1_and_2` than for `gz_decals_volunteers_5`. See the paper for more. 
The volunteer catalogues include `{question}_{answer}_debiased` columns which attempt to estimate what the vote fractions would be if the same galaxy were imaged at lower redshift. See the paper for more. Note that the debiased measurements are highly uncertain on an individual galaxy basis and therefore should be used with caution. Debiased estimates are only available for galaxies with 0.02<z<0.15, -21.5>M_r>-23, and at least 30 votes for the first question (`Smooth or Featured`) after volunteer weighting. The automated catalogue, `gz_decals_auto_posteriors`, includes predictions for all galaxies and all questions even when that question may not be appropriate (e.g. number of spiral arms for a smooth elliptical). To assess relevance, we include `{question}_proportion_volunteers_asked` columns showing the estimated fraction of volunteers that would have been asked each question (i.e. the product of the vote fractions for the preceding answers). We suggest a cut of `{question}_proportion_volunteers_asked` > 0.5 as a starting point. The automated catalogue does not include volunteer counts or totals (naturally). Each catalogue includes a pair of columns to warn where galaxies may have been classified using an inappropriately large field-of-view (due to incorrect radii measurements in the NSA, on which the field-of-view is calculated). We suggest excluding galaxies (<1%) with such warnings. | Column | Description | | ----------- | ----------- | | wrong_size_statistic | Mean distance from center of all pixels above double the 20th percentile (i.e. probable source pixels) | | wrong_size_warning | True if wrong_size_statistic > 161.0, our suggested starting cut. Approximately the mean distance of all pixels from center | For convenience, each catalogue includes the same set of basic astrophysical measurements copied from the NASA Sloan Atlas (NSA). Additional measurements can be added by crossmatching on `iauname` with the NSA. 
See [here](https://data.sdss.org/datamodel/files/ATLAS_DATA/ATLAS_MAJOR_VERSION/nsa.html) for the NSA schema. If you use these columns, you should cite the NSA. | Column | Description | | ----------- | ----------- | | ra | Right ascension (degrees) | | dec | Declination (degrees) | | iauname | Unique identifier listed in NSA v1.0.1 | | petro_theta | "Azimuthally-averaged SDSS-style Petrosian radius (derived from r band)" | | petro_th50 | "Azimuthally-averaged SDSS-style 50% light radius (r-band)" | | petro_th90 | "Azimuthally-averaged SDSS-style 90% light radius (r-band)" | | elpetro_absmag_r | "Absolute magnitude from elliptical Petrosian fluxes in rest-frame" in SDSS r | | sersic_nmgy_r | "Galactic-extinction corrected AB flux" in SDSS r | | redshift | "Heliocentric redshift" ("z" column in NSA) | | mag_r | 22.5 - 2.5 log10(sersic_nmgy_r). *Not* the same as the NSA mag column! | ``` @dataset{walmsley_mike_2020_4573248, author = {Walmsley, Mike and Lintott, Chris and Tobias, Geron and Kruk, Sandor J and Krawczyk, Coleman and Willett, Kyle and Bamford, Steven and Kelvin, Lee S and Fortson, Lucy and Gal, Yarin and Keel, William and Masters, Karen and Mehta, Vihang and Simmons, Brooke and Smethurst, Rebecca J and Smith, Lewis and Baeten, Elisabeth M L and Macmillan, Christine}, title = {{Galaxy Zoo DECaLS: Detailed Visual Morphology Measurements from Volunteers and Deep Learning for 314,000 Galaxies}}, month = dec, year = 2020, publisher = {Zenodo}, version = {0.0.2}, doi = {10.5281/zenodo.4573248}, url = {https://doi.org/10.5281/zenodo.4573248} } ```
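The `mag_r` formula and the suggested starting cuts above can be sketched in a few lines. The sample galaxy dict here is hypothetical; only the column names, the `mag_r` definition, and the `> 0.5` threshold come from the schema above.

```python
import math

def mag_r(sersic_nmgy_r):
    # mag_r = 22.5 - 2.5 log10(sersic_nmgy_r), as defined in the schema above
    return 22.5 - 2.5 * math.log10(sersic_nmgy_r)

def passes_suggested_cuts(galaxy, question="smooth-or-featured"):
    """Apply the two suggested starting cuts: drop wrong-size warnings and
    require that at least half the volunteers would have been asked the question."""
    if galaxy["wrong_size_warning"]:
        return False
    return galaxy[f"{question}_proportion_volunteers_asked"] > 0.5

# A made-up galaxy row with the documented column names.
galaxy = {"wrong_size_warning": False,
          "smooth-or-featured_proportion_volunteers_asked": 0.9,
          "sersic_nmgy_r": 100.0}
print(mag_r(galaxy["sersic_nmgy_r"]))       # 17.5
print(passes_suggested_cuts(galaxy))        # True
```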
6,035
[ [ -0.0684814453125, -0.0217437744140625, 0.035430908203125, 0.00394439697265625, -0.039337158203125, -0.0088653564453125, 0.007495880126953125, -0.044677734375, 0.06988525390625, 0.04302978515625, -0.06329345703125, -0.0660400390625, -0.035430908203125, 0.0115...
ariesutiono/entailment-bank-v3
2022-08-21T06:05:29.000Z
[ "license:cc-by-4.0", "region:us" ]
ariesutiono
null
null
0
3
2022-08-21T05:48:22
--- license: cc-by-4.0 --- # Entailment bank dataset This dataset raw source can be found at [allenai's Github](https://github.com/allenai/entailment_bank/). If you use this dataset, it is best to cite the original paper ``` @article{entalmentbank2021, title={Explaining Answers with Entailment Trees}, author={Dalvi, Bhavana and Jansen, Peter and Tafjord, Oyvind and Xie, Zhengnan and Smith, Hannah and Pipatanangkura, Leighanna and Clark, Peter}, journal={EMNLP}, year={2021} } ```
499
[ [ -0.006664276123046875, -0.07275390625, 0.033111572265625, -0.007305145263671875, -0.004405975341796875, -0.01203155517578125, 0.01629638671875, -0.0266876220703125, 0.032135009765625, 0.06329345703125, -0.040863037109375, -0.0265045166015625, -0.02392578125, ...
merkalo-ziri/qa_shreded
2022-08-26T01:27:18.000Z
[ "task_categories:question-answering", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:rus", "license:other", "region:us" ]
merkalo-ziri
null
null
0
3
2022-08-26T01:25:51
--- annotations_creators: - found language: - rus language_creators: - found license: - other multilinguality: - monolingual pretty_name: qa_main size_categories: - 1K<n<10K source_datasets: - original tags: [] task_categories: - question-answering task_ids: [] --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
2,702
[ [ -0.03265380859375, -0.03472900390625, 0.00994110107421875, 0.0190277099609375, -0.01482391357421875, 0.016937255859375, -0.022979736328125, -0.025665283203125, 0.045867919921875, 0.044097900390625, -0.0626220703125, -0.083251953125, -0.051544189453125, 0.004...
BDas/EnglishNLPDataset
2022-08-27T11:13:01.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", ...
BDas
The dataset, prepared in English, includes 10,000 test, 10,000 validation, and 80,000 training examples. The data consists of customer comments collected from e-commerce sites.
----EnglishNLPDataset----
0
3
2022-08-27T10:58:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: 'EnglishNLPDataset' --- # Dataset Card for "EnglishNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/BihterDass/EnglishTextClassificationDataset] - **Repository:** [https://github.com/BihterDass/EnglishTextClassificationDataset] - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Dataset Summary The dataset was compiled from user comments on e-commerce sites. It consists of 10,000 validation, 10,000 test, and 80,000 training examples. 
Data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### english-dataset-v1 - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Data Fields The data fields are the same among all splits. #### english-dataset-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
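The label encoding documented in the Data Fields section (positive = 2, natural = 1, negative = 0) is the kind of mapping downstream fine-tuning code needs; a minimal sketch:

```python
# Label mapping taken from the Data Fields section of the card above.
label2id = {"negative": 0, "natural": 1, "positive": 2}
id2label = {v: k for k, v in label2id.items()}

def decode(label_id):
    """Map a numeric class back to its string label."""
    return id2label[label_id]

print(decode(2))  # positive
```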
3,653
[ [ -0.03521728515625, -0.028076171875, -0.0093536376953125, 0.01788330078125, -0.01354217529296875, 0.006877899169921875, -0.035491943359375, -0.0367431640625, 0.037994384765625, 0.030181884765625, -0.045867919921875, -0.061279296875, -0.040069580078125, 0.0233...
sagawa/ord-uniq-canonicalized
2022-09-04T02:41:10.000Z
[ "task_categories:text2text-generation", "task_categories:translation", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "license:apache-2.0", "ord", "chemical", "reaction", "region:us" ]
sagawa
null
null
0
3
2022-09-03T04:28:23
--- annotations_creators: [] language_creators: [] license: - apache-2.0 multilinguality: - monolingual pretty_name: canonicalized ORD size_categories: - 1M<n<10M source_datasets: - original tags: - ord - chemical - reaction task_categories: - text2text-generation - translation task_ids: [] --- ### dataset description We downloaded the open-reaction-database (ORD) dataset from [here](https://github.com/open-reaction-database/ord-data). As a preprocessing step, we removed overlapping data and canonicalized the entries using RDKit. We used the following function to canonicalize the data and removed SMILES strings that RDKit cannot parse. ```python from rdkit import Chem def canonicalize(mol): mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True) return mol ``` We randomly split the preprocessed data into train, validation and test sets at an 8:1:1 ratio.
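The 8:1:1 split mentioned above can be sketched with the standard library (a minimal illustration; the seed and exact shuffling scheme are assumptions, not the authors' code):

```python
import random

def split_8_1_1(records, seed=42):
    """Shuffle records and cut them into train/validation/test at an 8:1:1 ratio."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * 0.8)
    n_valid = int(len(shuffled) * 0.1)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])

train, valid, test = split_8_1_1(range(1000))
print(len(train), len(valid), len(test))  # -> 800 100 100
```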
859
[ [ -0.037628173828125, -0.034149169921875, 0.010345458984375, 0.00893402099609375, -0.01410675048828125, -0.004425048828125, -0.02032470703125, -0.002101898193359375, 0.029571533203125, 0.0223236083984375, -0.05810546875, -0.050140380859375, -0.00971221923828125, ...
batterydata/battery-device-data-qa
2022-09-05T15:54:40.000Z
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "region:us" ]
batterydata
null
null
0
3
2022-09-05T15:30:32
--- language: - en license: - apache-2.0 task_categories: - question-answering pretty_name: 'Battery Device Question Answering Dataset' --- # Battery Device QA Data Battery device records, including anode, cathode, and electrolyte. Examples of the question answering evaluation dataset: \{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645\} \{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673\} \{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284\} # Usage ``` from datasets import load_dataset dataset = load_dataset("batterydata/battery-device-data-qa") ``` # Citation ``` @article{huang2022batterybert, title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement}, author={Huang, Shu and Cole, Jacqueline M}, journal={J. Chem. Inf. Model.}, year={2022}, doi={10.1021/acs.jcim.2c00035}, url={DOI:10.1021/acs.jcim.2c00035}, pages={DOI: 10.1021/acs.jcim.2c00035}, publisher={ACS Publications} } ```
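A small sanity check one might run over such records, assuming `start index` is the character offset of the answer within the stored context (in the card's own examples the offsets refer to the full source paragraph, which can be longer than the snippets shown; the sample record below is synthetic):

```python
def answer_matches_offset(example):
    """Return True if the answer string occurs at `start index` in the context."""
    answer = example["answer"]
    start = example["start index"]
    return example["context"][start:start + len(answer)] == answer

sample = {
    "question": "What is the cathode?",
    "answer": "Al foil",
    "context": "cast onto a clean current collector (Al foil for the cathode)",
    "start index": 37,
}
print(answer_matches_offset(sample))  # -> True
```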
1,968
[ [ -0.0316162109375, -0.055450439453125, 0.039154052734375, -0.04107666015625, -0.0244140625, 0.0206146240234375, 0.0194854736328125, -0.007110595703125, 0.017669677734375, 0.0316162109375, -0.044677734375, -0.041656494140625, 0.0215606689453125, -0.00621032714...
Mijavier/donut_custom_dataset
2022-09-05T17:15:39.000Z
[ "region:us" ]
Mijavier
null
null
0
3
2022-09-05T17:07:15
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
priyank-m/IAM_words_text_recognition
2022-09-07T13:25:48.000Z
[ "region:us" ]
priyank-m
null
null
0
3
2022-09-07T13:10:31
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
nbroad/basic_text_dataset
2022-09-08T04:21:31.000Z
[ "region:us" ]
nbroad
null
null
0
3
2022-09-08T03:43:50
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Mandi/CourtSegmentation
2022-09-09T07:38:52.000Z
[ "region:us" ]
Mandi
null
null
0
3
2022-09-08T15:20:30
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
biu-nlp/Controlled-Text-Reduction-dataset
2022-10-25T13:25:49.000Z
[ "arxiv:2210.13449", "region:us" ]
biu-nlp
The dataset contains document-summary pairs with document spans (referred to as "highlights"), indicating the "pre-selected" spans that lead to the creation of the summary. The evaluation and test datasets were constructed via controlled crowdsourcing. The train datasets were automatically generated using the summary-source proposition-level alignment model SuperPAL (Ernst et al., 2021).
""" # _CITATION =
1
3
2022-09-11T09:44:55
# Controlled Text Reduction This dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary. The task input consists of a document with pre-selected spans in it ("highlights"). The output is a text covering all and only the highlighted content. The script downloads the data from the original [GitHub repository](https://github.com/lovodkin93/Controlled_Text_Reduction). ### Format The dataset contains the following important features: * `doc_text` - the input text. * `summary_text` - the output text. * `highlight_spans` - the spans in the input text (the doc_text) that lead to the output text (the summary_text). ```json {'doc_text': 'The motion picture industry\'s most coveted award...with 32.', 'summary_text': 'The Oscar, created 60 years ago by MGM...awarded person (32).', 'highlight_spans':'[[0, 48], [50, 55], [57, 81], [184, 247], ..., [953, 975], [1033, 1081]]'} ``` where for each document-summary pair, we save the spans in the input document that lead to the summary. Notice that the dataset consists of two subsets: 1. `DUC-2001-2002` - which is further divided into 3 splits (train, validation and test). 2. `CNN-DM` - which has a single split. Citation ======== If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper: ``` @misc{https://doi.org/10.48550/arxiv.2210.13449, doi = {10.48550/ARXIV.2210.13449}, url = {https://arxiv.org/abs/2210.13449}, author = {Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Controlled Text Reduction}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Zero v1.0 Universal} } ```
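Given the format above, the highlighted content can be recovered by slicing the document with the stored spans (a sketch; it assumes the spans are `[start, end)` character offsets serialized as a JSON string, and the document below is a shortened stand-in for the card's example):

```python
import json

def extract_highlights(doc_text, highlight_spans):
    """Slice each [start, end) span out of the document text."""
    spans = json.loads(highlight_spans)
    return [doc_text[start:end] for start, end in spans]

doc = "The motion picture industry's most coveted award is the Oscar."
print(extract_highlights(doc, "[[0, 48], [56, 61]]"))
```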
1,877
[ [ -0.04205322265625, -0.05157470703125, 0.0161895751953125, 0.01329803466796875, -0.0215606689453125, 0.009490966796875, -0.034393310546875, -0.004985809326171875, 0.0300750732421875, 0.055999755859375, -0.0479736328125, -0.0648193359375, -0.045623779296875, 0...
pbwt/all-thai
2022-09-13T09:36:32.000Z
[ "region:us" ]
pbwt
null
null
1
3
2022-09-13T07:43:56
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
allenai/cochrane_sparse_oracle
2022-11-24T14:54:01.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "lang...
allenai
null
null
0
3
2022-09-14T13:16:16
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-MS^2 - extended|other-Cochrane task_categories: - summarization - text2text-generation paperswithcode_id: multi-document-summarization pretty_name: MSLR Shared Task --- This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced with the documents retrieved by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `target` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set to the original number of input documents for each example Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7014 | 0.3841 | 0.3841 | 0.3841 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7226 | 0.4023 | 0.4023 | 0.4023 | Retrieval results on the `test` set: N/A. The test set is blind, so we do not have any queries.
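The `"oracle"` top-k strategy explains why Rprec, Precision@k and Recall@k coincide in the tables above: when k is set to the number of relevant documents, all three reduce to the same quantity. A toy illustration (the document ids are made up):

```python
def precision_recall_at_oracle_k(retrieved, relevant):
    """With oracle k (k = number of relevant docs), Precision@k and Recall@k
    share the same denominator and hence the same value, which is also Rprec."""
    k = len(relevant)
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / k, hits / len(relevant)

p, r = precision_recall_at_oracle_k(["d1", "d9", "d3", "d7"], {"d1", "d3", "d5"})
print(p == r, round(p, 4))  # -> True 0.6667
```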
1,528
[ [ -0.00232696533203125, -0.0083465576171875, 0.0206756591796875, 0.0140838623046875, -0.01467132568359375, -0.0170745849609375, -0.0015268325805664062, -0.000690460205078125, 0.0308685302734375, 0.037506103515625, -0.043365478515625, -0.04180908203125, -0.05395507...
TheGreatRambler/mm2_world_levels
2022-11-11T08:03:22.000Z
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:multilingual", "license:cc-by-...
TheGreatRambler
null
null
1
3
2022-09-18T20:19:22
--- language: - multilingual license: - cc-by-nc-sa-4.0 multilinguality: - multilingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - other - object-detection - text-retrieval - token-classification - text-generation task_ids: [] pretty_name: Mario Maker 2 super world levels tags: - text-mining --- # Mario Maker 2 super world levels Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 super world levels dataset consists of 3.3 million super world levels from Nintendo's online service and adds onto `TheGreatRambler/mm2_world`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train") print(next(iter(ds))) #OUTPUT: { 'pid': '14510618610706594411', 'data_id': 19170881, 'ninjis': 23 } ``` Each row is a level, denoted by `data_id`, within a super world owned by player `pid`. Each level contains some number of ninjis `ninjis`, a rough metric for its popularity. ## Data Structure ### Data Instances ```python { 'pid': '14510618610706594411', 'data_id': 19170881, 'ninjis': 23 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |pid|string|The player ID of the user who created the super world with this level| |data_id|int|The data ID of the level| |ninjis|int|Number of ninjis shown on this level| ### Data Splits The dataset only contains a train split. <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). 
Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset. ## Considerations for Using the Data The dataset contains no harmful language or depictions.
2,278
[ [ -0.04364013671875, -0.037445068359375, 0.0168914794921875, 0.007080078125, -0.00446319580078125, 0.0025768280029296875, 0.01486968994140625, -0.04913330078125, 0.0243072509765625, 0.04840087890625, -0.07244873046875, -0.019775390625, -0.041015625, 0.01084136...
readerbench/squad_qgen
2022-09-22T11:51:14.000Z
[ "region:us" ]
readerbench
null
null
0
3
2022-09-22T11:39:35
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
ayesha08/pake-freelancer-dataset
2022-09-28T19:54:04.000Z
[ "region:us" ]
ayesha08
null
null
0
3
2022-09-28T18:42:15
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
privacy-tech-lab/ppAllTest
2022-09-30T21:48:55.000Z
[ "region:us" ]
privacy-tech-lab
null
null
0
3
2022-09-30T21:48:35
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
ihassan1/auditor-sentiment
2022-10-02T08:44:54.000Z
[ "task_categories:text-classification", "task_ids:sentiment-scoring", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "auditor", "financial", "sentiment", "markets", "region:us" ]
ihassan1
null
null
1
3
2022-10-01T15:10:00
--- annotations_creators: - expert-generated language: [] language_creators: - expert-generated license: [] multilinguality: - monolingual pretty_name: auditor-sentiment size_categories: [] source_datasets: [] tags: - auditor - financial - sentiment - markets task_categories: - text-classification task_ids: - sentiment-scoring --- # Dataset Card for Auditor Sentiment
369
[ [ -0.02252197265625, 0.03204345703125, -0.007396697998046875, 0.039276123046875, -0.060089111328125, -0.007343292236328125, 0.006320953369140625, 0.01209259033203125, 0.032257080078125, 0.0253143310546875, -0.019805908203125, -0.056884765625, -0.036102294921875, ...
Harsit/xnli2.0_french
2022-10-05T05:32:18.000Z
[ "region:us" ]
Harsit
null
null
0
3
2022-10-05T05:32:00
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Tidrael/tsl_news
2022-10-10T14:23:36.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "region:us" ]
Tidrael
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
1
3
2022-10-06T04:47:14
--- annotations_creators: [] language: - en language_creators: - machine-generated license: - apache-2.0 multilinguality: - monolingual pretty_name: business-news size_categories: - 1K<n<10K source_datasets: - original tags: [] task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Top news headlines in finance from BBC News ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Sentiment label: headlines with a score below 0 are labeled negative (0) and those above 0 positive (1) [More Information Needed] ### Data Splits The train/split ratio is 0.9/0.1 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial 
Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
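The thresholding described under Data Fields can be sketched as follows (one assumption: the card does not say how a score of exactly 0 is handled, so here it falls into the negative class):

```python
def sentiment_label(score):
    """Map a raw sentiment score to the card's label scheme: 0 = negative, 1 = positive."""
    return 1 if score > 0 else 0

print([sentiment_label(s) for s in (-0.4, 0.0, 0.7)])  # -> [0, 0, 1]
```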
2,879
[ [ -0.03631591796875, -0.03680419921875, 0.005924224853515625, 0.0225372314453125, -0.019012451171875, 0.01544952392578125, -0.0251007080078125, -0.021453857421875, 0.0443115234375, 0.0401611328125, -0.06158447265625, -0.0838623046875, -0.051971435546875, 0.006...
arbml/darija
2022-11-03T15:07:13.000Z
[ "region:us" ]
arbml
null
null
0
3
2022-10-06T13:53:55
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
HuggingFaceM4/general-pmd-synthetic-testing
2022-10-07T03:12:13.000Z
[ "license:bigscience-openrail-m", "region:us" ]
HuggingFaceM4
This dataset is designed to be used in testing. It's derived from general-pmd-10k dataset
@InProceedings{huggingface:dataset, title = {Multimodal synthetic dataset for testing / general PMD}, author={HuggingFace, Inc.}, year={2022} }
0
3
2022-10-07T01:07:24
--- license: bigscience-openrail-m --- This dataset is designed to be used in testing. It's derived from general-pmd/localized_narratives__ADE20k dataset The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`. The `unique` ones ensure uniqueness across `text` entries. The `repeat` ones are repeating the same 10 unique records: - these are useful for memory leaks debugging as the records are always the same and thus remove the record variation from the equation. The default split is `100.unique` The full process of this dataset creation, including which records were used to build it, is documented inside [general-pmd-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/general-pmd-synthetic-testing/blob/main/general-pmd-synthetic-testing.py)
854
[ [ -0.04339599609375, -0.057586669921875, 0.01132965087890625, 0.023773193359375, -0.01910400390625, -0.01215362548828125, 0.006237030029296875, 0.005985260009765625, 0.0158538818359375, 0.045867919921875, -0.071044921875, -0.04925537109375, -0.005100250244140625, ...
GuiGel/meddocan
2022-10-07T08:58:07.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:es", "license:cc-by-4.0", "clinical", "pr...
GuiGel
null
null
0
3
2022-10-07T06:31:03
--- annotations_creators: - expert-generated language: - es language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: MEDDOCAN size_categories: - 10K<n<100K source_datasets: - original tags: - clinical - protected health information - health records task_categories: - token-classification task_ids: - named-entity-recognition --- # Dataset Card for "meddocan" ## Table of Contents - [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://temu.bsc.es/meddocan/index.php/datasets/](https://temu.bsc.es/meddocan/index.php/datasets/) - **Repository:** 
[https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN) - **Paper:** [http://ceur-ws.org/Vol-2421/MEDDOCAN_overview.pdf](http://ceur-ws.org/Vol-2421/MEDDOCAN_overview.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A personal upload of the SPACCC_MEDDOCAN corpus. The tokenization is performed with a custom [spaCy](https://spacy.io/) pipeline. ### Supported Tasks and Leaderboards Named Entity Recognition ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields The data fields are the same among all splits. ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |meddocan|10312|5268|5155| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information From the [SPACCC_MEDDOCAN: Spanish Clinical Case Corpus - Medical Document Anonymization](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN) page: > This work is licensed under a Creative Commons Attribution 4.0 International License. > > You are free to: Share — copy and redistribute the material in any medium or format Adapt — remix, transform, and build upon the material for any purpose, even commercially. Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. 
> > For more information, please see https://creativecommons.org/licenses/by/4.0/ ### Citation Information ``` @inproceedings{Marimon2019AutomaticDO, title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results}, author={Montserrat Marimon and Aitor Gonzalez-Agirre and Ander Intxaurrondo and Heidy Rodriguez and Jose Lopez Martin and Marta Villegas and Martin Krallinger}, booktitle={IberLEF@SEPLN}, year={2019} } ``` ### Contributions Thanks to [@GuiGel](https://github.com/GuiGel) for adding this dataset.
6,109
[ [ -0.041595458984375, -0.037811279296875, 0.024261474609375, 0.0146484375, -0.0233154296875, 0.0076904296875, -0.027618408203125, -0.03167724609375, 0.055633544921875, 0.042877197265625, -0.0513916015625, -0.0804443359375, -0.04888916015625, 0.019622802734375,...
frankier/multiscale_rotten_tomatoes_critic_reviews
2022-11-04T12:09:34.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-scoring", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:cc0-1.0", "reviews", "ratings", "ordinal", "text", "region:us" ]
frankier
null
null
0
3
2022-10-07T12:54:12
--- language: - en language_creators: - found license: cc0-1.0 multilinguality: - monolingual size_categories: - 100K<n<1M tags: - reviews - ratings - ordinal - text task_categories: - text-classification task_ids: - text-scoring - sentiment-scoring --- Cleaned up version of the rotten tomatoes critic reviews dataset. The original is obtained from Kaggle: https://www.kaggle.com/datasets/stefanoleone992/rotten-tomatoes-movies-and-critic-reviews-dataset Data has been scraped from the publicly available website https://www.rottentomatoes.com as of 2020-10-31. The clean up process drops anything without both a review and a rating, as well as standardising the ratings onto several integer, ordinal scales. Requires the `kaggle` library to be installed, and kaggle API keys passed through environment variables or in ~/.kaggle/kaggle.json. See [the Kaggle docs](https://www.kaggle.com/docs/api#authentication). A processed version is available at https://huggingface.co/datasets/frankier/processed_multiscale_rt_critics
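The clean-up step that standardises ratings can be illustrated with a toy converter (hypothetical: the real dataset keeps several distinct ordinal scales rather than collapsing everything onto one, and the actual parsing rules live in the linked processing repo; here everything is squashed onto a single 0-4 scale):

```python
from fractions import Fraction

LETTER = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}

def to_ordinal(raw):
    """Map a raw critic rating such as '3/5', '8/10' or 'B+' onto a 0-4 integer scale."""
    raw = raw.strip().upper()
    if raw[0] in LETTER:
        return LETTER[raw[0]]
    num, denom = raw.split("/")
    # Exact rational arithmetic avoids float rounding surprises.
    return round(Fraction(num) / Fraction(denom) * 4)

print([to_ordinal(r) for r in ["3/5", "B+", "8/10", "A-"]])  # -> [2, 3, 3, 4]
```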
1,028
[ [ -0.048980712890625, -0.033203125, 0.033935546875, -0.01194000244140625, -0.0237579345703125, 0.010406494140625, -0.01271820068359375, -0.032470703125, 0.041717529296875, 0.07098388671875, -0.0718994140625, -0.036285400390625, -0.031524658203125, 0.0078735351...
argilla/sentiment-banking
2022-10-07T13:22:00.000Z
[ "region:us" ]
argilla
null
null
0
3
2022-10-07T13:18:29
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
argilla/news
2022-10-07T13:23:10.000Z
[ "region:us" ]
argilla
null
null
0
3
2022-10-07T13:23:00
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
rjac/biobert-ner-diseases-dataset
2022-11-04T11:12:13.000Z
[ "region:us" ]
rjac
null
null
3
3
2022-10-08T15:34:44
--- dataset_info: features: - name: tokens sequence: string - name: tags sequence: class_label: names: 0: O 1: B-Disease 2: I-Disease id: - 0 - 1 - 2 - name: sentence_id dtype: string splits: - name: test num_bytes: 2614997 num_examples: 5737 - name: train num_bytes: 6947635 num_examples: 15488 download_size: 1508920 dataset_size: 9562632 --- # Dataset Card for "biobert-ner-diseases-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
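Given the tag scheme above (0 = O, 1 = B-Disease, 2 = I-Disease), disease mentions can be reassembled from a tagged sentence, e.g. (a sketch with a made-up example sentence):

```python
def disease_mentions(tokens, tags):
    """Collect contiguous B-Disease/I-Disease runs into surface strings."""
    mentions, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == 1:                      # B-Disease opens a new mention
            if current:
                mentions.append(" ".join(current))
            current = [token]
        elif tag == 2 and current:        # I-Disease extends the open mention
            current.append(token)
        else:                             # O (or a stray I- with no open mention) closes it
            if current:
                mentions.append(" ".join(current))
            current = []
    if current:
        mentions.append(" ".join(current))
    return mentions

print(disease_mentions(["Patients", "with", "cystic", "fibrosis", "improved"], [0, 0, 1, 2, 0]))
# -> ['cystic fibrosis']
```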
649
[ [ -0.027557373046875, -0.01531982421875, 0.0184478759765625, 0.007659912109375, -0.0131988525390625, 0.00957489013671875, 0.031982421875, -0.01352691650390625, 0.083984375, 0.034881591796875, -0.049591064453125, -0.07135009765625, -0.046478271484375, -0.003858...
autoevaluate/autoeval-eval-KETI-AIR__korquad-v1.0-acb0d1-1711659840
2022-10-10T12:25:13.000Z
[ "autotrain", "evaluation", "region:us" ]
autoevaluate
null
null
0
3
2022-10-10T11:38:00
--- type: predictions tags: - autotrain - evaluation datasets: - KETI-AIR/korquad eval_info: task: extractive_question_answering model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt metrics: ['angelina-wang/directional_bias_amplification'] dataset_name: KETI-AIR/korquad dataset_config: v1.0 dataset_split: train col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt * Dataset: KETI-AIR/korquad * Config: v1.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@HANSOLYOO](https://huggingface.co/HANSOLYOO) for evaluating this model.
1,028
[ [ -0.039154052734375, -0.037628173828125, 0.0177154541015625, 0.01055908203125, -0.0008573532104492188, 0.0022678375244140625, 0.0084381103515625, -0.03680419921875, 0.005962371826171875, 0.0296173095703125, -0.0863037109375, 0.001983642578125, -0.0406494140625, ...
umair894/rvl_cdip_300_examples_per_class_donut
2022-10-11T07:12:23.000Z
[ "region:us" ]
umair894
null
null
0
3
2022-10-11T07:12:17
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
krm/for-ULPGL-Dissertation
2022-10-16T07:53:00.000Z
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|orange_sum", "language:fr", "license:other", "krm", "ulpgl", "orange", "reg...
krm
null
null
0
3
2022-10-13T11:01:24
---
annotations_creators:
- other
language:
- fr
language_creators:
- other
license:
- other
multilinguality:
- monolingual
pretty_name: for-ULPGL-Dissertation
size_categories:
- 10K<n<100K
source_datasets:
- extended|orange_sum
tags:
- krm
- ulpgl
- orange
task_categories:
- summarization
task_ids:
- news-articles-summarization
---

# Dataset Card for [for-ULPGL-Dissertation]

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** krm/for-ULPGL-Dissertation
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset is essentially based on the *GEM/Orange_sum* dataset, which is dedicated to the summarization of French-language articles. It consists of the abstract data from that dataset (Orange_sum), to which a number of summaries generated by **David Krame**'s **Mon Résumeur** system have been added.

### Supported Tasks and Leaderboards

Automatic summarization.

### Languages

French.

## Dataset Structure

### Data Fields

*summary* and *text* are the dataset's fields: **text** contains the texts and **summary** the corresponding summaries.

### Data Splits

As of 16 October 2022, the dataset consists of:

> **21721** training examples (split named **train**)
> **1545** validation examples (split named **validation**)
> **1581** test examples (split named **test**)

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions
3,286
[ [ -0.02911376953125, -0.026275634765625, 0.0196685791015625, 0.020050048828125, -0.005413055419921875, 0.002506256103515625, -0.018829345703125, -0.00768280029296875, 0.0190277099609375, 0.03460693359375, -0.043701171875, -0.07891845703125, -0.033355712890625, ...
jamescalam/channel-metadata
2022-10-26T01:05:55.000Z
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:afl-3.0", "youtube", "video", "video metadata", "tech", "science and tech", "region:us"...
jamescalam
null
null
1
3
2022-10-14T05:29:45
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: Tech Channels Metadata
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- youtube
- video
- video metadata
- tech
- science and tech
task_categories:
- other
task_ids: []
---

Dataset containing video metadata from a few tech channels, i.e.

* [James Briggs](https://youtube.com/c/JamesBriggs)
* [Yannic Kilcher](https://www.youtube.com/c/YannicKilcher)
* [sentdex](https://www.youtube.com/c/sentdex)
* [Daniel Bourke](https://www.youtube.com/channel/UCr8O8l5cCX85Oem1d18EezQ)
* [AI Coffee Break with Letitia](https://www.youtube.com/c/AICoffeeBreak)
* [Alex Ziskind](https://youtube.com/channel/UCajiMK_CY9icRhLepS8_3ug)
783
[ [ -0.040771484375, -0.031951904296875, 0.0192108154296875, 0.004825592041015625, -0.001583099365234375, 0.005931854248046875, -0.007518768310546875, 0.0382080078125, 0.04541015625, 0.043182373046875, -0.08184814453125, -0.05462646484375, -0.062103271484375, 0....
cjvt/sloie
2022-10-21T07:36:18.000Z
[ "task_categories:text-classification", "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "language:sl", "license:cc-by-nc-sa-4.0", "idiom-detection", ...
cjvt
SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29,400 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the Slovene Lexical Database (http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.
@article{skvorc2022mice, title = {MICE: Mining Idioms with Contextual Embeddings}, journal = {Knowledge-Based Systems}, volume = {235}, pages = {107606}, year = {2022}, issn = {0950-7051}, doi = {https://doi.org/10.1016/j.knosys.2021.107606}, url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686}, author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko}, }
0
3
2022-10-17T12:55:41
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets: []
task_categories:
- text-classification
- token-classification
task_ids: []
pretty_name: Dataset of Slovene idiomatic expressions SloIE
tags:
- idiom-detection
- multiword-expression-detection
---

# Dataset Card for SloIE

### Dataset Summary

SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the [Slovene Lexical Database](http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.

For a more detailed description of the dataset, see Škvorc et al. (2022), cited below.

### Supported Tasks and Leaderboards

Idiom detection.

### Languages

Slovenian.

## Dataset Structure

### Data Instances

A sample instance from the dataset:

```json
{
  "sentence": "Fantje regljajo v enem kotu, deklice pa svoje obrazke barvajo s pisanimi barvami.",
  "expression": "barvati kaj s črnimi barvami",
  "word_order": [11, 10, 12, 13, 14],
  "sentence_words": ["Fantje", "regljajo", "v", "enem", "kotu,", "deklice", "pa", "svoje", "obrazke", "barvajo", "s", "pisanimi", "barvami."],
  "is_idiom": ["*", "*", "*", "*", "*", "*", "*", "*", "NE", "NE", "NE", "NE", "NE"]
}
```

In this `sentence`, the words of the expression "barvati kaj s črnimi barvami" are used in a literal sense, as indicated by the "NE" annotations inside `is_idiom`. The "*" annotations indicate the words are not part of the expression.

### Data Fields

- `sentence`: raw sentence in string form - **WARNING**: this is at times slightly different from the words inside `sentence_words` (e.g., "..." here could be "." in `sentence_words`);
- `expression`: the annotated idiomatic expression;
- `word_order`: numbers indicating the positions of tokens that belong to the expression;
- `sentence_words`: words in the sentence;
- `is_idiom`: a string denoting whether each word has an idiomatic (`"DA"`), literal (`"NE"`), or ambiguous (`"NEJASEN ZGLED"`) meaning. `"*"` means that the word is not part of the expression.

## Additional Information

### Dataset Curators

Tadej Škvorc, Polona Gantar, Marko Robnik-Šikonja.

### Licensing Information

CC BY-NC-SA 4.0.

### Citation Information

```
@article{skvorc2022mice,
    title = {MICE: Mining Idioms with Contextual Embeddings},
    journal = {Knowledge-Based Systems},
    volume = {235},
    pages = {107606},
    year = {2022},
    doi = {https://doi.org/10.1016/j.knosys.2021.107606},
    url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686},
    author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko},
}
```

### Contributions

Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
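Given the field semantics above, the expression tokens of an instance can be recovered by keeping only the positions whose `is_idiom` label is not `"*"`, and the occurrence can be classified from the remaining labels. A minimal sketch using the sample instance from the card (only the two fields needed are reproduced):

```python
# Extract the expression tokens from a SloIE instance and decide whether
# the occurrence is idiomatic (all "DA"), literal (all "NE"), or ambiguous.
instance = {
    "sentence_words": ["Fantje", "regljajo", "v", "enem", "kotu,", "deklice",
                       "pa", "svoje", "obrazke", "barvajo", "s", "pisanimi",
                       "barvami."],
    "is_idiom": ["*", "*", "*", "*", "*", "*", "*", "*",
                 "NE", "NE", "NE", "NE", "NE"],
}

# Tokens labelled "*" are outside the expression; the rest carry DA/NE labels.
expr_tokens = [w for w, t in zip(instance["sentence_words"], instance["is_idiom"])
               if t != "*"]
labels = {t for t in instance["is_idiom"] if t != "*"}

meaning = ("idiomatic" if labels == {"DA"}
           else "literal" if labels == {"NE"}
           else "ambiguous")
print(expr_tokens)  # ['obrazke', 'barvajo', 's', 'pisanimi', 'barvami.']
print(meaning)      # literal
```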
3,165
[ [ -0.02020263671875, -0.048736572265625, 0.01739501953125, 0.040130615234375, -0.053680419921875, -0.01055908203125, -0.0138702392578125, -0.0239410400390625, 0.032135009765625, 0.031707763671875, -0.049591064453125, -0.06353759765625, -0.060089111328125, 0.02...
pachi107/autotrain-data-ethos-sentiments
2022-10-17T16:28:44.000Z
[ "task_categories:text-classification", "language:en", "region:us" ]
pachi107
null
null
0
3
2022-10-17T16:28:10
---
language:
- en
task_categories:
- text-classification
---

# AutoTrain Dataset for project: ethos-sentiments

## Dataset Description

This dataset has been automatically processed by AutoTrain for project ethos-sentiments.

### Languages

The BCP-47 code for the dataset's language is en.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "text": "well it's clear now why europeans can't differ niggers and shitskins from human",
    "target": 0
  },
  {
    "text": "These boys will then grow up with people making fun of them and they will then hate their parents for ruining their lives.",
    "target": 1
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "text": "Value(dtype='string', id=None)",
  "target": "ClassLabel(num_classes=2, names=['hate_speech', 'no_hate_speech'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ------------ | ------------------- |
| train | 798 |
| valid | 200 |
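The `target` feature is a `ClassLabel` whose index order is given in the card (0 = `hate_speech`, 1 = `no_hate_speech`), so decoding an integer label back to its name is a plain list lookup. A minimal sketch; the sample records are abbreviated placeholders standing in for the instances shown above, not exact dataset rows:

```python
# Decode the integer `target` field using the ClassLabel names from the card.
names = ["hate_speech", "no_hate_speech"]

samples = [
    {"text": "first sample text (abbreviated)", "target": 0},
    {"text": "second sample text (abbreviated)", "target": 1},
]

decoded = [names[s["target"]] for s in samples]
print(decoded)  # ['hate_speech', 'no_hate_speech']
```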
1,149
[ [ -0.04742431640625, -0.00853729248046875, 0.00044274330139160156, 0.0252532958984375, -0.0180816650390625, 0.0236358642578125, -0.02435302734375, -0.036712646484375, 0.0084991455078125, 0.005741119384765625, -0.042724609375, -0.04803466796875, -0.048583984375, ...